Navigation device and method relating to an audible recognition mode

A method and device are disclosed for navigation. In at least one embodiment, the method includes receiving an indication of enablement of an audible recognition mode in a navigation device; determining, subsequent to receiving an indication of enablement of the audible recognition mode and subsequent to receiving an audible input, at least one choice relating to address information of a travel destination based upon the received audible input; audibly outputting at least one determined choice relating to address information of a travel destination; and acknowledging selection of the audibly output at least one determined choice upon receiving an affirmative audible input. In at least one embodiment, the navigation device includes a processor to receive an indication of enablement of an audible recognition mode in a navigation device and to determine, subsequent to receiving an audible input, at least one choice relating to address information of a travel destination based upon the received audible input; and an output device to audibly output at least one determined choice relating to address information of a travel destination, the processor being further useable to acknowledge selection of the audibly output at least one determined choice upon receiving an affirmative audible input.

Description
CO-PENDING APPLICATIONS

The following applications are being filed concurrently with the present application. The entire contents of each of the following applications are hereby incorporated herein by reference: A NAVIGATION DEVICE AND METHOD FOR EARLY INSTRUCTION OUTPUT (Attorney docket number 06P207US01) filed on even date herewith; A NAVIGATION DEVICE AND METHOD FOR ESTABLISHING AND USING PROFILES (Attorney docket number 06P207US02) filed on even date herewith; A NAVIGATION DEVICE AND METHOD FOR ENHANCED MAP DISPLAY (Attorney docket number 06P207US03) filed on even date herewith; NAVIGATION DEVICE AND METHOD FOR PROVIDING POINTS OF INTEREST (Attorney docket number 06P207US05) filed on even date herewith; A NAVIGATION DEVICE AND METHOD FOR FUEL PRICING DISPLAY (Attorney docket number 06P057US06) filed on even date herewith; A NAVIGATION DEVICE AND METHOD FOR INFORMATIONAL SCREEN DISPLAY (Attorney docket number 06P207US06) filed on even date herewith; A NAVIGATION DEVICE AND METHOD FOR DEALING WITH LIMITED ACCESS ROADS (Attorney docket number 06P057US07) filed on even date herewith; A NAVIGATION DEVICE AND METHOD FOR TRAVEL WARNINGS (Attorney docket number 06P057US07) filed on even date herewith; A NAVIGATION DEVICE AND METHOD FOR DRIVING BREAK WARNING (Attorney docket number 06P057US07) filed on even date herewith; A NAVIGATION DEVICE AND METHOD FOR ISSUING WARNINGS (Attorney docket number 06P207US07) filed on even date herewith; A NAVIGATION DEVICE AND METHOD FOR DISPLAY OF POSITION IN TEXT READABLE FORM (Attorney docket number 06P207US08) filed on even date herewith; A NAVIGATION DEVICE AND METHOD FOR EMERGENCY SERVICE ACCESS (Attorney docket number 06P057US08) filed on even date herewith; A NAVIGATION DEVICE AND METHOD FOR PROVIDING REGIONAL TRAVEL INFORMATION IN A NAVIGATION DEVICE (Attorney docket number 06P207US09) filed on even date herewith; A NAVIGATION DEVICE AND METHOD FOR USING SPECIAL CHARACTERS IN A NAVIGATION DEVICE (Attorney docket number 06P207US09) filed on even date herewith; A NAVIGATION DEVICE AND METHOD USING A PERSONAL AREA NETWORK (Attorney docket number 06P207US10) filed on even date herewith; A NAVIGATION DEVICE AND METHOD USING A LOCATION MESSAGE (Attorney docket number 06P207US10) filed on even date herewith; A NAVIGATION DEVICE AND METHOD FOR CONSERVING POWER (Attorney docket number 06P207US11) filed on even date herewith; A NAVIGATION DEVICE AND METHOD FOR USING A TRAFFIC MESSAGE CHANNEL (Attorney docket number 06P207US13) filed on even date herewith; A NAVIGATION DEVICE AND METHOD FOR USING A TRAFFIC MESSAGE CHANNEL RESOURCE (Attorney docket number 06P207US13) filed on even date herewith; A NAVIGATION DEVICE AND METHOD FOR QUICK OPTION ACCESS (Attorney docket number 06P207US15) filed on even date herewith; A NAVIGATION DEVICE AND METHOD FOR DISPLAYING A RICH CONTENT DOCUMENT (Attorney docket number 06P207US27) filed on even date herewith.

PRIORITY STATEMENT

The present application hereby claims priority under 35 U.S.C. §119(e) on each of U.S. Provisional Patent Application No. 60/879,523 filed Jan. 10, 2007, 60/879,549 filed Jan. 10, 2007, 60/879,553 filed Jan. 10, 2007, 60/879,577 filed Jan. 10, 2007, and 60/879,599 filed Jan. 10, 2007, the entire contents of each of which are hereby incorporated herein by reference.

FIELD

The present application generally relates to navigation methods and devices.

BACKGROUND

Navigation devices were traditionally utilized mainly in the areas of vehicle use, such as on cars, motorcycles, trucks, boats, etc. Alternatively, if such navigation devices were portable, they were further transferable between vehicles and/or useable outside the vehicle, for foot travel for example.

These devices are typically tailored to produce a route of travel based upon an initial position of the navigation device and a selected/input travel destination (end position), noting that the initial position could be entered into the device, but is traditionally calculated via GPS positioning from a GPS receiver within the navigation device. To aid in navigation of the route, instructions are output along the route to a user of the navigation device. These instructions may be at least one of audible and visual.

SUMMARY

The inventors discovered that users of navigation devices may have some difficulty in operating and viewing touch panel screens. Thus, the inventors discovered that users desire at least limited hands-free access, especially when using the navigation device in a vehicle. As such, the inventors developed methods which allow hands-free, or at least partially hands-free, access by utilizing an audible recognition mode.

In at least one embodiment of the present application, a method includes receiving an indication of enablement of an audible recognition mode in a navigation device; determining, subsequent to receiving an indication of enablement of the audible recognition mode and subsequent to receiving an audible input, at least one choice relating to address information of a travel destination based upon the received audible input; audibly outputting at least one determined choice relating to address information of a travel destination; and acknowledging selection of the audibly output at least one determined choice upon receiving an affirmative audible input.

In at least one embodiment of the present application, a navigation device includes a processor to receive an indication of enablement of an audible recognition mode in a navigation device and to determine, subsequent to receiving an audible input, at least one choice relating to address information of a travel destination based upon the received audible input; and an output device to audibly output at least one determined choice relating to address information of a travel destination, the processor being further useable to acknowledge selection of the audibly output at least one determined choice upon receiving an affirmative audible input.

In at least one other embodiment of the present application, a method includes receiving an indication of enablement of an audible recognition mode in a navigation device; and displaying on an integrated input and display device, subsequent to receiving an indication of enablement of the audible recognition mode, an indication as to whether a volume of a received audible input is within an acceptable range, louder than the acceptable range, or softer than the acceptable range.

In at least one other embodiment of the present application, a navigation device includes a processor to receive an indication of enablement of an audible recognition mode in a navigation device; and an integrated input and display device to display, subsequent to the processor receiving an indication of enablement of the audible recognition mode, an indication as to whether a volume of a received audible input is within an acceptable range, louder than the acceptable range, or softer than the acceptable range.

In at least one other embodiment of the present application, a method includes receiving an indication of enablement of an audible recognition mode in a navigation device; receiving additional information from a source other than a user of the navigation device; formulating a question, answerable by a yes or no answer from the user, based upon the received additional information; and outputting the formulated question to the user.

In at least one other embodiment of the present application, a navigation device includes a processor to receive an indication of enablement of an audible recognition mode, to receive additional information from a source other than a user of the navigation device, and to formulate a question, answerable by a yes or no answer from the user, based upon the received additional information; and an output device to output the formulated question to the user.

BRIEF DESCRIPTION OF THE DRAWINGS

The present application will be described in more detail below by using example embodiments, which will be explained with the aid of the drawings, in which:

FIG. 1 illustrates an example view of a Global Positioning System (GPS);

FIG. 2 illustrates an example block diagram of electronic components of a navigation device of an embodiment of the present application;

FIG. 3 illustrates an example block diagram of a server, navigation device and connection therebetween of an embodiment of the present application;

FIGS. 4A and 4B are perspective views of an implementation of an embodiment of the navigation device;

FIG. 5 illustrates a flow chart of an embodiment of a method of the present application;

FIGS. 6A-D are examples of audible recognition mode icons for display in an embodiment of the present application;

FIG. 7 illustrates an example chart of an embodiment of the present application;

FIG. 8 illustrates a flow chart of an embodiment of a method of the present application; and

FIG. 9 illustrates a flow chart of an embodiment of a method of the present application.

DETAILED DESCRIPTION OF THE EXAMPLE EMBODIMENTS

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present invention. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “includes” and/or “including”, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

In describing example embodiments illustrated in the drawings, specific terminology is employed for the sake of clarity. However, the disclosure of this patent specification is not intended to be limited to the specific terminology so selected and it is to be understood that each specific element includes all technical equivalents that operate in a similar manner.

Referencing the drawings, wherein like reference numerals designate identical or corresponding parts throughout the several views, example embodiments of the present patent application are hereafter described. Like numbers refer to like elements throughout. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.

FIG. 1 illustrates an example view of a Global Positioning System (GPS), usable by navigation devices, including the navigation device of embodiments of the present application. Such systems are known and are used for a variety of purposes. In general, GPS is a satellite-radio based navigation system capable of determining continuous position, velocity, time, and in some instances direction information for an unlimited number of users.

Formerly known as NAVSTAR, the GPS incorporates a plurality of satellites which orbit the earth in extremely precise orbits. Based on these precise orbits, GPS satellites can relay their location to any number of receiving units.

The GPS system is implemented when a device, specially equipped to receive GPS data, begins scanning radio frequencies for GPS satellite signals. Upon receiving a radio signal from a GPS satellite, the device determines the precise location of that satellite via one of a plurality of different conventional methods. The device will continue scanning, in most instances, for signals until it has acquired at least three different satellite signals (noting that position is not normally determined with only two signals, but can be, using other triangulation techniques). Implementing geometric triangulation, the receiver utilizes the three known positions to determine its own two-dimensional position relative to the satellites. This can be done in a known manner. Additionally, acquiring a fourth satellite signal allows the receiving device to calculate its three-dimensional position by the same geometrical calculation, in a known manner. The position and velocity data can be updated in real time on a continuous basis by an unlimited number of users.
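
By way of a non-limiting numerical illustration, the two-dimensional case can be sketched in Python as follows. This simplified, hypothetical fragment solves planar trilateration from three known transmitter positions and measured ranges; an actual GPS receiver solves a more involved system that also estimates receiver clock bias.

```python
import numpy as np

def trilaterate_2d(p1, p2, p3, r1, r2, r3):
    """Solve for the receiver position (x, y) from three known
    transmitter positions and measured ranges to each.

    Subtracting the circle equation at p1 from those at p2 and p3
    eliminates the quadratic terms, leaving a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2,
                  r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2])
    return np.linalg.solve(A, b)

# Receiver actually at (3, 4); ranges measured from three fixed points.
print(trilaterate_2d((0, 0), (10, 0), (0, 10),
                     5.0, np.hypot(-7, 4), np.hypot(3, -6)))  # -> [3. 4.]
```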

As shown in FIG. 1, the GPS system is denoted generally by reference numeral 100. A plurality of satellites 120 are in orbit about the earth 124. The orbit of each satellite 120 is not necessarily synchronous with the orbits of other satellites 120 and, in fact, is likely asynchronous. A GPS receiver 140, usable in embodiments of navigation devices of the present application, is shown receiving spread spectrum GPS satellite signals 160 from the various satellites 120.

The spread spectrum signals 160, continuously transmitted from each satellite 120, utilize a highly accurate frequency standard accomplished with an extremely accurate atomic clock. Each satellite 120, as part of its data signal transmission 160, transmits a data stream indicative of that particular satellite 120. It is appreciated by those skilled in the relevant art that the GPS receiver device 140 generally acquires spread spectrum GPS satellite signals 160 from at least three satellites 120 for the GPS receiver device 140 to calculate its two-dimensional position by triangulation. Acquisition of an additional signal, resulting in signals 160 from a total of four satellites 120, permits the GPS receiver device 140 to calculate its three-dimensional position in a known manner.

FIG. 2 illustrates an example block diagram of electronic components of a navigation device 200 of an embodiment of the present application, in block component format. It should be noted that the block diagram of the navigation device 200 is not inclusive of all components of the navigation device, but is only representative of many example components.

The navigation device 200 is located within a housing (not shown). The housing includes a processor 210 connected to an input device 220 and a display screen 240. The input device 220 can include a keyboard device, voice input device, touch panel and/or any other known input device utilized to input information; and the display screen 240 can include any type of display screen such as an LCD display, for example. In at least one embodiment of the present application, the input device 220 and display screen 240 are integrated into an integrated input and display device, including a touchpad or touchscreen input wherein a user need only touch a portion of the display screen 240 to select one of a plurality of display choices or to activate one of a plurality of virtual buttons.

In addition, other types of output devices 241 can also be included, such as an audible output device, for example. As the output device 241 can produce audible information for a user of the navigation device 200, it is equally understood that the input device 220 can also include a microphone and software for receiving input voice commands.

In the navigation device 200, the processor 210 is operatively connected to and set to receive input information from the input device 220 via a connection 225, and operatively connected to at least one of the display screen 240 and the output device 241, via output connections 245, to output information thereto. Further, the processor 210 is operatively connected to memory 230 via connection 235 and is further adapted to receive/send information from/to input/output (I/O) ports 270 via connection 275, wherein the I/O port 270 is connectible to an I/O device 280 external to the navigation device 200. The external I/O device 280 may include, but is not limited to, an external listening device, such as an earpiece for example. The connection to the I/O device 280 can further be a wired or wireless connection to any other external device, such as a car stereo unit for hands-free and/or voice-activated operation for example, an earpiece or headphones, and/or a mobile phone, wherein the mobile phone connection may be used to establish a data connection between the navigation device 200 and the internet or any other network, and/or to establish a connection to a server via the internet or some other network, for example.

The navigation device 200, in at least one embodiment, may establish a "mobile" network connection with the server 302 via a mobile device 400 (such as a mobile phone, PDA, and/or any device with mobile phone technology) establishing a digital connection (such as a digital connection via known Bluetooth technology for example). Thereafter, through its network service provider, the mobile device 400 can establish a network connection (through the internet for example) with a server 302. As such, a "mobile" network connection is established between the navigation device 200 (which can be, and oftentimes is, mobile as it travels alone and/or in a vehicle) and the server 302 to provide a "real-time" or at least very "up to date" gateway for information.

The establishing of the network connection between the mobile device 400 (via a service provider) and another device such as the server 302, using the internet 410 for example, can be done in a known manner. This can include use of TCP/IP layered protocol for example. The mobile device 400 can utilize any number of communication standards such as CDMA, GSM, WAN, etc.
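
As a non-limiting sketch of such a connection at the application level, a TCP/IP link of the kind described above can be opened in a few lines of Python; the host name, port, and request string here are hypothetical placeholders rather than an actual interface of the described system.

```python
import socket

# Hypothetical endpoint; in the described system this link would be
# routed through the mobile device 400 and its network service provider.
HOST, PORT = "server.example.com", 8080

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    sock.sendall(b"GET /traffic-update\n")   # request up-to-date data
    reply = sock.recv(4096)                  # response from the server
```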

As such, an internet connection may be utilized which is achieved via a data connection, via a mobile phone or mobile phone technology within the navigation device 200 for example. For this connection, an internet connection between the server 302 and the navigation device 200 is established. This can be done, for example, through a mobile phone or other mobile device and a GPRS (General Packet Radio Service) connection (a GPRS connection is a high-speed data connection for mobile devices provided by telecom operators, and is one method of connecting to the internet).

The navigation device 200 can further complete a data connection with the mobile device 400, and eventually with the internet 410 and server 302, via existing Bluetooth technology for example, in a known manner, wherein the data protocol can utilize any number of standards, such as the data protocol standard for the GSM standard, for example.

The navigation device 200 may include its own mobile phone technology within the navigation device 200 itself (including an antenna for example, wherein the internal antenna of the navigation device 200 can further alternatively be used). The mobile phone technology within the navigation device 200 can include internal components as specified above, and/or can include an insertable card, complete with necessary mobile phone technology and/or an antenna for example. As such, mobile phone technology within the navigation device 200 can similarly establish a network connection between the navigation device 200 and the server 302, via the internet 410 for example, in a manner similar to that of any mobile device 400.

For GPRS phone settings, in order for the Bluetooth-enabled navigation device to work correctly with the ever-changing spectrum of mobile phone models, manufacturers, etc., model- and manufacturer-specific settings may be stored on the navigation device 200, for example. The data stored for this information can be updated in a manner discussed in any of the embodiments, previous and subsequent.

FIG. 2 further illustrates an operative connection between the processor 210 and an antenna/receiver 250 via connection 255, wherein the antenna/receiver 250 can be a GPS antenna/receiver for example. It will be understood that the antenna and receiver designated by reference numeral 250 are combined schematically for illustration, but that the antenna and receiver may be separately located components, and that the antenna may be a GPS patch antenna or helical antenna for example.

Further, it will be understood by one of ordinary skill in the art that the electronic components shown in FIG. 2 are powered by power sources (not shown) in a conventional manner. As will be understood by one of ordinary skill in the art, different configurations of the components shown in FIG. 2 are considered within the scope of the present application. For example, in one embodiment, the components shown in FIG. 2 may be in communication with one another via wired and/or wireless connections and the like. Thus, the scope of the navigation device 200 of the present application includes a portable or handheld navigation device 200.

In addition, the portable or handheld navigation device 200 of FIG. 2 can be connected or “docked” in a known manner to a motorized vehicle such as a car or boat for example. Such a navigation device 200 is then removable from the docked location for portable or handheld navigation use.

FIG. 3 illustrates an example block diagram of a server 302 and a navigation device 200 of an embodiment of the present application, communicating via a generic communications channel 318. The server 302 and navigation device 200 can communicate when a connection via communications channel 318 is established between the server 302 and the navigation device 200 (noting that such a connection can be a data connection via a mobile device, a direct connection via a personal computer and the internet, etc.).

The server 302 includes, in addition to other components which may not be illustrated, a processor 304 operatively connected to a memory 306 and further operatively connected, via a wired or wireless connection 314, to a mass data storage device 312. The processor 304 is further operatively connected to transmitter 308 and receiver 310, to transmit and receive information to and from the navigation device 200 via communications channel 318. The signals sent and received may include data, communication, and/or other propagated signals. The transmitter 308 and receiver 310 may be selected or designed according to the communication requirements and communication technology used in the communication design for the navigation device 200. Further, it should be noted that the functions of transmitter 308 and receiver 310 may be combined into a single transceiver.

Server 302 is further connected to (or includes) a mass storage device 312, noting that the mass storage device 312 may be coupled to the server 302 via communication link 314. The mass storage device 312 contains a store of navigation data and map information, and can again be a separate device from the server 302 or can be incorporated into the server 302.

The navigation device 200 is adapted to communicate with the server 302 through communications channel 318, and includes processor, memory, etc. as previously described with regard to FIG. 2, as well as transmitter 320 and receiver 322 to send and receive signals and/or data through the communications channel 318, noting that these devices can further be used to communicate with devices other than server 302. Further, the transmitter 320 and receiver 322 are selected or designed according to communication requirements and communication technology used in the communication design for the navigation device 200 and the functions of the transmitter 320 and receiver 322 may be combined into a single transceiver.

Software stored in server memory 306 provides instructions for the processor 304 and allows the server 302 to provide services to the navigation device 200. One service provided by the server 302 involves processing requests from the navigation device 200 and transmitting navigation data from the mass data storage 312 to the navigation device 200. According to at least one embodiment of the present application, another service provided by the server 302 includes processing the navigation data using various algorithms for a desired application and sending the results of these calculations to the navigation device 200.
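
By way of a non-limiting illustration of the first of these services, a minimal request handler might be sketched as follows; the message format and the in-memory stand-in for the mass data storage 312 are hypothetical.

```python
import socketserver

# Hypothetical in-memory stand-in for the mass data storage 312.
MAP_TILES = {b"tile:52.37,4.89": b"<binary map data>"}

class NavigationRequestHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Read one request line from the navigation device and reply
        # with the matching navigation data, if any.
        key = self.rfile.readline().strip()     # e.g. b"tile:52.37,4.89"
        self.wfile.write(MAP_TILES.get(key, b"unknown tile"))

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 8080), NavigationRequestHandler) as srv:
        srv.serve_forever()
```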

The communication channel 318 generically represents the propagating medium or path that connects the navigation device 200 and the server 302. According to at least one embodiment of the present application, both the server 302 and navigation device 200 include a transmitter for transmitting data through the communication channel and a receiver for receiving data that has been transmitted through the communication channel.

The communication channel 318 is not limited to a particular communication technology. Additionally, the communication channel 318 is not limited to a single communication technology; that is, the channel 318 may include several communication links that use a variety of technologies. For example, according to at least one embodiment, the communication channel 318 can be adapted to provide a path for electrical, optical, and/or electromagnetic communications, etc. As such, the communication channel 318 includes, but is not limited to, one or a combination of the following: electric circuits, electrical conductors such as wires and coaxial cables, fiber optic cables, converters, radio-frequency (RF) waves, the atmosphere, empty space, etc. Furthermore, according to at least one embodiment, the communication channel 318 can include intermediate devices such as routers, repeaters, buffers, transmitters, and receivers, for example.

In at least one embodiment of the present application, for example, the communication channel 318 includes telephone and computer networks. Furthermore, in at least one embodiment, the communication channel 318 may be capable of accommodating wireless communication such as radio frequency, microwave frequency, infrared communication, etc. Additionally, according to at least one embodiment, the communication channel 318 can accommodate satellite communication.

The communication signals transmitted through the communication channel 318 include, but are not limited to, signals as may be required or desired for given communication technology. For example, the signals may be adapted to be used in cellular communication technology such as Time Division Multiple Access (TDMA), Frequency Division Multiple Access (FDMA), Code Division Multiple Access (CDMA), Global System for Mobile Communications (GSM), etc. Both digital and analogue signals can be transmitted through the communication channel 318. According to at least one embodiment, these signals may be modulated, encrypted and/or compressed signals as may be desirable for the communication technology.

The mass data storage 312 includes sufficient memory for the desired navigation applications. Examples of the mass data storage 312 may include magnetic data storage media such as hard drives for example, optical storage media such as CD-ROMs for example, charged data storage media such as flash memory for example, molecular memory, etc.

According to at least one embodiment of the present application, the server 302 includes a remote server accessible by the navigation device 200 via a wireless channel. According to at least one other embodiment of the application, the server 302 may include a network server located on a local area network (LAN), wide area network (WAN), virtual private network (VPN), etc.

According to at least one embodiment of the present application, the server 302 may include a personal computer such as a desktop or laptop computer, and the communication channel 318 may be a cable connected between the personal computer and the navigation device 200. Alternatively, a personal computer may be connected between the navigation device 200 and the server 302 to establish an internet connection between the server 302 and the navigation device 200. Alternatively, a mobile telephone or other handheld device may establish a wireless connection to the internet, for connecting the navigation device 200 to the server 302 via the internet.

The navigation device 200 may be provided with information from the server 302 via information downloads which may be periodically updated upon a user connecting navigation device 200 to the server 302 and/or may be more dynamic upon a more constant or frequent connection being made between the server 302 and navigation device 200 via a wireless mobile connection device and TCP/IP connection for example. For many dynamic calculations, the processor 304 in the server 302 may be used to handle the bulk of the processing needs; however, processor 210 of navigation device 200 can also handle much processing and calculation, oftentimes independent of a connection to a server 302.

The mass storage device 312 connected to the server 302 can include volumes more cartographic and route data than is able to be maintained on the navigation device 200 itself, including maps, etc. The server 302 may handle, for example, the majority of the processing needs of a navigation device 200 traveling along a route, using a set of processing algorithms. Further, these algorithms, in conjunction with the cartographic and route data stored in memory 312, can operate on signals (e.g. GPS signals) originally received by the navigation device 200.

As indicated above in FIG. 2 of the application, a navigation device 200 of an embodiment of the present application includes a processor 210, an input device 220, and a display screen 240. In at least one embodiment, the input device 220 and display screen 240 are integrated into an integrated input and display device to enable both input of information (via direct input, menu selection, etc.) and display of information through a touch panel screen, for example. Such a screen may be a touch input LCD screen, for example, as is well known to those of ordinary skill in the art. Further, the navigation device 200 can also include any additional input device 220 and/or any additional output device 241, such as audio input/output devices for example.

FIGS. 4A and 4B are perspective views of an actual implementation of an embodiment of the navigation device 200. As shown in FIG. 4A, the navigation device 200 may be a unit that includes an integrated input and display device 290 (a touch panel screen for example) and the other components of FIG. 2 (including but not limited to the internal GPS receiver 250, the microprocessor 210, a power supply, memory systems 230, etc.).

The navigation device 200 may sit on an arm 292, which itself may be secured to a vehicle dashboard/window/etc. using a large suction cup 294. This arm 292 is one example of a docking station to which the navigation device 200 can be docked.

As shown in FIG. 4B, the navigation device 200 can be docked or otherwise connected to an arm 292 of the docking station by snap connecting the navigation device 200 to the arm 292 for example (this is only one example, as other known alternatives for connection to a docking station are within the scope of the present application). The navigation device 200 may then be rotatable on the arm 292, as shown by the arrow of FIG. 4B. To release the connection between the navigation device 200 and the docking station, a button on the navigation device 200 may be pressed, for example (this is only one example, as other known alternatives for disconnection from a docking station are within the scope of the present application).

In at least one embodiment of the present application, a method includes receiving an indication of enablement of an audible recognition mode in a navigation device 200; determining, subsequent to receiving an indication of enablement of the audible recognition mode and subsequent to receiving an audible input, at least one choice relating to address information of a travel destination based upon the received audible input; audibly outputting at least one determined choice relating to address information of a travel destination; and acknowledging selection of the audibly output at least one determined choice upon receiving an affirmative audible input.

In at least one embodiment of the present application, a navigation device 200 includes a processor 210 to receive an indication of enablement of an audible recognition mode in a navigation device 200 and to determine, subsequent to receiving an audible input, at least one choice relating to address information of a travel destination based upon the received audible input; and an output device 241 to audibly output at least one determined choice relating to address information of a travel destination, the processor 210 being further useable to acknowledge selection of the audibly output at least one choice upon receiving an affirmative audible input.

FIG. 5 illustrates a flowchart of an example embodiment of the present application. In the embodiment shown in FIG. 5, it is first determined in step S2 whether or not an audible recognition mode has been enabled in the navigation device. For example, as shown in FIG. 6A, an icon can be displayed on an integrated input and display device 290 of the navigation device 200. Such an icon can be displayed in an initial or subsequent menu for selection prior to input/selection of a destination for establishing a route of travel, and/or can be displayed along with map information, for example, during use of the navigation device in a navigation mode. This icon can include just a pictorial illustration, such as the lips shown in FIG. 6A, and/or can include text indicating that the button corresponds to an audible recognition mode, such as audible speech recognition (ASR). Upon a processor 210 of the navigation device 200 receiving an indication of selection of such an icon as shown in FIG. 6A, an audible recognition mode may be enabled by the processor 210.

An audible recognition mode can include the processor 210 working in conjunction with an ASR engine or module. Such an ASR engine or module is a software engine that, once an audible recognition mode is enabled as explained above, can be loaded with grammatical rules in a language of the country of the user of the navigation device 200 (or a language selected by the user, for example). Thus, a user of the navigation device 200 will typically enter/select a country in which the user is located, and the language of that country can then be selected, input or matched by the processor 210. Thereafter, the ASR engine can be loaded with grammatical rules from memory 230, upon an audible recognition mode being enabled. The ASR engine can then use the language corresponding to the chosen map to recognize geographical names (city and street names, for example) and the currently user-selected/enabled language to recognize common speech. For example, the system may be set up to enable recognition of complex speech from the user, or may be limited to only simple replies such as yes, no, done, back, and/or numerical entries such as 1, 2, 3, etc.
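
A non-limiting sketch of this loading step is given below; the file names, data layout, and function are hypothetical illustrations rather than the actual ASR engine interface.

```python
import json

# Hypothetical on-device layout: one grammar file per supported language.
GRAMMAR_FILES = {"en-US": "grammar_en_us.json", "nl-NL": "grammar_nl_nl.json"}

def load_asr_grammar(language, reply_mode="simple"):
    """Load grammatical rules for the selected language from memory.

    In "simple" mode, common-speech recognition is restricted to short
    replies, easing the recognition task as described above."""
    with open(GRAMMAR_FILES[language]) as f:
        grammar = json.load(f)          # geographical names, common words
    if reply_mode == "simple":
        grammar["common_speech"] = ["yes", "no", "done", "back",
                                    "1", "2", "3", "4", "5", "6"]
    return grammar
```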

The ASR engine or module is one which enables a speech interface between the user and the navigation device 200. Such a module is typically not usable in a portable navigation device 200 such as that shown in FIGS. 2-4B of the present application, but embodiments of the present application improve or even optimize memory management between the processor 210 and memory devices 230 for example, as well as data structures, to allow the ASR module to handle and recognize input information. Essentially, all or most available memory in the memory device(s) 230 of the navigation device 200 are allocated to the ASR module during speech recognition; namely upon the audible recognition mode being enabled in step S2 of FIG. 5, while other processes of the processor 210 are put on hold. Of course, during use of the navigation device 200 in a navigation mode, certain processes devoted to display of navigation information and output of navigation instructions must continue, thus sometimes slowing down operation of the ASR module.

In one example embodiment of the present application, the ASR module is primarily utilized in selecting address information of a travel destination based upon received audible input, and thus typically operates at a time when the navigation device 200 is not in use in a navigation mode. Upon the navigation device 200 operating in a navigation mode, another embodiment of the present application involves formulating simple questions, answerable by a yes/no answer (for example) from the user, to thereby enable processing capacity to be allocated to the navigation mode, with only a small amount of processing capacity needed in the ASR module to recognize such yes/no answers from the user of the navigation device 200. Thus, although the process shown in FIG. 5 can operate during use of the navigation device 200 in the navigation mode (upon sufficient memory 230 being included in the navigation device 200 and/or upon the ASR module being used to recognize limited yes/no input information, for example), the operation shown in FIG. 5 typically occurs before the start of the vehicle in which the navigation device 200 is located, namely before a travel destination is input into the navigation device 200 and before a travel route is determined.
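
By way of a non-limiting illustration of such question formulation, the following Python sketch turns a hypothetical traffic message (information received from a source other than the user) into a question answerable by yes or no.

```python
def formulate_question(traffic_message):
    """Turn information received from a source other than the user
    (here a hypothetical traffic message) into a yes/no question."""
    return (f"There is a {traffic_message['delay_min']}-minute delay on "
            f"{traffic_message['road']}. Shall I plan a detour?")

# Example: only "yes" or "no" needs to be recognized in reply.
question = formulate_question({"road": "A10", "delay_min": 20})
```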

Referring back to FIG. 5, in step S2, if the audible recognition mode is not enabled, the system cycles back to repeat step S2. However, if the audible recognition mode is enabled, by the processor 210 receiving an indication of selection of the “talk to me” icon shown in FIG. 6A for example, language and grammar information is loaded into the ASR module of the navigation device 200 from memory 230 and the navigation device 200 merely awaits an audible input in step S4. If no audible input is received, the system merely cycles back to repeat step S4 until an audible input is received.

The ASR module is typically utilized to recognize speech information from different users. Such information is typically unpredictable, and therefore cannot be stored in memory 230. The ASR module or engine operates in conjunction with the processor 210 to convert received speech information to a sequence of phonemes in a known manner, and then works with processor 210 to match existing grammar of stored cities, street names, etc., to the converted sequence of phonemes.

In step S6, if an audible input is received, the processor 210 works with the ASR module to convert the input speech to phonemes and to compare the sequence of phonemes to stored information in memory 230 to determine at least one choice relating to address information of the travel destination based upon the received audible input. For example, in at least one embodiment, the at least one choice relating to address information of a travel destination can include a city name. Accordingly, a user may audibly output a name of a city as part of the address information of the travel destination, wherein the initial input of the city could be prompted by the navigation device 200 displaying a request, such as “In which city?” for example, to enter travel destination information. Upon receipt of this audible information, the processor 210 and ASR module process the phonemes as described above and compare this information to stored cities in memory 230 to determine at least one choice relating to the input audible sound, if possible. If nothing was recognized, the navigation device 200 may return to a screen to prompt input of the city or other address information, and may or may not flash or otherwise display a message “input not recognized”, for example. As will be explained in another embodiment of the present application, a sound indicator can also be displayed to a user indicating whether or not the volume of audible input is within an acceptable range, louder than an acceptable range, or softer than an acceptable range, for example.

If at least one address information choice (such as a city for example) was determinable in step S6, the process proceeds to step S8, wherein at least one determined choice relating to address information of a travel destination is audibly output. For example, instead of the system merely guessing that an audible input was received correctly, the processor 210 instead directs audible output of at least one determined choice relating to address information of a travel destination in step S8. Thereafter, in step S10, the processor 210 waits to see if an affirmative audible input is received. If so, the processor 210 and ASR module can then acknowledge that a correct determination occurred, and can thus acknowledge selection of the audibly output at least one determined choice upon receiving and recognizing an affirmative audible input, such as a "yes" for example.

Accordingly, instead of the processor 210 and ASR module merely guessing that an audible input was correct, at least one determined choice relating to address information is first audibly output, and selection of the at least one determined choice is not acknowledged until an affirmative audible input is received.

As stated in step S6, upon receipt of an audible input, at least one address information choice for the travel destination is determined, such as a city name for example. In at least one example embodiment of the present application, however, a plurality of “N-best” choices (not just one choice, noting that N can be any number, such as six for example) are recognized by the processor 210. Essentially, the processor 210, in conjunction with the ASR module, tries to best determine, from the phonemes of the audible input, a name of a city (in this first instance of input of address information for example). The processor 210 scans or reviews all the various cities stored in memory 230 for a match. The processor 210 then ranks the best possible matches such that the best possible match will be audibly output to the user of the navigation device 200 as the at least one determined choice relating to address information of the travel destination.
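
A non-limiting sketch of this "N-best" ranking is given below, assuming for illustration that phoneme sequences are represented as plain character strings and using a generic similarity measure in place of the engine's actual scoring.

```python
from difflib import SequenceMatcher

STORED_CITIES = ["salt lake city", "salem", "sacramento", "san antonio",
                 "springfield", "staunton", "seattle", "san diego"]

def n_best(phoneme_string, n=6):
    """Scan the stored city names and return the n best matches,
    ranked so that the most likely match comes first."""
    return sorted(STORED_CITIES,
                  key=lambda city: SequenceMatcher(None, phoneme_string,
                                                   city).ratio(),
                  reverse=True)[:n]

# The top entry would be audibly output; all n entries may be displayed.
choices = n_best("salt lake sity")
```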

Accordingly, selection of the audibly output at least one determined choice can be acknowledged upon affirmative audible input in step S10. However, as "N-best" cities may be initially determined, the processor 210 can also direct the navigation device 200 to display an "N-best" list of choices, such as the N-best matches of city names determined by the processor 210 for example, on the integrated input and display device 290. The best possible match based upon the audible input received from the user may be audibly output and may further be displayed visually at the top of the "N-best" list (as the number one choice in the displayed list). Thereafter, the next best choices can be visually displayed to the user in step S14 as numbered choices, such as choices two through six for example. Thereafter, a visually output choice may be selected in step S16, via display and subsequent input through the integrated input and display device 290, for example. If selected, selection can be acknowledged in step S20 of FIG. 5, by the processor 210 for example.

Accordingly, the processor 210 and ASR module may not only be used to determine one single choice, but can be used to determine a plurality of choices relating to the address information of the travel destination. Each of the plurality of choices may be visually output, while only one choice may be audibly output, for example. The plurality of choices may be visually output for selection on the integrated input and display device 290 of the navigation device 200. Each of these choices, such as a list of cities sounding most like the audible input for example, can be determined and displayed, and is selectable by at least one of touch panel and audible input. Further, the audibly output at least one choice is also selectable via receipt of an indication of touch panel input. In addition, each of the plurality of determined choices may be selectable via receipt of an indication of a touch panel input, and/or by audible input of a number corresponding to a displayed choice (for example, a user saying "two" to select the second displayed choice).

As one non-limiting example, if the city "Salt Lake City" is audibly output by a user of the navigation device 200, the processor 210 and ASR module can determine an "N-best" list of cities to be audibly and visually output. The first city in the displayed list may be "Salt Lake City", and may be both audibly output and visually output on an integrated input and display device 290 of the navigation device, for example. Further, the other "N-best" cities can be determined by the processor 210 and the ASR module, including, for example, five other cities such as Salem, Sacramento, San Antonio, Springfield, and Staunton. In one example embodiment of the present application, the "N-best" list includes a set number of choices, such as six choices for example. These six choices (the number one choice and the five other N-best cities) can then be displayed to the user for audible or touch panel input/selection. Accordingly, if all six choices are displayed in order on the touch panel of the integrated input and display device 290, the user may merely touch and thereby select one of the six choices. Alternatively, as the first choice "Salt Lake City" is audibly output to the user, the user can acknowledge selection of the audibly output choice by issuing an affirmative audible input. Alternatively, the user can select any one of the other five displayed choices (or even the first choice, for example) by merely stating the number corresponding to the particular choice, such as "6" representing the sixth choice of "Staunton."

By utilizing an affirmative audible input, and/or an audible input of only one of six numerical values, the processor 210 increases the likelihood of confirming a user's selection and thereby can adequately acknowledge selection of a particular choice by the user.

Thereafter, once a user selects a city name and such selection is acknowledged in step S12 or S20, a user can issue another audible output, for input/receipt by the processor 210 and ASR module, corresponding to a street name for example. The processor 210 and ASR module can then determine at least one street name, subsequent to selection of a city name and subsequent to receiving the further audible output. Again, the processor 210 and ASR module may determine an "N-best" list of street names, for subsequent audible and/or visual output to the user of the navigation device 200 and subsequent selection thereof. Selection can be done in the same manner as discussed previously with regard to city names.

Finally, a user can audibly output a number corresponding to the last element of a travel destination address, for input/receipt by the processor 210 and ASR module, which can be recognized and which can be used to determine an “N-best” list in the same manner as previously stated with regard to the city and street names. Alternatively, the user may merely enter the numerical element (number) of the address of the travel destination. As such, an entire address of a travel destination can be input and can thereafter be used by the processor 210, to determine a travel route (in conjunction with a GPS signal indicating current location of the navigation device 200 and stored map information in memory 230, for example).
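
Putting the preceding steps together, the address-entry dialog of FIG. 5 might be sketched as the following self-contained Python loop. The helper functions, the two-level database, and the omission of the house-number level are hypothetical simplifications of the processor 210/ASR module operations described above.

```python
from difflib import SequenceMatcher

DATABASE = {                       # hypothetical stored address grammar
    "city":   ["salt lake city", "salem", "springfield"],
    "street": ["main street", "state street", "south temple"],
}

def listen():
    """Stand-in for microphone capture plus phoneme conversion."""
    return input("> ").strip().lower()

def say(text):
    """Stand-in for audible output via output device 241."""
    print(text)

def n_best(spoken, level, n=6):
    """Rank the stored entries for this address level against the input."""
    return sorted(DATABASE[level],
                  key=lambda entry: SequenceMatcher(None, spoken, entry).ratio(),
                  reverse=True)[:n]

def enter_address():
    """Dialog loop of FIG. 5, applied per address level."""
    address = {}
    for level in ("city", "street"):
        say(f"Which {level}?")
        while level not in address:
            choices = n_best(listen(), level)      # steps S4/S6
            say(choices[0])                        # step S8: best choice
            reply = listen()                       # step S10: await reply
            if reply == "yes":                     # affirmative input
                address[level] = choices[0]        # acknowledge (step S12)
            elif reply.isdigit() and 1 <= int(reply) <= len(choices):
                address[level] = choices[int(reply) - 1]   # step S20
            # otherwise (e.g. "back", unrecognized): prompt this level again
    return address
```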

It should be noted that the process of FIG. 5 can begin with audible input and recognition of a country and/or state for example, instead of a city name. Further, upon determining a plurality or “N-best” list of countries, states, cities, streets, etc., each of the plurality of countries, states, cities, or street names may be visually output and only one audibly output for subsequent selection thereof, either by touch panel input or audible input in a manner similar to that previously described.

As previously discussed, FIG. 6A provides an illustration of a non-limiting example of a selectable icon for enablement of an audible recognition mode. It should be noted that upon enablement of this audible recognition mode, the icon display may be varied to indicate to the user that the audible recognition mode has been enabled and that the system is merely awaiting receipt of the audible input, as indicated in step S4 of FIG. 5 for example. This may include varying the displayed icon in some way, such as changing the color of the virtual button shown in FIG. 6A, for example, or otherwise changing the appearance of this virtual button/icon. This is shown in FIG. 6B, noting that the button may be a different color, such as green, when waiting for an audible input.

Thereafter, the virtual button/icon may be altered again while the system is determining address information choices for a travel destination in step S6 for example, in a manner such as that shown in FIG. 6C for example. Finally, upon audibly outputting at least one determined choice in step S8 of FIG. 5, the icon may again be altered as shown in FIG. 6D for example. This can provide feedback to the user regarding the use of the audible recognition mode.

It should be noted that the determining of at least one choice relating to address information of the travel destination based upon a received audible input in step S6 of FIG. 5 can relate to input of a country/state/city/street address of a travel destination in a normal fashion for example, and/or can relate to determination of a travel destination based upon a recent destination, a Point of Interest, a favorite, etc., as shown in FIG. 7 for example. Accordingly, upon receiving an indication of enablement of an audible recognition mode in step S2, a message such as "Where would you like to go?" can be displayed to the user on the integrated input and display device 290 of the navigation device 200, for example. Thereafter, the initial audible input received in step S4 could be a word relating to a category of information, such as "home" 710, "favorite" 720, "address" 730, "recent destination" 740 or "Point of Interest (POI)" 750. The processor 210 and ASR module can be programmed to recognize one of the aforementioned categories 710, 720, 730, 740, or 750, such that the determined at least one choice relating to address information of a travel destination may include traditional information such as city, state, and street names, etc., or may include other types of information such as Points of Interest, favorites, etc. Again, each of these processes may determine and output choices relating to address information of a travel destination, noting that a most likely choice may be audibly output and selection thereof acknowledged by affirmative audible input (or touch panel input), with other "N-best" choices being visually output and selection thereof being acknowledged by at least one of audible and visual input.
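
A non-limiting sketch of this initial category dispatch, using the categories of FIG. 7, might be as follows; the function name and return convention are hypothetical.

```python
# Categories of FIG. 7 and their reference numerals.
CATEGORIES = {"home": 710, "favorite": 720, "address": 730,
              "recent destination": 740, "point of interest": 750}

def dispatch(first_input):
    """Route the first recognized utterance to the matching entry flow;
    "address" would lead into the city/street dialog sketched above."""
    category = first_input.strip().lower()
    return category if category in CATEGORIES else None  # None: re-prompt
```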

In one example embodiment, the recognition process for geographical names (cities, streets, and crossings) may work according to the following rules:

  1. The process may be initiated by the user (choosing the voice recognition address entry, for example).
  2. The processor 210/ASR module may then enter a listening mode and may indicate this with a special icon display, for example. The color of the icon may change if the level of the input is within an acceptable range, too low (no input), too loud, or not recognized properly (bad input). This may serve as feedback to the user.
  3. If the recognition input was considered acceptable by the processor 210/ASR module, it may then try to match the accepted phoneme sequence against the known sequences for the chosen grammar. Here, it is possible to combine the precompiled grammar (the list of names known already) with the dynamic part of the grammar (names added by the user, via map-correction technology such as MapShare, for example); a sketch of this combination follows this list.
  4. The processor 210/ASR module may then present the results to the user, via display on the integrated input and display device 290, in the form of an N-best list. If the current voice is a TTS (text-to-speech) voice, for example, the best entry (the first in the list) may also be audibly output to the user.
  5. The user then may have the possibility to accept or to reject the result. In the first case, the processor 210/ASR module proceeds to the next step, which is either the recognition of the next address level (city→street, street→crossing or street→house number) or the planning of the route. In the second case, the user has the possibility to pronounce the line number corresponding to the correct entry, if the entry is present in the list, or to go back to the previous step by saying "Back", for example.
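
By way of a non-limiting illustration, the combination of precompiled and dynamic grammar noted in rule 3 above might be sketched as follows; the actual grammar format of the ASR engine is not specified here, and the function is hypothetical.

```python
def combined_grammar(precompiled, user_added):
    """Merge the shipped list of names with names added by the user
    (e.g. via map corrections), de-duplicated for matching."""
    return sorted(set(precompiled) | set(user_added))

# Example: a user-added street name joins the precompiled list.
grammar = combined_grammar(["salt lake city", "salem"], ["my new street"])
```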

It should be noted that each of the aforementioned aspects of an embodiment of the present application has been described with regard to the method of the present application. However, at least one embodiment of the present application is directed to a navigation device 200, including a processor 210 to receive an indication of enablement of an audible recognition mode in a navigation device 200 and to determine, subsequent to receiving an audible input, at least one choice relating to address information of a travel destination based upon the received audible input; and an output device 241 to audibly output at least one determined choice relating to address information of a travel destination, the processor 210 being further useable to acknowledge selection of the audibly output at least one choice upon receiving an affirmative audible input. Such a navigation device 200 may further include an integrated input and display device 290, in addition to the output device 241, to enable display of icons and/or selections and subsequent selection thereof, and/or can further include an audible output device such as a speaker, for example. Further, an input device 220 can include a microphone. Thus, such a navigation device 200 may be used to perform the various aspects of the method described with regard to FIGS. 5-7, as would be understood by one of ordinary skill in the art. Thus, further explanation is omitted for the sake of brevity.

In at least one other embodiment of the present application, a method includes receiving an indication of enablement of an audible recognition mode in a navigation device 200; and displaying on an integrated input and display device 290, subsequent to receiving an indication of enablement of the audible recognition mode, an indication as to whether a volume of a received audible input is within an acceptable range, louder than the acceptable range, or softer than the acceptable range.

In at least one other embodiment of the present application, a navigation device 200 includes a processor 210 to receive an indication of enablement of an audible recognition mode in a navigation device 200; and an integrated input and display device 290 to display, subsequent to the processor 210 receiving an indication of enablement of the audible recognition mode, an indication as to whether a volume of a received audible input is within an acceptable range, louder than the acceptable range, or softer than the acceptable range.

As previously indicated, an embodiment of the present application can be used to indicate to a user whether or not audible input, such as that of step S4 of FIG. 5 for example, is within an acceptable range. As shown in FIG. 8, it is initially determined by the processor 210, for example in conjunction with the ASR module, whether or not an audible recognition mode has been enabled in step S20. If so, one of three different displays can be presented in steps S24, S28, and S32, depending on whether the volume of the audible input is determined to be within, louder than, or softer than an acceptable range. For example, upon receipt of the audible input, the processor 210 and ASR module can attempt to ascertain the input information. The processor 210 and ASR module have a better chance of determining a correct input if the volume is within an acceptable range.

Thus, after the audible recognition mode is enabled and after audible input information is received, it is determined in step S22 whether or not the volume of the audible input is within an acceptable range. This can be done by the processor 210 comparing the volume of the received information with an acceptable range stored in memory, defined by an upper threshold limit and a lower threshold limit, for example. If the volume of the received audible input is within the upper and lower thresholds in step S22, the processor 210 then determines that the volume of the audible input is within an acceptable range. In response thereto, the process moves to step S24, wherein the processor 210 directs display of an indication that the volume is within an acceptable range. For example, this display may include changing the color of the “talk to me” icon shown in FIG. 6A to an icon such as that shown in FIG. 6B, in a green color indicative of acceptance for example. Alternatively, another indicator may be displayed, again noting that the indicator may be displayed in a color indicative of acceptance, such as a green color for example.
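
Purely for illustration, a minimal sketch of this threshold comparison follows; the normalized volume scale and the threshold constants are assumptions made for the sketch, not values prescribed by the present application.

```python
# Illustrative sketch of steps S22-S32; the normalized volume scale and the
# threshold values are assumptions, not values taken from the device itself.

LOWER_THRESHOLD = 0.2   # hypothetical lower limit of the acceptable range
UPPER_THRESHOLD = 0.8   # hypothetical upper limit of the acceptable range

def classify_volume(volume):
    """Return the color indication to display for a received audible input."""
    if LOWER_THRESHOLD <= volume <= UPPER_THRESHOLD:
        return "green"   # S24: within the acceptable range
    if volume > UPPER_THRESHOLD:
        return "red"     # S28: louder than the acceptable range
    return "yellow"      # S32: softer than the acceptable range
```

For example, classify_volume(0.9) would yield "red", prompting the user to speak more softly.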

If it is determined that the volume is not within an acceptable range in step S22, the processor 210 then moves to either step S26 or step S30 to determine whether the volume was louder than the acceptable range or softer than the acceptable range. It should be noted that the order of steps S26 and S30 is not important, as such determinations can be made in any order. If it is determined that the volume is louder than the acceptable range in step S26, namely greater than the upper threshold of the acceptable range, an indication may be displayed in step S28, indicating that the volume is louder than the acceptable range. For example, the icon of FIG. 6B may be displayed in red (a color indicative of incorrectness and of something being too high), indicating that the audible input was too loud, and/or a red indicator may be displayed to the user, again indicating that the volume is too loud.

Thereafter, or before step S26, the processor 210 moves to step S30, wherein it determines whether or not the volume is softer than the acceptable range. If so, an indication may be displayed in step S32, indicating that the volume is softer than the acceptable range. For example, this may involve displaying the icon of FIG. 6B in a yellow color, indicating to the user that the audible input is not loud enough. Alternatively, a yellow indicator may be displayed on the integrated input and display device 290, for example.

It should be noted that the use of the colors green, red, and yellow is merely an example and other colors can be utilized. Further, other methods of displaying indications of a volume being within an acceptable range, louder than an acceptable range, or softer than an acceptable range may also be used, including but not limited to displaying words indicating that a user should speak softer, louder, etc. Accordingly, as shown in the example embodiment of FIG. 8, a method of the present application can include receiving an indication of enablement of an audible recognition mode in a navigation device 200 and displaying, on an integrated input and display device 290 and subsequent to receiving an indication of enablement of the audible recognition mode, an indication as to whether a volume of received audible input is within an acceptable range, louder than the acceptable range, and softer than the acceptable range. The display can include a display of color information to display the indications, for example, wherein a yellow color may be used to indicate that the received audible input is softer than the acceptable range, a red color may be used to indicate that the received audible input is louder than the acceptable range, and a green color may be used to indicate that the received audible input is within the acceptable range.

Address information regarding a travel destination of a user may be received in conjunction with the process shown in FIG. 8, for example, wherein the display may then indicate if the received information is within an acceptable range. Thus, the address information can include at least one of city and street name information. Further, upon the address information being received within an acceptable range, the process may include at least one of recognizing the address information, displaying an indication of no recognition, and displaying, on the integrated input and display device 290, a list of choices to the user for selection. Thus, the processes shown in FIGS. 5 and 8 can be integrated.

It should be noted that each of the aforementioned aspects of an embodiment of the present application has been described with regard to the method of the present application. However, at least one embodiment of the present application is directed to a navigation device 200, including a processor 210 to receive an indication of enablement of an audible recognition mode in a navigation device 200; and an integrated input and display device 290 to display, subsequent to the processor 210 receiving an indication of enablement of the audible recognition mode, an indication as to whether a volume of a received audible input is within an acceptable range, louder than the acceptable range, and softer than the acceptable range. Such a navigation device 200 may further include an audible output device such as a speaker, for example. Further, an input device 220 can include a microphone. Thus, such a navigation device 200 may be used to perform the various aspects of the method described with regard to FIGS. 5-8, as would be understood by one of ordinary skill in the art. Thus, further explanation is omitted for the sake of brevity.

Finally, FIG. 9 is directed to another embodiment of the present application. Typically, when address information is being entered into a navigation device 200, the navigation device 200 is not being used in a navigation mode. Thus, although the process set forth in FIG. 5 can be used with the navigation device 200 in a navigation mode, this is typically not the case, as the vehicle in which the navigation device 200 is located, for example, is usually stationary when a user inputs a travel destination from which a route of travel can be determined.

In at least one other embodiment of the present application, a method includes receiving an indication of enablement of an audible recognition mode in a navigation device 200; receiving additional information from a source other than a user of the navigation device 200; formulating a question, answerable by a yes or no answer from the user, based upon the received additional information; and outputting the formulated question to the user.

In at least one other embodiment of the present application, a navigation device 200 includes a processor 210 to receive an indication of enablement of an audible recognition mode, to receive additional information from a source other than a user of the navigation device 200, and to formulate a question, answerable by a yes or no answer from the user, based upon the received additional information; and an output device 241 to output the formulated question to the user.

FIG. 9 of the present application includes a process involving enablement of an audible recognition mode, which is more likely to be usable while the vehicle in which the navigation device 200 is located is moving; e.g., where the navigation device 200 is operated in a navigation mode.

In the process shown in FIG. 9, in step S50, it is initially determined whether or not an audible recognition mode is enabled. This can be done, for example, in a manner similar to that previously described, including recognition of selection of the icon shown in FIG. 6A for example. Once this audible recognition mode is enabled, the processor 210 of the navigation device 200 may not only monitor receipt of audible information from a user, but can also monitor receipt of additional information from a source other than a user of the navigation device 200. Thus, in step S52, it is determined by the processor 210 whether or not additional information from a source other than a user is received. This information can include, but is not limited to, receipt of an incoming call or message (such as a telephone call or SMS message received by the navigation device 200 itself and/or via a paired mobile phone, for example), received traffic information, etc. If not, the process merely cycles back and continues to monitor for such information.
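
Purely for illustration, a minimal sketch of the step S52 monitoring is given below; the event queue and its source and kind attributes are assumptions made for the sketch, not the device's actual interfaces.

```python
# Hypothetical sketch of step S52: cycle until additional information from a
# source other than the user (SMS, incoming call, traffic update) arrives.

def monitor_additional_information(event_queue):
    """Return the first event not originating from the user."""
    while True:
        event = event_queue.get()      # e.g. a queue.Queue; blocks until an event
        if event.source != "user":
            return event               # hand the event to step S54
        # otherwise cycle back and continue monitoring
```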

However, if additional information from a source other than the user of the navigation device 200 is received in step S52, the process moves to step S54, wherein a question is formulated by the processor 210, answerable by a yes or no answer from the user, based upon the received additional information. For example, the processor 210 can monitor other systems in the navigation device 200 (including paired mobile phones, for example) to determine whether or not, for example, an SMS message is received. If so, the processor 210 may work with the ASR module and/or, more likely, a TTS (Text To Speech) module to formulate a question answerable by a yes/no answer from the user such as, for example, “A new message was received; shall I read it aloud?” Thereafter, the formulated question may be output in step S56, noting that the output is preferably an audible output (but may also be accompanied by a visual output, for example). Somewhat similarly, the navigation device 200 may determine receipt of a traffic update indicating a traffic delay of a particular period of time along the route (calculable by the processor 210 in a known manner, for example), wherein the processor 210 and TTS module can then instruct the output of, for example, “Traffic delay on your route now ‘x’ minutes. Do you want to replan the route to minimize delays?”
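
Purely for illustration, a minimal sketch of the step S54 formulation is given below; the question texts mirror the examples above, while the template keys and event fields (kind, delay_minutes) are hypothetical.

```python
# Hypothetical sketch of step S54: map received additional information to a
# stored yes/no question, inserting calculated values where needed.

QUESTION_TEMPLATES = {
    "sms":     "A new message was received; shall I read it aloud?",
    "traffic": ("Traffic delay on your route now {minutes} minutes. "
                "Do you want to replan the route to minimize delays?"),
}

def formulate_question(event):
    """Build a yes/no question from received additional information."""
    template = QUESTION_TEMPLATES[event.kind]
    if event.kind == "traffic":
        # insert the delay calculated by the processor into the stored question
        return template.format(minutes=event.delay_minutes)
    return template   # step S56 then outputs this, preferably audibly via TTS
```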

The ASR module is typically utilized to recognize speech information from different users. Such information is typically unpredictable, and therefore cannot normally be stored in memory 230. The ASR module or engine operates in conjunction with the processor 210 to convert received speech information to a sequence of phonemes in a dynamic manner, and works with processor 210 to match existing grammar of stored cities, street names, etc., to the converted sequence of phonemes as described above. As such, the ASR module dynamically causes the processor 210 to utilize large chunks of memory 230.
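
Purely as an illustration of this kind of grammar matching, and not of the ASR engine actually employed, the sketch below ranks stored city entries against a recognized phoneme sequence using a generic similarity measure; the phoneme spellings and stored entries are invented.

```python
# Illustrative sketch only: rank stored city-name grammar entries against a
# converted phoneme sequence. A production ASR engine is far more involved.

from difflib import SequenceMatcher

STORED_CITIES = {
    "Amsterdam": ["ae", "m", "s", "t", "er", "d", "ae", "m"],
    "Rotterdam": ["r", "aa", "t", "er", "d", "ae", "m"],
}

def best_matches(phonemes, top_n=3):
    """Return the stored entries most similar to the recognized phonemes."""
    scored = sorted(
        ((SequenceMatcher(None, phonemes, entry).ratio(), city)
         for city, entry in STORED_CITIES.items()),
        reverse=True,              # highest similarity first
    )
    return [city for _, city in scored[:top_n]]
```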

By contrast, when the processor 210 works in conjunction with a TTS module, the TTS module forms questions which can be predefined or prerecorded in memory 230, for example. The TTS module can output any kind of audio information, provided that it is in the language to which the voice corresponds. Some parts of the phrases that are considered to be used most often can be prerecorded, stored, and later used by the TTS module as well, to improve the quality of the output. Thus, while the TTS module can be used to convert simple SMS messages to voice output for the user, the TTS module typically works best in conjunction with the processor 210 for outputting preformulated questions, slightly modifiable if necessary, upon the processor 210 determining that additional information such as an SMS message, traffic update, etc., has been received by the navigation device 200. Such information can include traffic information, an incoming telephone call, an incoming SMS message, etc.

Further, the formulating of the question can include inserting information, based upon the received information, into a stored question, such as inserting a calculated traffic delay into the aforementioned traffic delay question, for example. Thus, the formulating can include inserting information regarding a calculated traffic delay, based upon received traffic information, into a stored question. Thereafter, in step S56, the formulated question can be output, noting that the output may include at least one of an audible and a visual output.

The formulated question output in step S56 is typically formulated to receive a yes or no answer from the user, to thereby enable the processor 210 to operate in conjunction with the ASR module during driving conditions, when the navigation device 200 is operating in a navigation mode. In such a mode, the navigation device 200 is already utilizing much of memory 230, and it is preferable that the ASR module not utilize a large portion of memory 230. By utilizing yes/no questions, the processor 210 and ASR module can easily recognize the short yes or no answer of the user. Thereafter, a subsequent action may be performed by the navigation device 200 upon receipt of a yes answer from the user, such as calculating a new route of travel upon receipt of a yes answer from the user regarding a calculated traffic delay, for example. Alternatively, upon the additional information being an SMS message, the SMS message can be converted by utilizing the TTS module, for example, and the incoming text message can be output to the user upon receipt of a yes answer from the user.
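
A minimal sketch of this constrained yes/no handling is given below; the replan_route and read_message_aloud callables are hypothetical stand-ins for the route-calculation and TTS facilities of the device.

```python
# Hypothetical sketch: with only "yes" and "no" in the active grammar, the
# recognizer's memory footprint stays small while the device is navigating.

YES_NO_VOCABULARY = {"yes", "no"}   # tiny grammar, cheap to match

def handle_answer(answer, event, replan_route, read_message_aloud):
    """Perform the subsequent action upon receipt of a yes answer."""
    if answer not in YES_NO_VOCABULARY:
        return                              # ignore anything else while driving
    if answer == "yes":
        if event.kind == "traffic":
            replan_route()                  # calculate a new route around the delay
        elif event.kind == "sms":
            read_message_aloud(event.text)  # TTS converts the message to speech
```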

It should be noted that each of the aforementioned aspects of an embodiment of the present application has been described with regard to the method of the present application. However, at least one embodiment of the present application is directed to a navigation device 200, including a processor 210 to receive an indication of enablement of an audible recognition mode, to receive additional information from a source other than a user of the navigation device 200, and to formulate a question, answerable by a yes or no answer from the user, based upon the received additional information; and an output device 241 to output the formulated question to the user. Such a navigation device 200 may further include an integrated input and display device 290 as the output device 241 to enable display of icons and/or selections, and subsequent selection thereof, and/or can further include an audible output device such as a speaker, for example. Further, an input device 220 can include a microphone. Thus, such a navigation device 200 may be used to perform the various aspects of the method described with regard to FIG. 9, as would be understood by one of ordinary skill in the art. Thus, further explanation is omitted for the sake of brevity.

The methods of at least one embodiment expressed above may be implemented as a computer data signal embodied in a carrier wave or propagated signal that represents a sequence of instructions which, when executed by a processor (such as processor 304 of server 302, and/or processor 210 of navigation device 200, for example), causes the processor to perform a respective method. In at least one other embodiment, at least one method provided above may be implemented as a set of instructions contained on a computer readable or computer accessible medium, such as one of the memory devices previously described, for example, to perform the respective method when executed by a processor or other computer device. In varying embodiments, the medium may be a magnetic medium, electronic medium, optical medium, etc.

Even further, any of the aforementioned methods may be embodied in the form of a program. The program may be stored on a computer readable medium and is adapted to perform any one of the aforementioned methods when run on a computer device (a device including a processor). Thus, the storage medium or computer readable medium is adapted to store information and is adapted to interact with a data processing facility or computer device to perform the method of any of the above mentioned embodiments.

The storage medium may be a built-in medium installed inside a computer device main body or a removable medium arranged so that it can be separated from the computer device main body. Examples of the built-in medium include, but are not limited to, rewriteable non-volatile memories, such as ROMs and flash memories, and hard disks. Examples of the removable medium include, but are not limited to, optical storage media such as CD-ROMs and DVDs; magneto-optical storage media, such as MOs; magnetic storage media, including but not limited to floppy disks (trademark), cassette tapes, and removable hard disks; media with a built-in rewriteable non-volatile memory, including but not limited to memory cards; and media with a built-in ROM, including but not limited to ROM cassettes; etc. Furthermore, various information regarding stored images, for example, property information, may be stored in any other form, or it may be provided in other ways.

As one of ordinary skill in the art will understand upon reading the disclosure, the electronic components of the navigation device 200 and/or the components of the server 302 can be embodied as computer hardware circuitry or as a computer readable program, or as a combination of both.

The system and method of embodiments of the present application include software operative on the processor to perform at least one of the methods according to the teachings of the present application. One of ordinary skill in the art will understand, upon reading and comprehending this disclosure, the manner in which a software program can be launched from a computer readable medium in a computer based system to execute the functions found in the software program. One of ordinary skill in the art will further understand the various programming languages which may be employed to create a software program designed to implement and perform at least one of the methods of the present application.

The programs can be structured in an object-oriented manner using an object-oriented language including but not limited to JAVA, Smalltalk, C++, etc., and the programs can be structured in a procedural manner using a procedural language including but not limited to COBOL, C, etc. The software components can communicate in any number of ways that are well known to those of ordinary skill in the art, including but not limited to by application of program interfaces (APIs), and interprocess communication techniques, including but not limited to remote procedure call (RPC), common object request broker architecture (CORBA), Component Object Model (COM), Distributed Component Object Model (DCOM), Distributed System Object Model (DSOM), and Remote Method Invocation (RMI). However, as will be appreciated by one of ordinary skill in the art upon reading the present application disclosure, the teachings of the present application are not limited to a particular programming language or environment.

The above systems, devices, and methods have been described by way of example and not by way of limitation with respect to improving accuracy, processor speed, ease of user interaction, etc., with a navigation device 200.

Further, elements and/or features of different example embodiments may be combined with each other and/or substituted for each other within the scope of this disclosure and appended claims.

Still further, any one of the above-described and other example features of the present invention may be embodied in the form of an apparatus, method, system, computer program and computer program product. For example, any of the aforementioned methods may be embodied in the form of a system or device, including, but not limited to, any of the structure for performing the methodology illustrated in the drawings.

Example embodiments being thus described, it will be obvious that the same may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the present invention, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims.

Claims

1. A method, comprising:

receiving an indication of enablement of an audible recognition mode in a navigation device;
determining, subsequent to receiving an indication of enablement of the audible recognition mode and subsequent to receiving an audible input, at least one choice relating to address information of a travel destination based upon the received audible input;
audibly outputting at least one determined choice relating to address information of a travel destination; and
acknowledging selection of the audibly output at least one determined choice upon receiving an affirmative audible input.

2. The method of claim 1, wherein, upon the determining including determining a plurality of choices relating to the address information of the travel destination, each of the plurality of choices is visually output and only one choice is audibly output.

3. The method of claim 2, wherein the plurality of choices are visually output for selection on an integrated input and display device of the navigation device.

4. The method of claim 3, wherein each of the plurality of choices is selectable by at least one of touch panel input and audible input, the audibly output at least one choice being further selectable via receipt of an indication of a touch panel input.

5. The method of claim 4, wherein each of the plurality of choices is selectable by audible input of a number corresponding to a displayed choice.

6. The method of claim 1, wherein the at least one choice relating to address information of a travel destination includes a city name.

7. The method of claim 6, further comprising, subsequent to selection of a city name and subsequent to receiving another audible input, determining at least one street name.

8. The method of claim 7, wherein, upon the determining including determining a plurality of street names, each of the plurality of street names is visually output and only one street name is audibly output.

9. A method, comprising:

receiving an indication of enablement of an audible recognition mode in a navigation device; and
displaying on an integrated input and display device, subsequent to receiving an indication of enablement of the audible recognition mode, an indication as to whether a volume of a received audible input is within an acceptable range, louder than the acceptable range, and softer than the acceptable range.

10. The method of claim 9, wherein the display includes a display of color information to display the indications.

11. The method of claim 10, wherein a yellow color is used to indicate that the received audible input is softer than the acceptable range, wherein a red color is used to indicate that the received audible input is louder than the acceptable range, and wherein a green color is used to indicate that the received audible input is within an acceptable range.

12. The method of claim 11, wherein subsequent to enablement of the audible recognition mode, address information regarding a travel destination of the user is received, the displaying then indicating if the received information is within an acceptable range.

13. The method of claim 12, wherein the address information includes at least one of city and street name information.

14. The method of claim 12, further comprising, upon address information being received within an acceptable range, at least one of recognizing the address information, displaying an indication of no recognition and displaying, on the integrated input and display device, a list of choices to the user for selection.

15. A method, comprising:

receiving an indication of enablement of an audible recognition mode in a navigation device;
receiving additional information from a source other than a user of the navigation device;
formulating a question, answerable by a yes or no answer from the user, based upon the received additional information; and
outputting the formulated question to the user.

16. The method of claim 15, wherein the information includes traffic information.

17. The method of claim 15, wherein the information includes receipt of at least one of an incoming call and message.

18. The method of claim 15, wherein the formulating includes inserting information, based upon the received information, into a stored question.

19. The method of claim 15, wherein the formulating includes inserting information regarding a calculated traffic delay, based upon the received traffic information, into a stored question.

20. The method of claim 15, wherein the output includes at least one of an audible and visual output.

21. The method of claim 15, wherein the output includes an audible and a visual output.

22. The method of claim 15, further comprising performing a subsequent action upon receipt of a yes answer from the user.

23. The method of claim 15, further comprising calculating a new route of travel upon receipt of a yes answer from the user regarding the calculated traffic delay.

24. The method of claim 17, further comprising outputting an incoming text message upon receipt of a yes answer from the user.

25. A navigation device, comprising:

a processor to receive an indication of enablement of an audible recognition mode in a navigation device and to determine, subsequent to receiving an audible input, at least one choice relating to address information of a travel destination based upon the received audible input; and
an output device to audibly output at least one determined choice relating to address information of a travel destination, the processor being further useable to acknowledge selection of the audibly output at least one determined choice upon receiving an affirmative audible input.

26. The navigation device of claim 25, further comprising:

an integrated input and display device to, upon the determining by the processor including determining a plurality of choices relating to the address information of the travel destination, display each of the plurality of choices; and
an audible output device to audibly output only one choice relating to the address information of the travel destination.

27. The navigation device of claim 26, wherein the plurality of choices are output by the integrated input and display device for selection.

28. The navigation device of claim 27, wherein each of the plurality of choices is selectable by at least one of touch panel input via the integrated input and display device, and audible input, the audibly output at least one choice being further selectable via receipt by the processor of an indication of a touch panel input.

29. The navigation device of claim 28, wherein each of the plurality of choices is selectable via receipt by the processor of an audible input of a number corresponding to a displayed choice.

30. The navigation device of claim 25, wherein the at least one choice relating to address information of a travel destination includes a city name.

31. The navigation device of claim 30, wherein, subsequent to selection of a city name and subsequent to receiving another audible input, the processor is further useable to determine at least one street name.

32. The navigation device of claim 31, wherein, upon the determining by the processor including determining a plurality of street names, each of the plurality of street names is visually output and only one street name is audibly output.

33. The navigation device of claim 25, wherein the navigation device is portable.

34. A navigation device, comprising:

a processor to receive an indication of enablement of an audible recognition mode in a navigation device; and
an integrated input and display device to display, subsequent to the processor receiving an indication of enablement of the audible recognition mode, an indication as to whether a volume of a received audible input is within an acceptable range, louder than the acceptable range, and softer than the acceptable range.

35. The navigation device of claim 34, wherein the display includes a display of color information to display the indications.

36. The navigation device of claim 35, wherein a yellow color is used to indicate that the received audible input is softer than the acceptable range, wherein a red color is used to indicate that the received audible input is louder than the acceptable range, and wherein a green color is used to indicate that the received audible input is within the acceptable range.

37. The navigation device of claim 36, wherein subsequent to enablement of the audible recognition mode, address information regarding a travel destination of the user is received, the displaying then indicating if the received information is within an acceptable range.

38. The navigation device of claim 37, wherein the address information includes at least one of city and street name information.

39. The navigation device of claim 37, wherein, upon address information being received within an acceptable range, the processor is useable to at least one of recognize the address information, direct display, on the integrated input and display device, of an indication of no recognition, and direct display, on the integrated input and display device, of a list of choices to the user for selection.

40. The navigation device of claim 34, wherein the navigation device is portable.

41. A navigation device, comprising:

a processor to receive an indication of enablement of an audible recognition mode, to receive additional information from a source other than a user of the navigation device, and to formulate a question, answerable by a yes or no answer from the user, based upon the received additional information; and
an output device to output the formulated question to the user.

42. The navigation device of claim 41, wherein the information includes traffic information.

43. The navigation device of claim 41, wherein the information includes receipt of at least one of an incoming call and message.

44. The navigation device of claim 41, wherein the formulating includes inserting information, based upon the received information, into a stored question.

45. The navigation device of claim 41, wherein the formulating includes inserting information regarding a calculated traffic delay, based upon the received traffic information, into a stored question.

46. The navigation device of claim 41, wherein the output device is at least one of an audible and visual output device.

47. The navigation device of claim 41, wherein the output device includes an audible output device and a visual output device.

48. The navigation device of claim 41, wherein the processor is useable to perform a subsequent action upon receipt of a yes answer from the user.

49. The navigation device of claim 41, wherein the processor is useable to calculate a new route of travel upon receipt of a yes answer from the user regarding the calculated traffic delay.

50. The navigation device of claim 43, wherein the processor is useable to direct output of an incoming text message upon receipt of a yes answer from the user.

51. The navigation device of claim 41, wherein the navigation device is portable.

Patent History
Publication number: 20100286901
Type: Application
Filed: Oct 10, 2007
Publication Date: Nov 11, 2010
Inventors: Pieter Geelen (Amsterdam), Mareji Roosen (Amsterdam)
Application Number: 11/907,232
Classifications
Current U.S. Class: 701/200; Recognition (704/231)
International Classification: G01C 21/36 (20060101); G10L 15/00 (20060101); G08G 1/09 (20060101);