HEARING ASSIST DEVICE WITH EXTERNAL OPERATIONAL SUPPORT

- BROADCOM CORPORATION

Hearing assist devices and devices and services that are capable of providing external operational support thereto are described. In accordance with various embodiments, the performance of one or more functions by a hearing assist device is assisted or improved in some manner by utilizing resources of an external device and/or service to which the hearing assist device may be communicatively connected. Such performance assistance or improvement may be achieved, for example and without limitation, by utilizing power resources, processing resources, storage resources, sensor resources, and/or user interface resources of an external device or service to which the hearing assist device may be communicatively connected.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 61/662,217, filed on Jun. 20, 2012, which is incorporated by reference herein in its entirety.

BACKGROUND

1. Technical Field

The subject matter described herein relates to hearing assist devices and devices and services that are capable of providing external operational support to such hearing assist devices.

2. Description of Related Art

Persons may become hearing impaired for a variety of reasons, including aging and being exposed to excessive noise, which can both damage hair cells in the inner ear. A hearing aid is an electro-acoustic device that typically fits in or behind the ear of a wearer, and amplifies and modulates sound for the wearer. Hearing aids are frequently worn by persons who are hearing impaired to improve their ability to hear sounds. A hearing aid may be worn in one or both ears of a user, depending on whether one or both of the user's ears need hearing assistance.

Less expensive hearing aids amplify all frequencies equally, while mid-range analog and digital hearing aids can be programmed to amplify in a manner tuned to a hearing-impaired wearer's actual frequency response. The most expensive models adapt via operating modes: in some modes a directional microphone is used, while in others an omnidirectional microphone is used.

Since most hearing aids rely on battery power to operate, it is critical that hearing aids are designed so as not to consume battery power too quickly. This places a constraint on the types of features and processes that can be built into a hearing aid. Furthermore, it is desirable that hearing aids be lightweight and small so that they are comfortable to wear and not readily discernible to others. This also operates as a constraint on both the size of the batteries that can be used to power the hearing aid and the types of functionality that can be integrated into a hearing aid.

If the hearing aid batteries are dead or a hearing aid is left at home, a wearer needing hearing aid support is at a loss. This often results in others raising their speaking volume to help the wearer hear what they are saying. Unfortunately, because hearing impairment often has a frequency-dependent profile, merely raising one's volume may not work. Similarly, raising the volume on a cell phone may not adequately provide understandable audio to someone with hearing impairment.

BRIEF DESCRIPTION OF THE DRAWINGS/FIGURES

The accompanying drawings, which are incorporated herein and form part of the specification, illustrate the subject matter of the present application and, together with the description, further serve to explain the principles of the embodiments described herein and to enable a person skilled in the relevant art(s) to make and use such embodiments.

FIG. 1 shows a communication system that includes a multi-sensor hearing assist device that communicates with a near field communication (NFC)-enabled communications device, according to an exemplary embodiment.

FIGS. 2-4 show various configurations for associating a multi-sensor hearing assist device with an ear of a user, according to exemplary embodiments.

FIG. 5 shows a multi-sensor hearing assist device that mounts over an ear of a user, according to an exemplary embodiment.

FIG. 6 shows a multi-sensor hearing assist device that extends at least partially into the ear canal of a user, according to an exemplary embodiment.

FIG. 7 shows a circuit block diagram of a multi-sensor hearing assist device that is configured to communicate with external devices according to multiple communication schemes, according to an exemplary embodiment.

FIG. 8 shows a flowchart of a process for a hearing assist device that processes and transmits sensor data and receives a command from a second device, according to an exemplary embodiment.

FIG. 9 shows a communication system that includes a multi-sensor hearing assist device that communicates with one or more communications devices and network-connected devices, according to an exemplary embodiment.

FIG. 10 shows a flowchart of a process for wirelessly charging a battery of a hearing assist device, according to an exemplary embodiment.

FIG. 11 shows a flowchart of a process for broadcasting sound that is generated based on sensor data, according to an exemplary embodiment.

FIG. 12 shows a flowchart of a process for generating and broadcasting filtered sound from a hearing assist device, according to an exemplary embodiment.

FIG. 13 shows a flowchart of a process for generating an information signal in a hearing assist device based on a voice of a user, and transmitting the information signal to a second device, according to an exemplary embodiment.

FIG. 14 shows a flowchart of a process for generating voice based at least on sensor data to be broadcast by a speaker of a hearing assist device to a user, according to an exemplary embodiment.

FIG. 15 is a block diagram of an example system that enables external operational support to be provided to a hearing assist device in accordance with an embodiment.

FIG. 16 is a block diagram of a system comprising a hearing assist device and a cloud/service/phone/portable device that may provide external operational support thereto.

FIG. 17 is a block diagram of an enhanced audio processing module that may be implemented by a hearing assist device to provide enhanced spatial signaling in accordance with an embodiment.

FIG. 18 depicts a flowchart of a method for providing audio playback support to a hearing assist device in accordance with an embodiment.

FIG. 19 is a block diagram of a noise suppression system that may be utilized by a hearing assist device or a device/service communicatively connected thereto in accordance with an embodiment.

FIGS. 20-23 depict flowcharts of methods for providing external operational support to a hearing assist device worn by a user in accordance with various embodiments.

FIG. 24 is a block diagram of an audio processing module that may be implemented in a hearing assist device in accordance with an embodiment.

The features and advantages of the subject matter of the present application will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.

DETAILED DESCRIPTION

I. Introduction

The following detailed description discloses numerous example embodiments. The scope of the present patent application is not limited to the disclosed embodiments, but also encompasses combinations of the disclosed embodiments, as well as modifications to the disclosed embodiments.

References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.

Furthermore, it should be understood that spatial descriptions (e.g., “above,” “below,” “up,” “left,” “right,” “down,” “top,” “bottom,” “vertical,” “horizontal,” etc.) used herein are for purposes of illustration only, and that practical implementations of the structures described herein can be spatially arranged in any orientation or manner.

II. Example Hearing Assist Device Embodiments

Opportunities exist for integrating further functionality into hearing assist devices that are worn in/on a human ear. Hearing assist devices, such as hearing aids, headsets, and headphones, are typically worn in contact with the user's ear, and in some cases extend into the user's ear canal. As such, a hearing assist device is typically positioned in close proximity to various organs and physical features of a wearer, such as the inner ear structure (for example, the ear canal, ear drum, ossicles, Eustachian tube, cochlea, auditory nerve, or the like), skin, brain, veins and arteries, and further physical features of the wearer. Because of this advantageous positioning, a hearing assist device may be configured to detect various characteristics of a user's health. Furthermore, the detected characteristics may be used to treat health-related issues of the wearer, and to perform further health-related functions. As such, hearing assist devices may be worn even by users who do not have hearing problems, as a way to detect other health problems.

For instance, in embodiments, health monitoring technology may be incorporated into a hearing assist device to monitor the health of a wearer. Examples of health monitoring technology that may be incorporated in a hearing assist device include health sensors that determine (for example, sense/detect/measure/collect, or the like) various physical characteristics of the user, such as blood pressure, heart rate, temperature, humidity, blood oxygen level, skin galvanometric levels, brain wave information, arrhythmia onset detection, skin chemistry changes, falling down impacts, long periods of activity, or the like.

Sensor information resulting from the monitoring may be analyzed within the hearing assist device, or may be transmitted from the hearing assist device and analyzed at a remote location. For instance, the sensor information may be analyzed at a local computer, in a smart phone or other mobile device, or at a remote location, such as at a cloud-based server. In response to the analysis of the sensor information, instructions and/or other information may be communicated back to the wearer. Such information may be provided to the wearer by a display screen (for example, a desktop computer display, a smart phone display, a tablet computer display, a medical equipment display, or the like), by the hearing assist device itself (for example, by voice, beeps, or the like), or may be provided to the wearer in another manner. Medical personnel and/or emergency response personnel (for example, reachable at the 911 phone number) may be alerted when particular problems with the wearer are detected by the hearing assist device. The medical personnel may evaluate information received from the hearing assist device, and provide information back to the hearing assist device/wearer. The hearing assist device may provide the wearer with reminders, alarms, instructions, etc.

The hearing assist device may be configured with speech/voice recognition capability. For instance, the wearer may provide commands, such as by voice, to the hearing assist device. The hearing assist device may be configured to perform various audio processing functions to suppress background noise and/or other sounds, as well as amplifying other sounds, and may be configured to modify audio according to a particular frequency response of the hearing of the wearer. The hearing assist device may be configured to detect vibrations (for example, jaw movement of the wearer during talking), and may use the detected vibrations to aid in improving speech/voice recognition.

Hearing assist devices may be configured in various ways, according to embodiments. For instance, FIG. 1 shows a communication system 100 that includes a multi-sensor hearing assist device 102 that communicates with a near field communication (NFC)-enabled communications device 104, according to an exemplary embodiment. Hearing assist device 102 may be worn in association with the ear of a user, and may be configured to communicate with other devices, such as communications device 104. As shown in FIG. 1, hearing assist device 102 includes a plurality of sensors 106a and 106b, processing logic 108, an NFC transceiver 110, storage 112, and a rechargeable battery 114. These features of hearing assist device 102 are described as follows.

Sensors 106a and 106b are medical sensors that each sense a characteristic of the user and generate a corresponding sensor output signal. Although two sensors 106a and 106b are shown in hearing assist device 102 in FIG. 1, any number of sensors may be included in hearing assist device 102, including three sensors, four sensors, five sensors, etc. (e.g., tens of sensors, hundreds of sensors, etc.). Examples of sensors for sensors 106a and 106b include a blood pressure sensor, a heart rate sensor, a temperature sensor, a humidity sensor, a blood oxygen level sensor, a skin galvanometric level sensor, a brain wave information sensor, an arrhythmia onset detection sensor (for example, a chest strap with multiple sensor pads), a skin chemistry sensor, a motion sensor (e.g., to detect falling down impacts, long periods of activity, etc.), an air pressure sensor, etc. These and further types of sensors suitable for sensors 106a and 106b are further described elsewhere herein.

Processing logic 108 may be implemented in hardware (e.g., one or more processors, electrical circuits, etc.), or any combination of hardware with software and/or firmware. Processing logic 108 may receive sensor information from sensors 106a, 106b, etc., and may process the sensor information to generate processed sensor data. Processing logic 108 may execute one or more programs that define various operational characteristics, such as: (i) a sequence or order of retrieving sensor information from sensors of hearing assist device 102, (ii) sensor configurations and reconfigurations (via a preliminary setup or via adaptations over the course of time), (iii) routines by which particular sensor data is at least pre-processed, and (iv) one or more functions/actions to be performed based on particular sensor data values, etc.
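
By way of illustration only, the following is a minimal Python sketch of how such program-defined operation of processing logic 108 might be organized. The sensor names, scale factors, and thresholds are hypothetical assumptions chosen for this example and are not drawn from the disclosure.

```python
# Hypothetical sketch of program-defined operation of processing logic 108.
# Sensor names, scale factors, and thresholds are illustrative assumptions.

SENSOR_SEQUENCE = ["heart_rate", "temperature", "blood_oxygen"]        # (i) retrieval order
SCALE = {"heart_rate": 1.0, "temperature": 0.01, "blood_oxygen": 0.1}  # (ii) configuration
ALERT_RANGE = {"heart_rate": (40, 180), "temperature": (35.0, 39.5)}   # (iv) action triggers

def preprocess(name, raw_value):
    """(iii) Pre-process a raw reading, e.g., scale counts to engineering units."""
    return raw_value * SCALE[name]

def poll_once(read_sensor, act):
    """Retrieve each sensor in sequence, pre-process, and act on out-of-range values."""
    for name in SENSOR_SEQUENCE:
        value = preprocess(name, read_sensor(name))
        low, high = ALERT_RANGE.get(name, (float("-inf"), float("inf")))
        if not low <= value <= high:
            act(name, value)  # e.g., store, transmit to an external device, or alert
```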

For instance, processing logic 108 may store and/or access sensor data in storage 112, processed or unprocessed. Furthermore, processing logic 108 may access one or more programs stored in storage 112 for execution. Storage 112 may include one or more types of storage, including memory (e.g., random access memory (RAM), read only memory (ROM), etc.) that is volatile or non-volatile.

NFC transceiver 110 is configured to wirelessly communicate with a second device (for example, a local or remote supporting device), such as NFC-enabled communications device 104 according to NFC techniques. NFC uses magnetic induction between two loop antennas (e.g., coils, microstrip antennas, or the like) located within each other's near field, effectively forming an air-core transformer. As such, NFC communications occur over relatively short ranges (e.g., within a few centimeters), and are conducted at radio frequencies. For instance, in one example, NFC communications may be performed by NFC transceiver 110 at a 13.56 MHz frequency, with data transfers of up to 424 kilobits per second. In other embodiments, NFC transceiver 110 may be configured to perform NFC communications at other frequencies and data transfer rates. Examples of standards according to which NFC transceiver 110 may be configured to conduct NFC communications include ISO/IEC 18092 and those defined by the NFC Forum, which was founded in 2004 by Nokia, Philips and Sony.

NFC-enabled communications device 104 may be configured with an NFC transceiver to perform NFC communications. NFC-enabled communications device 104 may be any type of device that may be enabled with NFC capability, such as a docking station, a desktop computer (e.g., a personal computer, etc.), a mobile computing device (e.g., a personal digital assistant (PDA), a laptop computer, a notebook computer, a tablet computer (e.g., an Apple iPad™), a netbook, etc.), a mobile phone (e.g., a cell phone, a smart phone, etc.), a medical appliance, etc. Furthermore, NFC-enabled communications device 104 may be network-connected to enable hearing assist device 102 to communicate with entities over the network (e.g., cloud computers or servers, web services, etc.).

NFC transceiver 110 enables sensor data (processed or unprocessed) to be transmitted by processing logic 108 from hearing assist device 102 to NFC-enabled communications device 104. In this manner, the sensor data may be reported, processed, and/or analyzed externally to hearing assist device 102. Furthermore, NFC transceiver 110 enables processing logic 108 at hearing assist device 102 to receive data and/or instructions/commands from NFC-enabled communications device 104 in response to the transmitted sensor data. Furthermore, NFC transceiver 110 enables processing logic 108 at hearing assist device 102 to receive programs (e.g., program code), including new programs, program updates, applications, “apps”, and/or other programs from NFC-enabled communications device 104 that can be executed by processing logic 108 to change/update the functionality of hearing assist device 102.
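
By way of illustration, messages arriving from the supporting device might be dispatched roughly as sketched below. The message layout (a type byte followed by a payload) and the handler names are assumptions made solely for this example, not part of the disclosure.

```python
# Illustrative dispatch of data/commands/program updates received over NFC.
# The message layout (type byte + payload) and handlers are assumptions.

def handle_nfc_message(message: bytes, device):
    msg_type, payload = message[0], message[1:]
    if msg_type == 0x01:                 # command, e.g., change an operating mode
        device.execute_command(payload)
    elif msg_type == 0x02:               # data returned in response to sensor data
        device.store_data(payload)
    elif msg_type == 0x03:               # program/app update to change functionality
        device.install_program(payload)  # executed later by processing logic 108
    else:
        raise ValueError(f"unknown message type {msg_type:#x}")
```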

Rechargeable battery 114 includes one or more electrochemical cells that store charge that may be used to power components of hearing assist device 102, including one or more of sensors 106a, 106b, etc., processing logic 108, NFC transceiver 110, and storage 112. Rechargeable battery 114 may be any suitable rechargeable battery type, including lead-acid, nickel cadmium (NiCd), nickel metal hydride (NiMH), lithium ion (Li-ion), and lithium ion polymer (Li-ion polymer). Charging of rechargeable battery 114 may be performed through a typical tethered recharger or via NFC power delivery.

Although NFC communications are shown, alternative communication approaches can be employed. Such alternatives may include wireless power transfer schemes as well.

Hearing assist device 102 may be configured in any manner to be associated with the ear of a user. For instance, FIGS. 2-4 show various configurations for associating a hearing assist device with an ear of a user, according to exemplary embodiments. In FIG. 2, hearing assist device 102 may be a hearing aid type that fits and is inserted partially or fully in an ear 202 of a user. As shown in FIG. 2, hearing assist device 102 includes sensors 106a-106n that contact the user. Example forms of hearing assist device 102 of FIG. 2 include ear buds, “receiver in the canal” hearing aids, “in the ear” (ITE) hearing aids, “invisible in canal” (IIC) hearing aids, “completely in canal” (CIC) hearing aids, etc. Although not illustrated, cochlear implant configurations may also be used.

In FIG. 3, hearing assist device 102 may be a hearing aid type that mounts on top of, or behind ear 202 of the user. As shown in FIG. 3, hearing assist device 102 includes sensors 106a-106n that contact the user. Example forms of hearing assist device 102 of FIG. 3 include “behind the ear” (BTE) hearing aids, “open fit” or “over the ear” (OTE) hearing aids, eyeglasses hearing aids (e.g., that contain hearing aid functionality in or on the glasses arms), etc.

In FIG. 4, hearing assist device 102 may be a headset or headphones that mount on the head of the user and include speakers that are held close to the user's ears. As shown in FIG. 4, hearing assist device 102 includes sensors 106a-106n that contact the user. In the embodiment of FIG. 4, sensors 106a-106n may be spaced further apart in the headphones, including being dispersed in the ear pad(s) and/or along the headband that connects together the ear pads (when a headband is present).

It is noted that hearing assist device 102 may be configured in further forms, including combinations of the forms shown in FIGS. 2-4, and is not intended to be limited to the embodiments illustrated in FIGS. 2-4. For instance, hearing assist device 102 may be a cochlear implant-type hearing aid, or other type of hearing assist device. The following section describes some example forms of hearing assist device 102 with associated sensor configurations.

III. Example Hearing Assist Device Forms and Sensor Array Embodiments

As described above, hearing assist device 102 may be configured in various forms, and may include any number and type of sensors. For instance, FIG. 5 shows a hearing assist device 500 that is an example of hearing assist device 102 according to an exemplary embodiment. Hearing assist device 500 is configured to mount over an ear of a user, and has a portion that is at least partially inserted into the ear. A user may wear a single hearing assist device 500 on one ear, or may simultaneously wear first and second hearing assist devices 500 on the user's right and left ears, respectively.

As shown in FIG. 5, hearing assist device 500 includes a case or housing 502 that includes a first portion 504, a second portion 506, and a third portion 508. First portion 504 is shaped to be positioned behind/over the ear of a user. For instance, as shown in FIG. 5, first portion 504 has a crescent shape, and may optionally be molded in the shape of a user's outer ear (e.g., by taking an impression of the outer ear, etc.). Second portion 506 extends perpendicularly from a side of an end of first portion 504. Second portion 506 is shaped to be inserted at least partially into the ear canal of the user. Third portion 508 extends from second portion 506, and may be referred to as an earmold shaped to conform to the user's ear shape, to better adhere hearing assist device 500 to the user's ear.

As shown in FIG. 5, hearing assist device 500 further includes a speaker 512, a forward IR/UV (ultraviolet) communication transceiver 520, a BTLE (BLUETOOTH low energy) antenna 522, at least one microphone 524, a telecoil 526, a tethered sensor port 528, a skin communication conductor 534, a volume controller 540, and a communication and power delivery coil 542. Furthermore, hearing assist device 500 includes a plurality of medical sensors and related components, including at least one pH sensor 510, an IR (infrared) or sonic distance sensor 514, an inner ear temperature sensor 516, a position/motion sensor 518, a WPT (wireless power transfer)/NFC coil 530, a switch 532, a glucose spectroscopy sensor 536, a heart rate sensor 538, and a subcutaneous sensor 544. In embodiments, hearing assist device 500 may include one or more of these further features and/or alternative features. The features of hearing assist device 500 are described as follows.

As shown in FIG. 5, speaker 512, IR or sonic distance sensor 514, and inner ear temperature sensor 516 are located on a circular surface of second portion 506 of hearing assist device 500 that faces into the ear of the user. Position/motion sensor 518 and pH sensor 510 are located on a perimeter surface of second portion 506 around the circular surface that contacts the ear canal of the user. In alternative embodiments, one or more of these features may be located in/on different locations of hearing assist device 500.

pH sensor 510 is a sensor that may be present to measure a pH of skin of the user's inner ear. The measured pH value may be used to determine a medical problem of the user, such as an onset of stroke. pH sensor 510 may include one or more metallic plates. Upon receiving power (e.g., from rechargeable battery 114 of FIG. 1), pH sensor 510 may generate a sensor output signal (e.g., an electrical signal) that indicates a measured pH value.

Speaker 512 (also referred to as a “loudspeaker”) is a speaker of hearing assist device 500 that broadcasts environmental sound received by microphone(s) 524, subsequently amplified and/or filtered by processing logic of hearing assist device 500, into the ear of the user to assist the user in hearing the environmental sound. Furthermore, speaker 512 may broadcast additional sounds into the ear of the user for the user to hear, including alerts (e.g., tones, beeping sounds), voice, and/or further sounds that may be generated by or received by processing logic of hearing assist device 500, and/or may be stored in hearing assist device 500.

IR or sonic distance sensor 514 is a sensor that may be present to sense a displacement distance. Upon receiving power, IR or sonic distance sensor 514 may generate an IR light pulse, a sonic (e.g., ultrasonic) pulse, or other light or sound pulse, that may be reflected in the ear of the user, and the reflection may be received by IR or sonic distance sensor 514. A time of reflection may be compared for a series of pulses to determine a displacement distance within the ear of user. IR or sonic distance sensor 514 may generate a sensor output signal (e.g., an electrical signal) that indicates a measured displacement distance.
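For instance, such round-trip pulse timings might be reduced to distances, and a series of distances to a displacement, roughly as in the following sketch. The sketch assumes an ultrasonic pulse in air; the speed constant would differ for other pulse types.

```python
# Sketch: convert pulse round-trip times into a displacement estimate.
SPEED_OF_SOUND_M_S = 343.0  # ultrasonic pulse in air at roughly 20 degrees C

def distance_from_round_trip(round_trip_s: float) -> float:
    """Distance to the reflecting surface for one pulse (half the round trip)."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

def displacement(round_trips_s):
    """Displacement between the first and last pulses of a series."""
    distances = [distance_from_round_trip(t) for t in round_trips_s]
    return distances[-1] - distances[0]
```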

A distance and eardrum deflection that is determined using IR or sonic distance sensor 514 (e.g., by using a high rate sampling or continuous sampling) may be used to calculate an estimate of the “actual” or “true” decibel level of an audio signal being input to the ear of the user. By incorporating such functionality, hearing assist device 500 can perform the following when a user inserts and turns on hearing assist device 500: (i) automatically adjust the volume to fall within a target range; and (ii) prevent excess volume associated with unexpected loud sound events. It is noted that the amount of volume adjustment that may be applied can vary by frequency. It is also noted that the excess volume associated with unexpected loud sound events may be further prevented by using a hearing assist device that has a relatively tight fit, thereby allowing the hearing assist device to act as an ear plug.
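A minimal sketch of such frequency-dependent volume adjustment follows; the target range, band structure, and step size are invented for illustration and are not taken from the disclosure.

```python
# Sketch: keep each frequency band's estimated eardrum level within a target range.
# Band structure, target range, and step size are illustrative assumptions.

TARGET_DB = (55.0, 85.0)   # acceptable estimated level at the eardrum
MAX_STEP_DB = 3.0          # limit how fast gain changes per adjustment

def adjust_band_gains(estimated_db_per_band, gains_db):
    """Nudge each band's gain so the estimated eardrum level stays in range."""
    low, high = TARGET_DB
    for band, level_db in estimated_db_per_band.items():
        if level_db > high:          # prevent excess volume (e.g., loud sound events)
            gains_db[band] -= min(MAX_STEP_DB, level_db - high)
        elif level_db < low:         # raise quiet bands toward the target range
            gains_db[band] += min(MAX_STEP_DB, low - level_db)
    return gains_db
```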

Hearing efficiency and performance data over the spectrum of normal audible frequencies can be gathered by delivering each frequency (or frequency range) at an output volume level, measuring eardrum deflection characteristics, and delivering audible test questions to the user via hearing assist device 500. This can be accomplished solely by hearing assist device 500 or with assistance from a smartphone or other external device or service. For example, a user may respond to an audio (or textual) prompt “Can you hear this?” with a “yes” or “no” response. The response is received by microphone(s) 524 (or via touch input, for example) and processed internally or on an assisting external device to identify the response. Depending on the user's response, the amplitude of the audio output can be adjusted to determine a given user's hearing threshold for each frequency (or frequency range). From this hearing efficiency and performance data, input frequency equalization can be performed by hearing assist device 500 so as to deliver to the user audio signals that will be perceived in much the same way as someone with no hearing impairment would perceive them. In addition, such data can be delivered to the assisting external device (e.g., to a smartphone) for use by such device in producing audio output for the user. For example, the assisting device can deliver an adjusted audio output tailored for the user if (i) the user is not wearing hearing assist device 500, (ii) the battery power of hearing assist device 500 is depleted, (iii) hearing assist device 500 is powered down, or (iv) hearing assist device 500 is operating in a lower power mode. In such situations, the supporting device can deliver the audio signal: (a) in an audible form via a speaker, generated with the intent of directly reaching the eardrum; (b) in an audible form intended for receipt and amplification control by hearing assist device 500 without further need for user-specific audio equalization; or (c) in a non-audible form (e.g., an electromagnetic transmission) for receipt and conversion to an audible form by hearing assist device 500, again without further equalization.
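
The audiometric sweep described above might be organized as in the following sketch, in which tone delivery and response capture are stubbed out as function parameters; the frequency list, step sizes, and reference threshold are assumptions made for illustration.

```python
# Sketch: estimate a per-frequency hearing threshold from yes/no responses,
# then derive equalization gains relative to a reference threshold.

TEST_FREQS_HZ = [250, 500, 1000, 2000, 4000, 8000]
REFERENCE_THRESHOLD_DB = 20.0  # assumed "unimpaired" threshold for this sketch

def find_threshold(play_tone, heard, freq_hz, start_db=40.0, step_db=5.0):
    """Lower the tone until the user stops hearing it; raise it if never heard."""
    level = start_db
    play_tone(freq_hz, level)
    while heard() and level > 0:
        level -= step_db
        play_tone(freq_hz, level)
    while not heard() and level < 100:
        level += step_db
        play_tone(freq_hz, level)
    return level

def equalization_gains(play_tone, heard):
    """Gain per frequency that maps the user's thresholds onto the reference."""
    return {f: find_threshold(play_tone, heard, f) - REFERENCE_THRESHOLD_DB
            for f in TEST_FREQS_HZ}
```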

After testing and setup, a wearer may further tweak their recommended equalization via slide bars and the like, in a manner similar to adjusting equalization for other conventional audio equipment. Such tweaking can be carried out via the supporting device user interface. In addition, a plurality of equalization settings can be supported, with each being associated with a particular mode of operation of hearing assist device 500. That is, conversation in a quiet room with one other person might receive one equalization profile, while a concert hall might receive another. Modes can be selected in many automatic or commanded ways via either or both of hearing assist device 500 and the external supporting device. Automatic selection can be performed via analysis and classification of captured audio. Certain classifications may trigger selection of a particular mode. Commands may be delivered via any user input interface, such as voice input (voice-recognized commands), tactile input commands, etc.

Audio modes may also comprise alternate or additional audio processing techniques. For example, in one mode, to enhance audio perspective and directionality, delays might be selectively introduced (or increased in a stereoscopic manner) to enhance a wearer's ability to discern the location of an audio source. Sensor data may support automatic mode selection in such situations. Detecting walking impacts and an outdoor GPS (Global Positioning System) location might automatically trigger such an enhanced perspective mode. A medical condition might trigger another mode which attenuates environmental audio while delivering synthesized voice commands to the wearer. In another exemplary mode, both echoes and delays might be introduced to simulate a theater environment. For example, when audio is being sourced by a television channel broadcast of a movie, the theater environment mode might be selected. Such selection may be in response to commands from a set top box, television, or media player, or by identifying one of the same as the audio source.
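
One way to represent such modes is as named parameter sets applied to the audio path, as in the following sketch; the mode names, parameter values, and selection rules are illustrative assumptions only.

```python
# Sketch: audio modes as DSP parameter sets. Values are illustrative assumptions.

AUDIO_MODES = {
    "quiet_conversation": {"eq_profile": "speech", "delay_ms": (0.0, 0.0), "echo": None},
    "concert_hall":       {"eq_profile": "music",  "delay_ms": (0.0, 0.0), "echo": None},
    # Enhanced perspective: a small interaural delay aids localizing sources.
    "outdoor_walking":    {"eq_profile": "speech", "delay_ms": (0.0, 0.4), "echo": None},
    # Theater simulation: echoes plus delay, e.g., for a televised movie.
    "theater":            {"eq_profile": "movie",  "delay_ms": (5.0, 5.0),
                           "echo": {"delay_ms": 40.0, "gain": 0.25}},
}

def select_mode(classified_audio, sensor_hints):
    """Automatic mode selection from audio classification and sensor data."""
    if sensor_hints.get("walking") and sensor_hints.get("outdoors"):
        return "outdoor_walking"
    if classified_audio == "movie_broadcast":
        return "theater"
    return "quiet_conversation"
```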

This and other similar functionality can be carried out by one or both of hearing assist device 500 and an external supporting device. When assisting hearing assist device 500, the external supporting device may receive the audio for processing: (i) directly via built-in microphones; (ii) from storage; or (iii) via yet another external device. Alternatively, the source audio may be captured by hearing assist device 500 itself and delivered via a wired or wireless pathway to the external supporting device for processing before delivery of either the processed audio signals or substitute audio back to hearing assist device 500 for delivery to the wearer.

Similarly, sensor data may be captured in one or both of hearing assist device 500 and an external supporting device. Sensor data captured by hearing assist device 500 may likewise be delivered via such or other wired or wireless pathways to the external supporting device for (further) processing. The external supporting device may then respond to the sensor data received and processed by delivering audio content and/or hearing aid commands back to hearing assist device 500. Such commands may reconfigure some aspect of hearing assist device 500 or manage communication or power delivery. Such audio content may be instructional, comprise queries, or consist of commands to be delivered to the wearer via the ear drums. Sensor data may be stored and displayed in some form locally on the external supporting device, along with similar audio, graphical or textual content, commands, or queries. In addition, such sensor data can be further delivered to yet other external supporting devices for further processing, analysis, and storage. Sensors within one or both of hearing assist device 500 and an external supporting device may be medical sensors or environmental sensors (e.g., latitude/longitude, velocity, temperature, the wearer's physical orientation, acceleration, elevation, tilt, humidity, etc.).

Although not shown, hearing assist device 500 may also be configured with an imager that may be located near transceiver 520. The imager can then be used to capture images or video that may be relayed to one or more external supporting devices for real-time display, storage, or processing. For example, upon detecting a medical situation and receiving no response to audible content queries delivered via hearing assist device 500, the imager can be commanded (the command being of internal or external origin) to capture an image or a video sequence. Such imager output can be delivered to medical staff via a user's supporting smartphone so that a determination can be made as to the user's condition or the position/location of hearing assist device 500.

Inner ear temperature sensor 516 is a sensor that may be present to measure a temperature of the user. For instance, in an embodiment, upon receiving power, inner ear temperature sensor 516 may use a lens to measure inner ear temperature. IR light emitted by an IR light emitter may be reflected from the user's skin, such as at the ear canal or ear drum, and received by a single temperature sensor element, a one-dimensional array of temperature sensor elements, a two-dimensional array of temperature sensor elements, or another configuration of temperature sensor elements. Inner ear temperature sensor 516 may generate a sensor output signal (e.g., an electrical signal) that indicates a measured inner ear temperature.

Such a configuration may also be used to determine a distance to the user's ear drum. The IR light emitter and sensor may be used to determine a distance to the user's ear drum from hearing assist device 500, which may be used by processing logic to automatically control a volume of sound emitted from hearing assist device 500, as well as for other purposes. Furthermore, the IR light emitter/sensor may also be used as an imager that captures an image of the inside of the user's ear. This could be used to identify characteristics of vein structures inside the user's ear, for example. The IR light emitter/sensor could also be used to detect the user's heartbeat, as well as to perform further functions.

Position/motion sensor 518 includes one or more sensors that may be present to measure time of day, location, acceleration, orientation, vibrations, and/or other movement-related characteristics of the user. For instance, position/motion sensor 518 may include one or more of a GPS (global positioning system) receiver (to measure user position), an accelerometer (to measure acceleration of the user), a gyroscope (to measure orientation of the head of the user), a magnetometer (to determine a direction the user is facing), a vibration sensor (for example, a micro-electromechanical system (MEMS) vibration sensor), or the like. Position/motion sensor 518 may be used for various benefits, including determining whether a user has fallen (e.g., based on measured position, acceleration, orientation, etc.), for local VoD, and many more benefits. Position/motion sensor 518 may generate a sensor output signal (e.g., an electrical signal) that indicates one or more of the measured time of day, location, acceleration, orientation, vibration, etc.
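
A simplistic accelerometer-based fall check consistent with this description might look like the following sketch; the impact threshold, stillness tolerance, and window length are assumptions invented for illustration.

```python
# Sketch: detect a fall as a large impact followed by little movement.
# Thresholds and window length are illustrative assumptions.

import math

IMPACT_G = 2.5      # acceleration magnitude suggesting an impact
STILL_G = 0.15      # deviation from 1 g suggesting the user is motionless
STILL_SAMPLES = 50  # samples of stillness required after the impact

def fall_detected(samples):
    """samples: sequence of (ax, ay, az) accelerometer readings in units of g."""
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    for i, m in enumerate(mags):
        if m >= IMPACT_G:
            after = mags[i + 1 : i + 1 + STILL_SAMPLES]
            if len(after) == STILL_SAMPLES and all(abs(x - 1.0) <= STILL_G for x in after):
                return True
    return False
```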

The sensor information indicated by position/motion sensor 518 and/or other sensors may be used for various purposes. For instance, position/motion information may be used to determine that the user has fallen down/collapsed. In response, voice and/or video assist (e.g., by a handheld device in communication with hearing assist device 500) may be used to gather feedback from the user (e.g., to find out if they are okay, and/or to further supplement the sensor data collection that triggered the feedback request). Such sensor data and feedback information, if warranted, can be automatically forwarded to medical staff, ambulance services, and/or family members, for example, as described elsewhere herein. The analysis of the data that triggered the forwarding process may be performed in whole or in part on one or both hearing assist devices 500, and/or on the assisting local device (e.g., a smart phone, tablet computer, set top box, TV, etc., in communication with hearing assist device 500), and/or on remote computing systems (e.g., at medical staff offices or as might be available through a cloud or portal service).

As shown in FIG. 5, forward IR/UV (ultraviolet) communication transceiver 520, BTLE antenna 522, microphone(s) 524, telecoil 526, tethered sensor port 528, WPT/NFC coil 530, switch 532, skin communication conductor 534, glucose spectroscopy sensor 536, a heart rate sensor 538, volume controller 540, and communication and power delivery coil 542 are located at different locations in/on the first portion 504 of hearing assist device 500. In alternative embodiments, one or more of these features may be located in/on different locations of hearing assist device 500.

Forward IR/UV communication transceiver 520 is a communication mechanism that may be present to enable communications with another device, such as a smart phone, computer, etc. Forward IR/UV communication transceiver 520 may receive information/data from processing logic of hearing assist device 500 to be transmitted to the other device in the form of modulated light (e.g., IR light, UV light, etc.), and may receive information/data in the form of modulated light from the other device to be provided to the processing logic of hearing assist device 500. Forward IR/UV communication transceiver 520 may enable low power communications for hearing assist device 500, to reduce a load on a battery of hearing assist device 500. In an embodiment, an emitter/receiver of forward IR/UV communication transceiver 520 may be positioned on housing 502 to be facing forward in a direction a wearer of hearing assist device 500 faces. In this manner, the forward IR/UV communication transceiver 520 may communicate with a device held by the wearer, such as a smart phone, a tablet computer, etc., to provide text to be displayed to the wearer, etc.

BTLE antenna 522 is a communication mechanism coupled to a Bluetooth™ transceiver in hearing assist device 500 that may be present to enable communications with another device, such as a smart phone, computer, etc. BTLE antenna 522 may receive information/data from processing logic of hearing assist device 500 to be transmitted to the other device according to the Bluetooth™ specification, and may receive information/data transmitted according to the Bluetooth™ specification from the other device to be provided to the processing logic of hearing assist device 500.

Microphone(s) 524 is a sensor that may be present to receive environmental sounds, including voice of the user, voice of other persons, and other sounds in the environment (e.g., traffic noise, music, etc.). Microphone(s) 524 may include any number of microphones, and may be configured in any manner, including being omni-directional (non-directional), directional, etc. Microphone(s) 524 generates an audio signal based on the received environmental sound that may be processed and/or filtered by processing logic of hearing assist device 500, may be stored in digital form in hearing assist device 500, may be transmitted from hearing assist device 500, and may be used in other ways.

Telecoil 526 is a communication mechanism that may be present to enable communications with another device. Telecoil 526 is an audio induction loop that enables audio sources to be directly coupled to hearing assist device 500 in a manner known to persons skilled in the relevant art(s). Telecoil 526 may be used with a telephone, a radio system, and induction loop systems that transmit sound to hearing aids.

Tethered sensor port 528 is a port that a remote sensor (separate from hearing assist device 500) may be coupled with to interface with hearing assist device 500. For instance, port 528 may be an industry standard or proprietary connector type. A remote sensor may have a tether (one or more wires) with a connector at an end that may be plugged into port 528. Any number of tethered sensor ports 528 may be present. Examples of sensor types that may interface with tethered sensor port 528 include brainwave sensors (e.g., electroencephalography (EEG) sensors that record electrical activity along the scalp according to EEG techniques) attached to the user's scalp, heart rate/arrhythmia sensors attached to a chest of the user, etc.

WPT/NFC coil 530 is a communication mechanism coupled to an NFC transceiver in hearing assist device 500 that may be present to enable communications with another device, such as a smart phone, computer, etc., as described above with respect to NFC transceiver 110 (FIG. 1).

Switch 532 is a switching mechanism that may be present on housing 502 to perform various functions, such as switching power on or off, switching between different power and/or operational modes, etc. A user may interact with switch 532 to switch power on or off, to switch between modes, etc. Switch 532 may be any type of switch, including a toggle switch, a push button switch, a rocker switch, a three-(or greater) position switch, a dial switch, etc.

Skin communication conductor 534 is a communication mechanism coupled to a transceiver in hearing assist device 500 that may be present to enable communications with another device, such as a smart phone, computer, etc., through skin of the user. For instance, skin communication conductor 534 may enable communications to flow between hearing assist device 500 and a smart phone held in the hand of the user, a second hearing assist device worn on an opposite ear of the user, a pacemaker or other device implanted in the user, or other communications device in communication with skin of the user. A transceiver of hearing assist device 500 may receive information/data from processing logic to be transmitted from skin communication conductor 534 through the user's skin to the other device, and the transceiver may receive information/data at skin communication conductor 534 that was transmitted from the other device through the user's skin to be provided to the processing logic of hearing assist device 500.

Glucose spectroscopy sensor 536 is a sensor that may be present to measure a glucose level of the user using spectroscopy techniques in a manner known to persons skilled in the relevant art(s). Such a measurement may be valuable in determining whether a user has diabetes. Such a measurement can also be valuable in helping a diabetic user determine whether insulin is needed, etc. (e.g., hypoglycemia or hyperglycemia). Glucose spectroscopy sensor 536 may be configured to monitor glucose in combination with subcutaneous sensor 544. As shown in FIG. 5, subcutaneous sensor 544 is shown separate from, and proximate to, hearing assist device 500. In an alternative embodiment, subcutaneous sensor 544 may be located in/on hearing assist device 500. Subcutaneous sensor 544 is a sensor that may be present to measure any attribute of a user's health, characteristics, or status. For example, subcutaneous sensor 544 may be a glucose sensor implanted under the skin behind the ear so as to provide a reasonably close mating location with communication and power delivery coil 542. When powered, glucose spectroscopy sensor 536 may measure the user's glucose level with respect to subcutaneous sensor 544, and may generate a sensor output signal (e.g., an electrical signal) that indicates a glucose level of the user.

Heart rate sensor 538 is a sensor that may be present to measure a heart rate of the user. For instance, in an embodiment, upon receiving power, heart rate sensor 538 may measure pressure changes with respect to a blood vessel in the ear, or may measure heart rate in another manner, such as via changes in reflectivity or otherwise, as would be known to persons skilled in the relevant art(s). Missed beats, elevated heart rate, and further heart conditions may be detected in this manner. Heart rate sensor 538 may generate a sensor output signal (e.g., an electrical signal) that indicates a measured heart rate. In addition, subcutaneous sensor 544 might comprise at least a portion of an internal heart monitoring device which communicates heart status information and data via communication and power delivery coil 542. Subcutaneous sensor 544 could also be associated with or be part of a pacemaker or defibrillating implant, insulin pump, etc.

Volume controller 540 is a user interface mechanism that may be present on housing 502 to enable a user to modify a volume at which sound is broadcast from speaker 512. A user may interact with volume controller 540 to increase or decrease the volume. Volume controller 540 may be any suitable controller type (e.g., a potentiometer), including a rotary volume dial, a thumb wheel, etc.

Instead of supporting both power delivery and communications, communication and power delivery coil 542 may be dedicated to one or the other. For example, such coil may only support power delivery (if needed to charge or otherwise deliver power to subcutaneous sensor 544), and can be replaced with any other type of communication system that supports communication with subcutaneous sensor 544. It is noted that the coils/antennas of hearing assist device 500 may be separately included in hearing assist device 500, or in embodiments, two or more of the coils/antennas may be combined as a single coil/antenna.

The processing logic of hearing assist device 500 may be operable to set up/configure and adaptively reconfigure each of the sensors of hearing assist device 500 based on an analysis of the data obtained by such sensor as well as on an analysis of data obtained by other sensors. For example, a first sensor of hearing assist device 500 may be configured to operate at one sampling rate (or sensing rate) which is analyzed periodically or continuously. Furthermore, a second sensor of hearing assist device 500 can be in a sleep or power down mode to conserve battery power. When a threshold is exceeded or another triggering event occurs, such first sensor can be reconfigured by the processing logic of hearing assist device 500 to sample at a higher rate or continuously, and the second sensor can be powered up and configured. Additionally, multiple types of sensor data can be used to construct or derive single conclusions. For example, heart rate can be gathered multiple ways (via multiple sensors) and combined to provide a more robust and trustworthy conclusion. Likewise, a combination of data obtained from different sensors (e.g., pH plus temperature plus horizontal posture plus impact detected plus weak heart rate) may result in an ambulance being called or indicate a possible heart attack. Or, if glucose is too high, hyperglycemia may be indicated, while if glucose is too low, hypoglycemia may be indicated. Or, if glucose and heart data are acceptable, then a stroke may be indicated. This processing can be done in whole or in part within hearing assist device 500, with audio content being played to the wearer thereof to gather further voiced information from the wearer to assist in conclusions or to warn the wearer.
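
The reconfiguration and fusion behavior described above could be arranged along the following lines; the sensor names, sampling rates, trigger conditions, and fusion weighting are invented for illustration and are not drawn from the disclosure.

```python
# Sketch: threshold-triggered sensor reconfiguration and simple data fusion.
# Sampling rates, trigger thresholds, and fusion weights are assumptions.

def on_heart_rate_sample(hr_bpm, sensors):
    """Speed up the primary sensor and wake a second one on a trigger event."""
    if hr_bpm > 150 or hr_bpm < 45:
        sensors["heart_rate"].set_rate_hz(10.0)    # sample faster (or continuously)
        sensors["pulse_oximeter"].power_up()       # second sensor leaves sleep mode
        sensors["pulse_oximeter"].set_rate_hz(1.0)

def fused_heart_rate(estimates):
    """Combine heart-rate estimates gathered multiple ways into one conclusion.

    estimates: list of (bpm, confidence) pairs from different sensors."""
    total = sum(conf for _, conf in estimates)
    return sum(bpm * conf for bpm, conf in estimates) / total if total else None
```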

FIG. 6 shows a hearing assist device 600 that is an example of hearing assist device 102 according to an exemplary embodiment. Hearing assist device 600 is configured to be at least partially inserted into the ear canal of a user (for example, an ear bud). A user may wear a single hearing assist device 600 on one ear, or may simultaneously wear first and second hearing assist devices 600 on the user's right and left ears, respectively.

As shown in FIG. 6, hearing assist device 600 includes a case or housing 602 that has a generally cylindrical shape, and includes a first portion 604, a second portion 606, and a third portion 608. First portion 604 is shaped to be inserted at least partially into the ear canal of the user. Second portion 606 extends coaxially from first portion 604. Third portion 608 is a handle that extends from second portion 606. A user grasps third portion 608 to extract hearing assist device 600 from the ear of the user.

As shown in FIG. 6, hearing assist device 600 further includes pH sensor 510, speaker 512, IR (infrared) or sonic distance sensor 514, inner ear temperature sensor 516, and an antenna 610. pH sensor 510, speaker 512, IR or sonic distance sensor 514, and inner ear temperature sensor 516 may function and be configured similarly as described above. Antenna 610 may include one or more coils or other types of antennas to function as any one or more of the coils/antennas described above with respect to FIG. 5 and/or elsewhere herein (e.g., an NFC antenna, a Bluetooth™ antenna, etc.).

It is noted that antennas, such as coils, mentioned herein may be implemented as any suitable type of antenna, including a coil, a microstrip antenna, or other antenna type. Although further sensors, communication mechanisms, switches, etc., of hearing assist device 500 of FIG. 5 are not shown included in hearing assist device 600, one or more further of these features of hearing assist device 500 may additionally and/or alternatively be included in hearing assist device 600. Furthermore, sensors that are present in a hearing assist device may all operate simultaneously, or one or more sensors may be run periodically, and may be off at other times (e.g., based on an algorithm in program code, etc.). By running fewer sensors at any one time, battery power may be conserved. Note that in addition to one or more of sensor data compression, analysis, encryption, and processing, sensor management (duty cycling, continuous operations, threshold triggers, sampling rates, etc.) can be performed in whole or in part in any one or both hearing assist devices, the assisting local device (e.g., smart phone, tablet computer, set top box, TV, etc.), and/or remote computing systems (at medical staff offices or as might be available through a cloud or portal service).

Hearing assist devices 102, 500, and 600 may be configured in various ways with circuitry to process sensor information, and to communicate with other devices. The next section describes some example circuit embodiments for hearing assist devices, as well as processes for communicating with other devices, and for further functionality.

IV. Example Hearing Assist Device Circuit and Process Embodiments

According to embodiments, hearing assist devices may be configured in various ways to perform their functions. For instance, FIG. 7 shows a circuit block diagram of a hearing assist device 700 that is configured to communicate with external devices according to multiple communication schemes, according to an exemplary embodiment. Hearing assist devices 102, 500, and 600 may each be implemented similarly to hearing assist device 700, according to embodiments.

As shown in FIG. 7, hearing assist device 700 includes a plurality of sensors 702a-702c, processing logic 704, a microphone 706, an amplifier 708, a filter 710, an analog-to-digital (A/D) converter 712, a speaker 714, an NFC coil 716, an NFC transceiver 718, an antenna 720, a Bluetooth™ transceiver 722, a charge circuit 724, a battery 726, a plurality of sensor interfaces 728a-728c, and a digital-to-analog (D/A) converter 764. Processing logic 704 includes a digital signal processor (DSP) 730, a central processing unit (CPU) 732, and a memory 734. Sensors 702a-702c, processing logic 704, amplifier 708, filter 710, A/D converter 712, NFC transceiver 718, Bluetooth™ transceiver 722, charge circuit 724, sensor interfaces 728a-728c, D/A converter 764, DSP 730, and CPU 732 may each be implemented in the form of hardware (e.g., electrical circuits, digital logic, etc.) or a combination of hardware and software/firmware. The features of hearing assist device 700 shown in FIG. 7 are described as follows.

Hearing aid functionality of hearing assist device 700 is first described. In FIG. 7, microphone 706, amplifier 708, filter 710, A/D converter 712, processing logic 704, D/A converter 764, and speaker 714 provide at least some of the hearing aid functionality of hearing assist device 700. Microphone 706 is a sensor that receives environmental sounds, including voice of the user of hearing assist device 700, voice of other persons, and other sounds in the environment (e.g., traffic noise, music, etc.). Microphone 706 may be configured in any manner, including being omni-directional (non-directional), directional, etc., and may include one or more microphones. Microphone 706 may be a miniature microphone conventionally used in hearing aids, as would be known to persons skilled in the relevant art(s), or may be another suitable type of microphone. Microphone(s) 524 (FIG. 5) is an example of microphone 706. Microphone 706 generates a received audio signal 740 based on the received environmental sound.

Amplifier 708 receives and amplifies received audio signal 740 to generate an amplified audio signal 742. Amplifier 708 may be any type of amplifier, including a low-noise amplifier for amplifying low level signals. Filter 710 receives and processes amplified audio signal 742 to generate a filtered audio signal 744. Filter 710 may be any type of filter, including a filter configured to filter out noise, other high frequencies, and/or other frequencies as desired. A/D converter 712 receives filtered audio signal 744, which may be an analog signal, and converts filtered audio signal 744 to digital form, to generate a digital audio signal 746. A/D converter 712 may be configured in any manner, including as a conventional A/D converter.

Processing logic 704 receives digital audio signal 746, and may process digital audio signal 746 in any manner to generate processed digital audio signal 762. For instance, as shown in FIG. 7, DSP 730 may receive digital audio signal 746, and may perform digital signal processing on digital audio signal 746 to generate processed digital audio signal 762. DSP 730 may be configured in any manner, including as a conventional DSP known to persons skilled in the relevant art(s), or in another manner. DSP 730 may perform any suitable type of digital signal processing to process/filter digital audio signal 746, including processing digital audio signal 746 in the frequency domain to manipulate the frequency spectrum of digital audio signal 746 (e.g., according to Fourier transform/analysis techniques, etc.). DSP 730 may amplify particular frequencies, may attenuate particular frequencies, and may otherwise modify digital audio signal 746 in the discrete domain. DSP 730 may perform the signal processing for various reasons, including noise cancellation or hearing loss compensation. For instance, DSP 730 may process digital audio signal 746 to compensate for a personal hearing frequency response of the user, such as compensating for poor hearing of high frequencies, middle range frequencies, or other personal frequency response characteristics of the user.
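
As a rough illustration of such frequency-domain manipulation, the following sketch applies a per-frequency gain curve to one block of audio samples using NumPy; the example gain curve stands in for a wearer's compensation profile and is an assumption, not a value from this disclosure.

```python
# Sketch: frequency-domain hearing-loss compensation for one audio block.
# The gain curve is an illustrative stand-in for a wearer's profile.

import numpy as np

def compensate_block(block, sample_rate_hz, gain_db_at_hz):
    """Amplify/attenuate frequencies of `block` per a {freq_hz: gain_db} curve."""
    spectrum = np.fft.rfft(block)
    freqs = np.fft.rfftfreq(len(block), d=1.0 / sample_rate_hz)
    curve_f = sorted(gain_db_at_hz)                 # band frequencies, ascending
    curve_g = [gain_db_at_hz[f] for f in curve_f]
    gains_db = np.interp(freqs, curve_f, curve_g)   # interpolate between bands
    spectrum *= 10.0 ** (gains_db / 20.0)           # dB to linear amplitude
    return np.fft.irfft(spectrum, n=len(block))

# Example: boost high frequencies for a wearer with high-frequency loss.
# out = compensate_block(block, 16000, {250: 0.0, 1000: 3.0, 4000: 12.0, 8000: 18.0})
```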

In one embodiment, DSP 730 may be pre-configured to process digital audio signal 746. In another embodiment, DSP 730 may receive instructions from CPU 732 regarding how to process digital audio signal 746. For instance, CPU 732 may access one or more DSP configurations stored in memory 734 (e.g., in other data 768) that may be provided to DSP 730 to configure DSP 730 for digital signal processing of digital audio signal 746. For instance, CPU 732 may select a DSP configuration based on a hearing assist mode selected by a user of hearing assist device 700 (e.g., by interacting with switch 532, etc.).

As shown in FIG. 7, D/A converter 764 receives processed digital audio signal 762, and converts processed digital audio signal 762 to digital form, generating processed audio signal 766. D/A converter 764 may be configured in any manner, including as a conventional D/A converter. Speaker 714 receives processed audio signal 766, and broadcasts sound generated based on processed audio signal 766 into the ear of the user. The user is enabled to hear the broadcast sound, which may be amplified, filtered, and/or otherwise frequency manipulated with respect to the sound received by microphone 706. Speaker 714 may be a miniature speaker conventionally used in hearing aids, as would be known to persons skilled in the relevant art(s), or may be another suitable type of speaker. Speaker 512 (FIG. 5) is an example of speaker 714. Speaker 714 may include one or more speakers.

Hearing assist device 700 of FIG. 7 is further described as follows with respect to FIGS. 8-14. FIG. 8 shows a flowchart 800 of a process for a hearing assist device that processes and transmits sensor data and receives a command from a second device, according to an exemplary embodiment. In an embodiment, hearing assist device 700 (as well as any of hearing assist devices 102, 500, and 600) may perform flowchart 800. Further structural and operational embodiments will be apparent to persons skilled in the relevant art(s) based on the following description of flowchart 800 and hearing assist device 700.

Flowchart 800 begins with step 802. In step 802, a sensor output signal is received from a medical sensor of the hearing assist device that senses a characteristic of the user. For example, as shown in FIG. 7, sensors 702a-702c may each sense/measure information about a health characteristic of the user of hearing assist device 700. Sensors 702a-702c may each be one of the sensors shown in FIGS. 5 and 6, and/or mentioned elsewhere herein. Although three sensors are shown in FIG. 7 for purposes of illustration, other numbers of sensors may be present in hearing assist device 700, including one sensor, two sensors, or greater numbers of sensors. Sensors 702a-702c each may generate a corresponding sensor output signal 758a-758c (e.g., an electrical signal) that indicates the measured information about the corresponding health characteristic. For instance, sensor output signals 758a-758c may be analog or digital signals having levels or values corresponding to the measured information.

Sensor interfaces 728a-728c are each optionally present, depending on whether the corresponding sensor outputs a sensor output signal that needs to be modified to be receivable by CPU 732. For instance, each of sensor interfaces 728a-728c may include an amplifier, filter, and/or A/D converter (e.g., similar to amplifier 708, filter 710, and A/D converter 712) that respectively amplify (e.g., increase or decrease), filter (e.g., reduce particular frequencies), and/or convert to digital form the corresponding sensor output signal. Sensor interfaces 728a-728c (when present) respectively output modified sensor output signals 760a-760c.

In step 804, the sensor output signal is processed to generate processed sensor data. For instance, as shown in FIG. 7, processing logic 704 receives modified sensor output signals 760a-760c. Processing logic 704 may process modified sensor output signals 760a-760c in any manner to generate processed sensor data. For instance, as shown in FIG. 7, CPU 732 may receive modified sensor output signals 760a-760c. CPU 732 may process the sensor information in one or more of modified sensor output signals 760a-760c to generate processed sensor data. For instance, CPU 732 may manipulate the sensor information (e.g., according to an algorithm of code 738) to convert the sensor information into a presentable form (e.g., scaling the sensor information, adding or subtracting a constant to/from the sensor information, etc.). Furthermore, CPU 732 may transmit the sensor information of modified sensor output signals 760a-760c to DSP 730 to be digital signal processed by DSP 730 to generate processed sensor data, and may receive the processed sensor data from DSP 730. The processed and/or raw (unprocessed) sensor data may optionally be stored in memory 734 (e.g., as sensor data 736).
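
As a simple illustration of the scaling described above for step 804, the following hypothetical sketch converts a raw sensor reading into presentable form by applying a scale factor and an offset. The conversion constants are assumptions, not values from the figures.

    /* Illustrative conversion of a raw ADC sensor code into a
       presentable value, as CPU 732 might do per step 804. */
    #include <stdio.h>
    #include <stdint.h>

    /* e.g., map a 10-bit temperature ADC code to degrees Fahrenheit. */
    static double process_temp_sample(uint16_t raw_adc)
    {
        const double scale  = 0.125;   /* degrees per ADC count (assumed) */
        const double offset = -40.0;   /* sensor zero point (assumed) */
        return raw_adc * scale + offset;
    }

    int main(void)
    {
        printf("processed temperature: %.1f F\n", process_temp_sample(1100));
        return 0;
    }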

In step 806, the processed sensor data is wirelessly transmitted from the hearing assist device to a second device. For instance, as shown in FIG. 7, CPU 732 may provide the sensor data (processed or raw) (e.g., from CPU registers, from DSP 730, from memory 734, etc.) to a transceiver to be transmitted from hearing assist device 700. In the embodiment of FIG. 7, hearing assist device 700 includes an NFC transceiver 718 and a BT transceiver 722, which may each be used to transmit sensor data from hearing assist device 700. In alternative embodiments, hearing assist device 700 may include one or more additional and/or alternative transceivers that may transmit sensor data from hearing assist device 700, including a Wi-Fi transceiver, a forward IR/UV communication transceiver (e.g., transceiver 520 of FIG. 5), a telecoil transceiver (which may transmit via telecoil 526), a skin communication transceiver (which may transmit via skin communication conductor 534), etc. The operation of such alternative transceivers will become apparent to persons skilled in the relevant art(s) based on the teachings provided herein.

As shown in FIG. 7, NFC transceiver 718 may receive an information signal from CPU 732 that includes sensor data for transmitting. In an embodiment, NFC transceiver 718 may modulate the sensor data onto NFC antenna signal 748 to be transmitted from hearing assist device 700 by NFC coil 716 when NFC coil 716 is energized by an RF field generated by a second device.

Similarly, BT transceiver 722 may receive an information signal 754 from CPU 732 that includes sensor data for transmitting. In an embodiment, BT transceiver 722 may modulate the sensor data onto BT antenna signal 752 to be transmitted from hearing assist device 700 by antenna 720 (e.g., BTLE antenna 522 of FIG. 5), according to a Bluetooth™ communication protocol or standard.
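
The following hypothetical sketch illustrates step 806: packing processed sensor data into a small frame and handing it to a transceiver driver. The frame layout and the bt_transceiver_send() routine are assumptions standing in for whatever interface BT transceiver 722 (or NFC transceiver 718) actually exposes.

    /* Hypothetical sketch of handing a sensor-data frame to a
       transceiver driver for wireless transmission (step 806). */
    #include <stdint.h>
    #include <stdio.h>

    struct sensor_frame {
        uint8_t  sensor_id;     /* which of sensors 702a-702c */
        uint32_t timestamp;     /* device time of the reading (assumed) */
        int32_t  value_milli;   /* processed value in milli-units */
    };

    /* Stand-in for the real driver call (assumption, not a real API). */
    static void bt_transceiver_send(const void *buf, size_t len)
    {
        (void)buf;
        printf("sending %zu-byte frame\n", len);
    }

    int main(void)
    {
        struct sensor_frame f = { .sensor_id = 0, .timestamp = 123456,
                                  .value_milli = 72000 /* 72.0 bpm */ };
        bt_transceiver_send(&f, sizeof f);
        return 0;
    }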

In embodiments, a hearing assist device may communicate with one or more other devices to provide sensor data and/or other information, and to receive information. For instance, FIG. 9 shows a communication system 900 that includes a hearing assist device communicating with other communication devices, according to an exemplary embodiment. As shown in FIG. 9, communication system 900 includes hearing assist device 700, a mobile computing device 902, a stationary computing device 904, and a server 906. System 900 is described as follows.

Mobile computing device 902 (for example, a local supporting device) is a device capable of communicating with hearing assist device 700 according to one or more communication techniques. For instance, as shown in FIG. 9, mobile computing device 902 includes a telecoil 910, one or more microphones 912, an IR/UV communication transceiver 914, a WPT/NFC coil 916, and a Bluetooth™ antenna 918. In embodiments, mobile computing device 902 may include one or more of these features and/or alternative or additional features (e.g., communication mechanisms, etc.). Mobile computing device 902 may be any type of mobile electronic device, including a personal digital assistant (PDA), a laptop computer, a notebook computer, a tablet computer (e.g., an Apple iPad™), a netbook, a mobile phone (e.g., a cell phone, a smart phone, etc.), a special purpose medical device, etc. The features of mobile computing device 902 shown in FIG. 9 are described as follows.

Telecoil 910 is a communication mechanism that may be present to enable mobile computing device 902 to communicate with hearing assist device 700 via a telecoil (e.g., telecoil 526 of FIG. 5). For instance, telecoil 910 and an associated transceiver may enable mobile computing device 902 to couple audio sources and/or other communications to hearing assist device 700 in a manner known to persons skilled in the relevant art(s).

Microphone(s) 912 may be present to receive voice of a user of mobile computing device 902. For instance, the user may provide instructions for mobile computing device 902 and/or for hearing assist device 700 by speaking into microphone(s) 912. The received voice may be transmitted to hearing assist device 700 (in digital or analog form) according to any communication mechanism, or may be converted into data and/or commands to be provided to hearing assist device 700 to cause functions/actions in hearing assist device 700. Microphone(s) 912 may include any number of microphones, and may be configured in any manner, including being omni-directional (non-directional), directional, etc.

IR/UV communication transceiver 914 is a communication mechanism that may be present to enable communications with hearing assist device 700 via an IR/UV communication transceiver of hearing assist device 700 (e.g., forward IR/UV communication transceiver 520 of FIG. 5). IR/UV communication transceiver 914 may receive information/data from and/or transmit information/data to hearing assist device 700 (e.g., in the form of modulated light, as described above).

WPT/NFC coil 916 is an NFC antenna coupled to an NFC transceiver in mobile computing device 902 that may be present to enable NFC communications with an NFC communication mechanism of hearing assist device 700 (e.g., NFC transceiver 110 of FIG. 1, NFC coil 530 of FIG. 5). WPT/NFC coil 916 may be used to receive information/data from and/or transmit information/data to hearing assist device 700.

Bluetooth™ antenna 918 is a communication mechanism coupled to a Bluetooth™ transceiver in mobile computing device 902 that may be present to enable communications with hearing assist device 700 (e.g., BT transceiver 722 and antenna 720 of FIG. 7). Bluetooth™ antenna 918 may be used to receive information/data from and/or transmit information/data to hearing assist device 700.

As shown in FIG. 9, mobile computing device 902 and hearing assist device 700 may exchange communication signals 920 according to any communication mechanism/protocol/standard mentioned herein or otherwise known. According to step 806, hearing assist device 700 may wirelessly transmit sensor data to mobile computing device 902.

Stationary computing device 904 (for example, a local supporting device) is also a device capable of communicating with hearing assist device 700 according to one or more communication techniques. For instance, stationary computing device 904 may be capable of communicating with hearing assist device 700 according to any of the communication mechanisms shown for mobile computing device 902 in FIG. 9, and/or according to other communication mechanisms/protocols/standards described elsewhere herein or otherwise known. Stationary computing device 904 may be any type of stationary electronic device, including a desktop computer (e.g., a personal computer, etc.), a docking station, a set top box, a gateway device, an access point, special purpose medical equipment, etc.

As shown in FIG. 9, stationary computing device 904 and hearing assist device 700 may exchange communication signals 922 according to any communication mechanism/protocol/standard mentioned herein or otherwise known. According to step 806, hearing assist device 700 may wirelessly transmit sensor data to stationary computing device 904.

It is noted that mobile computing device 902 (and/or stationary computing device 904) may communicate with server 906 (for example, a remote supporting device, a third device). For instance, as shown in FIG. 9, mobile computing device 902 (and/or stationary computing device 904) may be communicatively coupled with server 906 by network 908. Network 908 may be any type of communication network, including a local area network (LAN), a wide area network (WAN), a personal area network (PAN), a phone network (e.g., a cellular network, a land based network), or a combination of communication networks, such as the Internet. Network 908 may include wired and/or wireless communication pathway(s) implemented using any of a wide variety of communication media and associated protocols. For example, such communication pathway(s) may comprise wireless communication pathways implemented via radio frequency (RF) signaling, infrared (IR) signaling, or the like. Such signaling may be carried out using long-range wireless protocols such as WIMAX® (IEEE 802.16) or GSM (Global System for Mobile Communications), medium-range wireless protocols such as WI-FI® (IEEE 802.11), and/or short-range wireless protocols such as BLUETOOTH® or any of a variety of IR-based protocols. Such communication pathway(s) may also comprise wired communication pathways established over twisted pair, Ethernet cable, coaxial cable, optical fiber, or the like, using suitable communication protocols therefor. It is noted that security protocols (e.g., private key exchange, etc.) may be used to protect sensitive health information that is communicated by hearing assist device 700 to and from remote devices.

Server 906 may be any computer system, including a stationary computing device, a server computer, a mobile computing device, etc. Server 906 may include a web service, an API (application programming interface), or other service or interface for communications.

Sensor data and/or other information may be transmitted (for example, relayed) to server 906 over network 908 to be processed. After such processing, server 906 may transmit, in response, processed data, instructions, and/or other information through network 908 to mobile computing device 902 (and/or stationary computing device 904) to be transmitted to hearing assist device 700 to be stored, to cause a function/action at hearing assist device 700, and/or for other reasons.

Referring back to FIG. 8, in step 808, at least one command is received from the second device at the hearing assist device. For instance, referring to FIG. 7, hearing assist device 700 may receive a command wirelessly transmitted in a communication signal from a second device at NFC coil 716, antenna 720, or other antenna or communication mechanism at hearing assist device 700. In the example of NFC coil 716, the command may be transmitted from NFC coil 716 on NFC antenna signal 748 to NFC transceiver 718. NFC transceiver 718 may demodulate command data from the received communication signal, and provide the command to CPU 732. In the example of antenna 720, the command may be transmitted from antenna 720 on BT antenna signal 752 to BT transceiver 722. BT transceiver 722 may demodulate command data from the received communication signal, and provide the command to CPU 732.

CPU 732 may execute the received command. The received command may cause hearing assist device 700 to perform one or more functions/actions. For instance, in embodiments, the command may cause hearing assist device 700 to turn on or off, to change modes, to activate or deactivate one or more sensors, to wirelessly transmit further information, to execute particular program code (e.g., stored as code 738 in memory 734), to play a sound (e.g., an alert, a tone, a beeping noise, pre-recorded or synthesized voice, etc.) from speaker 714 to the user to inform the user of information and/or cause the user to perform a function/action, and/or cause one or more additional and/or alternative functions/actions to be performed by hearing assist device 700. Further examples of such commands and functions/actions are described elsewhere herein.
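
For purposes of illustration, the following sketch shows one way that CPU 732 might dispatch a received command per step 808. The command codes and wire format are invented for the example; the description above does not define them.

    /* Hypothetical command dispatch of the kind CPU 732 might perform
       on command data demodulated by NFC transceiver 718 or
       BT transceiver 722. */
    #include <stdio.h>
    #include <stdint.h>

    enum ha_command {
        CMD_POWER_OFF, CMD_CHANGE_MODE, CMD_SENSOR_ON, CMD_SENSOR_OFF,
        CMD_SEND_DATA, CMD_PLAY_ALERT
    };

    static void execute_command(uint8_t code, uint8_t arg)
    {
        switch ((enum ha_command)code) {
        case CMD_CHANGE_MODE: printf("switching to mode %u\n", arg);  break;
        case CMD_SENSOR_ON:   printf("activating sensor %u\n", arg);  break;
        case CMD_PLAY_ALERT:  printf("playing alert tone %u\n", arg); break;
        default:              printf("unhandled command %u\n", code); break;
        }
    }

    int main(void)
    {
        execute_command(CMD_CHANGE_MODE, 2);
        return 0;
    }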

In embodiments, a hearing assist device may be configured to convert received RF energy into charge for storage in a battery of the hearing assist device. For instance, as shown in FIG. 7, hearing assist device 700 includes charge circuit 724 for charging battery 726, which is a rechargeable battery (e.g., rechargeable battery 114). In an embodiment, charge circuit 724 may operate according to FIG. 10. FIG. 10 shows a flowchart 1000 of a process for wirelessly charging a battery of a hearing assist device, according to an exemplary embodiment. Flowchart 1000 is described as follows.

In step 1002 of flowchart 1000, a radio frequency signal is received. For example, as shown in FIG. 7, NFC coil 716, antenna 720, and/or other antenna or coil of hearing assist device 700 may receive a radio frequency (RF) signal. The RF signal may be a communication signal that includes data (e.g., modulated on the RF signal), or may be an un-modulated RF signal. Charge circuit 724 may be coupled to one or more of NFC coil 716, antenna 720, or other antenna to receive the RF signal.

In step 1004, a charge current is generated that charges a rechargeable battery of the hearing assist device based on the received radio frequency signal. In an embodiment, charge circuit 724 is configured to generate a charge current 756 that is used to charge battery 726. Charge circuit 724 may be configured in various ways to convert a received RF signal to a charge current. For instance, charge circuit 724 may include an induction coil to take power from an electromagnetic field and convert it to electrical current. Alternatively, charge circuit 724 may include a diode rectifier circuit that rectifies the received RF signal to a DC (direct current) signal, and may include one or more charge pump circuits coupled to the diode rectifier circuit used to create a higher voltage value from the DC signal. Alternatively, charge circuit 724 may be configured in other ways to generate charge current 756 from a received RF signal.

In this manner, hearing assist device 700 may maintain power for operation, with battery 726 being charged periodically by RF fields generated by other devices, rather than needing to physically replace batteries.

In another embodiment, hearing assist device 700 may be configured to generate sound based on received sensor data. For instance, hearing assist device 700 may operate according to FIG. 11. FIG. 11 shows a flowchart 1100 of a process for generating and broadcasting sound based on sensor data, according to an exemplary embodiment. For purposes of illustration, flowchart 1100 is described as follows with reference to FIG. 7.

Flowchart 1100 begins with step 1102. In step 1102, an audio signal is generated based at least on the processed sensor data. For instance, as described above with respect to steps 802 and 804 of flowchart 800 (FIG. 8), a sensor output signal may be processed to generate processed sensor data. The processed sensor data may be stored in memory 734 as sensor data 736, may be held in registers in CPU 732, or may be present in another location. Audio data for one or more sounds (e.g., tones, beeping sounds, voice segments, etc.) may be stored in memory 734 (e.g., as other data 768) that may be selected for play to the user based on particular sensor data (e.g., particular values of sensor data, etc.). CPU 732 or DSP 730 may select the audio data corresponding to particular sensor data from memory 734. Alternatively, CPU 732 may transmit a request for the audio data from another device using a communication mechanism (e.g., NFC transceiver 718, BT transceiver 722, etc.). DSP 730 may receive the audio data from CPU 732, from memory 734, or from another device, and may generate processed digital audio signal 762 based thereon.
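
As one hypothetical illustration of step 1102, the following sketch selects a stored audio clip identifier based on a glucose sensor value. The thresholds and clip identifiers are assumptions made for the example.

    /* Illustrative selection of stored audio data based on particular
       sensor values, as CPU 732 or DSP 730 might do per step 1102. */
    #include <stdio.h>

    enum audio_clip { CLIP_NONE, CLIP_TONE_OK, CLIP_BEEP_WARN,
                      CLIP_VOICE_GLUCOSE_HIGH, CLIP_VOICE_GLUCOSE_LOW };

    static enum audio_clip clip_for_glucose(double mg_dl)
    {
        if (mg_dl < 70.0)  return CLIP_VOICE_GLUCOSE_LOW;   /* hypoglycemia */
        if (mg_dl > 180.0) return CLIP_VOICE_GLUCOSE_HIGH;  /* hyperglycemia */
        return CLIP_NONE;                                   /* in range */
    }

    int main(void)
    {
        printf("clip id: %d\n", (int)clip_for_glucose(205.0));
        return 0;
    }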

In step 1104, sound is generated based on the audio signal, the sound broadcast from a speaker of the hearing assist device into the ear of the user. For instance, as shown in FIG. 7, D/A converter 764 may be present, and may receive processed digital audio signal 762. D/A converter 764 may convert processed digital audio signal 762 to analog form to generate processed audio signal 766. Speaker 714 receives processed audio signal 766, and broadcasts sound generated based on processed audio signal 766 into the ear of the user.

In this manner, sounds may be provided to the user by hearing assist device 700 based at least on sensor data, and optionally further based on additional information. The sounds may provide information to the user, and may remind or instruct the user to perform a function/action. The sounds may include one or more of a tone, a beeping sound, or a voice that includes at least one of a verbal instruction to the user, a verbal warning to the user, or a verbal question to the user. For instance, a tone or a beeping sound may be provided to the user as an alert based on particular values of sensor data (e.g., indicating an out-of-range glucose/blood sugar value), and/or a voice instruction may be provided to the user as the alert based on the particular values of sensor data (e.g., a voice segment stating "Blood sugar is high; insulin is required" or "hey, your heart rate is 80 beats per minute, your heart is fine, your pacemaker has got 6 hours of battery left.").

In another embodiment, hearing assist device 700 may be configured to generate filtered environmental sound. For instance, hearing assist device 700 may operate according to FIG. 12. FIG. 12 shows a flowchart 1200 of a process for generating and broadcasting filtered sound from a hearing assist device, according to an exemplary embodiment. For purposes of illustration, flowchart 1200 is described as follows with reference to FIG. 7.

Flowchart 1200 begins with step 1202. In step 1202, an audio signal is generated based on environmental sound received by at least one microphone of the hearing assist device. For instance, as shown in FIG. 7, microphone 706 may generate a received audio signal 740 based on received environmental sound. Received audio signal 740 may optionally be amplified, filtered, and converted to digital form to generate digital audio signal 746, as shown in FIG. 7.

In step 1204, one or more frequencies of the audio signal are selectively favored to generate a modified audio signal. As shown in FIG. 7, DSP 730 may receive digital audio signal 746, and may perform digital signal processing on digital audio signal 746 to generate processed digital audio signal 762. DSP 730 may favor one or more frequencies by amplifying particular frequencies, attenuating particular frequencies, and/or otherwise filtering digital audio signal 746 in the discrete domain. DSP 730 may perform the signal processing for various reasons, including noise cancellation or hearing loss compensation. For instance, DSP 730 may process digital audio signal 746 to compensate for a personal hearing frequency response of the user, such as compensating for poor hearing of high frequencies, middle range frequencies, or other personal frequency response characteristics of the user.

In step 1206, sound is generated based on the modified audio signal, the sound broadcast from a speaker of the hearing assist device into the ear of the user. For instance, as shown in FIG. 7, D/A converter 764 may be present, and may receive processed digital audio signal 762. D/A converter 764 may convert processed digital audio signal 762 to analog form to generate processed audio signal 766. Speaker 714 receives processed audio signal 766, and broadcasts sound generated based on processed audio signal 766 into the ear of the user.

In this manner, environmental noise, voice, and other sounds may be tailored to a particular user's personal hearing frequency response characteristics. Furthermore, particular noises in the environment (e.g., road noise, engine noise, etc.) may be attenuated or filtered out of the received environmental sounds so that the user may better hear important or desired sounds. Furthermore, sounds that are desired to be heard (e.g., music, a conversation, a verbal warning, verbal instructions, sirens, sounds of a nearby car accident, etc.) may be amplified so that the user may better hear them.

In another embodiment, hearing assist device 700 may be configured to transmit recorded voice of a user to another device. For instance, hearing assist device 700 may operate according to FIG. 13. FIG. 13 shows a flowchart 1300 of a process for generating an information signal in a hearing assist device based on a voice of a user, and for transmitting the information signal to a second device, according to an exemplary embodiment. For purposes of illustration, flowchart 1300 is described as follows with reference to FIG. 7.

Flowchart 1300 begins with step 1302. In step 1302, an audio signal is generated based on a voice of the user received at a microphone of the hearing assist device. For instance, as shown in FIG. 7, microphone 706 may generate a received audio signal 740 based on received voice of the user. Received audio signal 740 may optionally be amplified, filtered, and converted to digital form to generate digital audio signal 746, as shown in FIG. 7.

The voice of the user may be any statement made by the user, including a question, a statement of fact, a command, or any other verbal sequence. For instance, the user may ask "what is my heart rate". Such statements may be intended for capture by one or more hearing assist devices and supporting local and remote systems. Such statements may also include unintentional sounds such as semi-lucid ramblings, moaning, choking, coughing, and/or other sounds. Any one or more of the hearing assist devices and the supporting local device can receive (via microphones) such audio and forward the audio from the hearing assist device(s) as needed for further processing. This processing may include voice and/or sound recognition, comparisons with command words or sequences, (video or audio) prompting for (gesture, tactile, or audible) confirmation, carrying out commands, storage for later analysis or playback, and/or forwarding to an appropriate recipient system for further processing, storage, and/or presentation to others.

In step 1304, an information signal is generated based on the audio signal. As shown in FIG. 7, DSP 730 may receive digital audio signal 746. In an embodiment, DSP 730 and/or CPU 732 may generate an information signal from digital audio signal 746 to be transmitted to a second device from hearing assist device 700. DSP 730 and/or CPU 732 may optionally perform voice/speech recognition on digital audio signal 746 to recognize spoken words included therein, and may include the spoken words in the generated information signal.

For instance, in an embodiment, code 738 stored in memory 734 may include a voice recognition program that may be executed by CPU 732 and/or DSP 730. The voice recognition program may use conventional or proprietary voice recognition techniques. Furthermore, such voice recognition techniques may be augmented by sensor data. For instance, as described above, position/motion sensor 518 may include a vibration sensor. The vibration sensor may detect vibrations of the user associated with speaking (e.g., jaw movement of the wearer during talking), and generate corresponding vibration information/data. The vibration information output by the vibration sensor may be received by CPU 732 and/or DSP 730, and may be used to aid in improving speech/voice recognition performed by the voice recognition program. For instance, the vibration information may be used by the voice recognition program to detect breaks between words, to identify the location of spoken syllables, to identify the syllables themselves, and/or to better perform other aspects of voice recognition. Alternatively, the vibration information may be transmitted from hearing assist device 700, along with the information signal, to a second device to perform the voice recognition process at the second device (or other device).
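
For purposes of illustration, the following hypothetical sketch gates audio frames using a jaw-vibration envelope, one simple way the vibration information described above could help mark breaks between words for the voice recognition program. The frame count and threshold are assumptions.

    /* Hypothetical vibration-gated speech detection: frames whose
       vibration energy exceeds a floor are treated as the wearer's
       own speech. */
    #include <stdio.h>

    #define FRAMES 8

    static void gate_frames(const double vib_rms[FRAMES], int speech[FRAMES])
    {
        const double threshold = 0.02;   /* assumed vibration floor */
        for (int i = 0; i < FRAMES; i++)
            speech[i] = vib_rms[i] > threshold;
    }

    int main(void)
    {
        double vib[FRAMES] = { 0.001, 0.04, 0.05, 0.003,
                               0.06, 0.07, 0.002, 0.001 };
        int speech[FRAMES];

        gate_frames(vib, speech);
        for (int i = 0; i < FRAMES; i++)
            printf("%d", speech[i]);     /* e.g., 01101100: two words */
        printf("\n");
        return 0;
    }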

In step 1306, the generated information signal is transmitted to the second device. For instance, as shown in FIG. 7, CPU 732 may provide the information signal (e.g., from CPU registers, from DSP 730, from memory 734, etc.) to a transceiver to be transmitted from hearing assist device 700 (e.g., NFC transceiver 718, BT transceiver 722, or other transceiver).

Another device, such as mobile computing device 902, stationary computing device 904, or server 906, may receive the transmitted voice information, and may analyze the voice (spoken words, moans, slurred words, etc.) therein to determine one or more functions/actions to be performed. As a result, one or more functions/actions may be determined to be performed by hearing assist device 700 or another device.

In another embodiment, hearing assist device 700 may be configured to receive and/or generate voice to be played to the user. For instance, hearing assist device 700 may operate according to FIG. 14. FIG. 14 shows a flowchart 1400 of a process for generating voice to be broadcast to a user, according to an exemplary embodiment. For purposes of illustration, flowchart 1400 is described as follows with reference to FIG. 7.

Flowchart 1400 begins with step 1402. In step 1402, a sensor output signal is received from a medical sensor of the hearing assist device that senses a characteristic of the user. Similarly to step 802 of FIG. 8, sensors 702a-702c each sense/measure information about a health characteristic of the user of hearing assist device 700. For instance, sensor 702a may sense a characteristic of the user (e.g., a heart rate, a blood pressure, a glucose level, a temperature, etc.). Sensor 702a generates sensor output signal 758a, which indicates the measured information about the corresponding health characteristic. Sensor interface 728a, when present, may convert sensor output signal 758a to modified sensor output signal 760a, to be received by processing logic 704.

In step 1404, processed sensor data is generated based on the sensor output signal. Similarly to step 804 of FIG. 8, processing logic 704 receives modified sensor output signal 760a, and may process modified sensor output signal 760a in any manner. For instance, as shown in FIG. 7, CPU 732 may receive modified sensor output signal 760a, and may process the sensor information contained therein to generate processed sensor data. For instance, CPU 732 may manipulate the sensor information (e.g., according to an algorithm of code 738) to convert the sensor information into a presentable form (e.g., scaling the sensor information, adding or subtracting a constant to/from the sensor information, etc.), or may otherwise process the sensor information. Furthermore, CPU 732 may transmit the sensor information of modified sensor output signal 760a to DSP 730 to be digital signal processed.

In step 1406, a voice audio signal generated based at least on the processed sensor data is received. In an embodiment, the processed sensor data generated in step 1404 may be transmitted from hearing assist device 700 to another device (e.g., as shown in FIG. 9), and a voice audio signal may be generated at the other device based on the processed sensor data. In another embodiment, the voice audio signal may be generated by processing logic 704 based on the processed sensor data. The voice audio signal contains voice information (e.g., spoken words) that relates to the processed sensor data. For instance, the voice information may include a verbal alert, verbal instructions, and/or other verbal information to be provided to the user based on the processed sensor data (e.g., based on a value of measured sensor data, etc.). The voice information may be generated by being synthesized, being retrieved from memory 734 (e.g., a library of recorded spoken segments in other data 768), or being generated from a combination thereof. It is noted that the voice audio signal may be generated based on processed sensor data from one or more sensors. DSP 730 may output the voice audio signal as processed digital audio signal 762.
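
The following hypothetical sketch illustrates step 1406: assembling a verbal answer from a library of recorded spoken segments plus a dynamically generated portion. The segment names and the speak_segment() routine are invented for the example.

    /* Illustrative assembly of a verbal answer ("your heart rate is
       98 beats per minute") from recorded segments and a synthesized
       number, one way the voice information described above might be
       generated. */
    #include <stdio.h>

    static void speak_segment(const char *name)
    {
        /* Stand-in for queuing a recorded clip from memory 734. */
        printf("[play: %s] ", name);
    }

    static void announce_heart_rate(int bpm)
    {
        speak_segment("your_heart_rate_is");
        printf("[synthesize: %d] ", bpm);    /* dynamic portion */
        speak_segment("beats_per_minute");
        printf("\n");
    }

    int main(void)
    {
        announce_heart_rate(98);
        return 0;
    }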

In step 1408, voice is broadcast from the speaker into the ear of the user based on the received voice audio signal. For instance, as shown in FIG. 7, D/A converter 764 may be present, and may receive processed digital audio signal 762. D/A converter 764 may convert processed digital audio signal 762 to analog form to generate processed audio signal 766. Speaker 714 receives processed audio signal 766, and broadcasts voice generated based on processed audio signal 766 into the ear of the user.

In this manner, voice may be provided to the user by hearing assist device 700 based at least on sensor data, and optionally further based on additional information. The voice may provide information to the user, and may remind or instruct the user to perform a function/action. For instance, the voice may include at least one of a verbal instruction to the user (“take an iron supplement”), a verbal warning to the user (“your heart rate is high”), a verbal question to the user (“have you fallen down, and do you need assistance?”), or a verbal answer to the user (“your heart rate is 98 beats per minute”).

V. Hearing Assist Device with External Operational Support

In accordance with various embodiments, the performance of one or more functions by a hearing assist device is assisted or improved in some manner by utilizing resources of an external device and/or service to which the hearing assist device may be communicatively connected. Such performance assistance or improvement may be achieved, for example and without limitation, by utilizing power resources, processing resources, storage resources, sensor resources, and/or user interface resources of an external device or service to which the hearing assist device may be communicatively connected.

FIG. 15 is a block diagram of an example system 1500 that enables external operational support to be provided to a hearing assist device in accordance with an embodiment. As shown in FIG. 15, system 1500 includes a first hearing assist device 1501, a second hearing assist device 1503, and a portable electronic device 1505. First and second hearing assist devices 1501 and 1503 may each be implemented in a like manner to any of the hearing assist devices described above in Sections II-IV. However, first and second hearing assist devices 1501 and 1503 are not limited to those implementations. Furthermore, although FIG. 15 shows two hearing assist devices that can be worn by a user, it is to be understood that the external operational support techniques described herein can also be applied to a single hearing assist device worn by a user.

Portable electronic device 1505 is intended to represent an electronic device that may be carried by or is otherwise locally accessible to a wearer of first and second hearing assist devices 1501 and 1503. By way of example and without limitation, portable electronic device 1505 may comprise a smart phone, a tablet computer, a netbook, a laptop computer, a remote control device, a personal media player, a handheld gaming device, or the like. It is noted that certain external operational support features described herein are premised on the ability of a wearer of a hearing assist device to hold portable electronic device 1505 and/or lift portable electronic device 1505 toward his/her ear. For these embodiments, it is to be understood that portable electronic device 1505 has a form factor that permits such actions to be taken. However, for embodiments that comprise other external operational support features that do not require such actions to be taken, it is to be understood that portable electronic device 1505 may have a larger form factor. For example, in accordance with certain embodiments, portable electronic device 1505 may comprise a desktop computer or television.

As further shown in FIG. 15, first hearing assist device 1501 and second hearing assist device 1503 are capable of communicating with each other via a communication link 1521. Communication link 1521 may be established using, for example and without limitation, a wired communication link, a wireless communication link (wherein such wireless communication link may be established using NFC, BLUETOOTH® low energy (BTLE) technology, wireless power transfer (WPT) technology, telecoil, or the like), or skin-based signal transmission. Furthermore, first hearing assist device 1501 is capable of communicating with portable electronic device 1505 via a communication link 1523 and second hearing assist device 1503 is capable of communicating with portable electronic device 1505 via a communication link 1525. Each of communication links 1523 and 1525 may be established using, for example and without limitation, a wireless communication link (wherein such wireless communication link may be established using NFC, BTLE technology, WPT technology, telecoil or the like), or skin-based signal transmission.

As also shown in FIG. 15, portable electronic device 1505 is capable of communicating with various other entities via one or more wired and/or wireless communication pathways 1513. For example, portable electronic device 1505 may access one or more hearing assist device support services 1511 via communication pathway(s) 1513. Such hearing assist device support service(s) 1511 may be executed or otherwise provided by a device such as but not limited to a set top box, a television, a wired or wireless access point, or a server that is accessed via communication pathway(s) 1513. Such device may also comprise a gateway via which such hearing assist device support service(s) 1511 may be accessed. As will be appreciated by persons skilled in the art, such hearing assist device support service(s) 1511 may also comprise cloud-based services accessed via a network. Since portable electronic device 1505 can access such hearing assist device support service(s) 1511 and can also communicate with first and second hearing assist devices 1501 and 1503, portable electronic device 1505 is capable of making hearing assist device support service(s) 1511 available to first and second hearing assist devices 1501 and 1503.

Portable electronic device 1505 can also access one or more support personnel system(s) 1515 via communication pathway(s) 1513. Support personnel system(s) 1515 are intended to generally represent systems that are owned and/or operated by persons having an interest (personal, professional, fiduciary or otherwise) in the health, well-being, or some other state of a wearer of first and second hearing assist devices 1501 and 1503. By way of example only, support personnel system(s) 1515 may include a system owned and/or operated by a doctor's office or medical practice with which a wearer of first and second hearing assist devices 1501 and 1503 is affiliated. As another example, support personnel system(s) 1515 may include systems or devices owned and/or operated by family members, friends, or caretakers of a wearer of first and second hearing assist devices 1501 and 1503. Since portable electronic device 1505 can access such support personnel system(s) 1515 and can also communicate with first and second hearing assist devices 1501 and 1503, portable electronic device 1505 is capable of carrying out communication between first and second hearing assist devices 1501 and 1503 and support personnel system(s) 1515.

Wired and/or wireless communication pathway(s) 1513 may be implemented using any of a wide variety of communication media and associated protocols. For example, communication pathway(s) 1513 may comprise wireless communication pathways implemented via radio frequency (RF) signaling, infrared (IR) signaling, or the like. Such signaling may be carried out using long-range wireless protocols such as WIMAX® (IEEE 802.16) or GSM (Global System for Mobile Communications), medium-range wireless protocols such as WI-FI® (IEEE 802.11), and/or short-range wireless protocols such as BLUETOOTH® or any of a variety of IR-based protocols. Communication pathway(s) 1513 may also comprise wired communication pathways established over twisted pair, Ethernet cable, coaxial cable, optical fiber, or the like, using suitable communication protocols therefor.

Communication links 1523 and 1525 respectively established between first and second hearing assist devices 1501 and 1503 and portable electronic device 1505 enable first and second hearing assist devices 1501 and 1503 to utilize resources of and/or services provided by portable electronic device 1505 to assist in performing certain operations and/or improve the performance of such operations. Furthermore, since portable electronic device 1505 can access hearing assist device support service(s) 1511 and support personnel system(s) 1515, portable electronic device 1505 can also make such system(s) and service(s) available to first and second hearing assist devices 1501 and 1503 such that first and second hearing assist devices 1501 and 1503 can utilize those system(s) and service(s) to assist in the performance of certain operations and/or improve the performance of such operations.

These concepts will now be further explained with respect to FIG. 16, which depicts a system 1600 comprising a hearing assist device 1601 and a cloud/service/phone/portable device 1603 that may be communicatively connected thereto. Hearing assist device 1601 may comprise, for example and without limitation, either of hearing assist device 1501 or 1503 as described above in reference to FIG. 15 or any of the hearing assist devices described above in Sections II-IV. Although only a single hearing assist device 1601 is shown in FIG. 16, it is to be understood that system 1600 may include two hearing assist devices. Device 1603 may comprise, for example and without limitation, portable electronic device 1505 or a device used to implement any of hearing assist device support service(s) 1511 or support personnel system(s) 1515 that are accessible to portable electronic device 1505 as described above in reference to FIG. 15. Thus device 1603 may be local with respect to the wearer of hearing assist device 1601 or remote with respect to the wearer of hearing assist device 1601.

Hearing assist device 1601 includes a number of processing modules that may be implemented as software or firmware running on one or more general purpose processors and/or digital signal processors (DSPs), as dedicated circuitry, or as a combination thereof. Such processors and/or dedicated circuitry are collectively referred to in FIG. 16 as general purpose (DSP) and dedicated processing circuitry 1613. As shown in FIG. 16, the processing modules include a speech generation module 1623, a speech/noise recognition module 1625, an enhanced audio processing module 1627, a clock/scheduler module 1629, a mode select and reconfiguration module 1631, and a battery management module 1633.

As also shown in FIG. 16, hearing assist device 1601 further includes local storage 1635. Local storage 1635 comprises one or more volatile and/or non-volatile memory devices or structures that are internal to hearing assist device 1601. Such memory devices or structures may be used to store recorded audio information in an audio playback queue 1637 as well as to store information and settings 1639 associated with hearing assist device 1601, a user thereof, a device paired thereto, and services (cloud-based or otherwise) accessed by or on behalf of hearing assist device 1601.

Hearing assist device 1601 further includes sensor components and associated circuitry 1641. Such sensor components and associated circuitry may include but are not limited to one or more microphones, bone conduction sensors, temperature sensors, blood pressure sensors, blood glucose sensors, pulse oximetry sensors, pH sensors, vibration sensors, accelerometers, gyros, magnetos, or the like. Further sensor types that may be included in hearing assist device 1601, and information regarding the structure, function, and operation of such sensors, are provided above in Sections II-IV.

Hearing assist device 1601 still further includes user interface (UI) components and associated circuitry 1643. Such UI components may include buttons, switches, dials or other mechanical components by which a user may control and configure the operation of hearing assist device 1601. Such UI components may also comprise capacitive sensing components to allow for touch-based or tap-based interaction with hearing assist device 1601. Such UI components may further include a voice-based UI. Such voice-based UI may utilize speech/noise recognition module 1625 to recognize commands uttered by a user of hearing assist device 1601 and/or speech generation module 1623 to provide output in the form of pre-defined or synthesized speech. In an embodiment in which hearing assist device 1601 comprises an integrated part of a pair of glasses, visor or helmet, user interface components and associated circuitry 1643 may also comprise a display integrated with or projected upon a portion of the glasses, visor or helmet for presenting information to a user.

Hearing assist device 1601 also includes communication interfaces and associated circuitry 1645 for carrying out communication over one or more wired, wireless, or skin-based communication pathways. Communication interfaces and associated circuitry 1645 enable hearing assist device 1601 to communicate with device 1603. Communication interfaces and associated circuitry 1645 may also enable hearing assist device 1601 to communicate with a second hearing assist device worn by the same user as well as with other devices.

Generally speaking, cloud/service/phone/portable device 1603 comprises power resources, processing resources, and storage resources that can be used by hearing assist device 1601 to assist in performing certain operations and/or to improve the performance of such operations when a communication pathway has been established between the two devices.

In particular, device 1603 includes a number of assist processing modules that may be implemented as software or firmware running on one or more general purpose processors and/or DSPs, as dedicated circuitry, or as a combination thereof. Such processors and/or dedicated circuitry are collectively referred to in FIG. 16 as general/dedicated processing circuitry (with hearing assist device support) 1653. As shown in FIG. 16, the processing modules include a speech generation assist module 1655, a speech/noise recognition assist module 1657, an enhanced audio processing assist module 1659, a clock/scheduler assist module 1661, a mode select and reconfiguration assist module 1663, and a battery management assist module 1665.

As also shown in FIG. 16, device 1603 further includes storage 1667. Storage 1667 comprises one or more volatile and/or non-volatile memory devices/structures and/or storage systems that are internal to or otherwise accessible to device 1603. Such memory devices/structures and/or storage systems may be used to store recorded audio information in an audio playback queue 1669 as well as to store information and settings 1671 associated with hearing assist device 1601, a user thereof, a device paired thereto, and services (cloud-based or otherwise) accessed by or on behalf of hearing assist device 1601.

Device 1603 also includes communication interfaces and associated circuitry 1677 for carrying out communication over one or more wired, wireless or skin-based communication pathways. Communication interfaces and associated circuitry 1677 enable device 1603 to communicate with hearing assist device 1601. Such communication may be direct (point-to-point between device 1603 and hearing assist device 1601) or indirect (through one or more intervening devices or nodes). Communication interfaces and associated circuitry 1677 may also enable device 1603 to communicate with other devices or access various remote services, including cloud-based services.

In an embodiment in which device 1603 comprises a device that is carried by or is otherwise locally accessible to a wearer of hearing assist device 1601, device 1603 may also comprise supplemental sensor components and associated circuitry 1673 and supplemental user interface components and associated circuitry 1675 that can be used by hearing assist device 1601 to assist in performing certain operations and/or to improve the performance of such operations.

Further explanation and examples of how external operational support may be provided to a hearing assist device will now be provided with continued reference to system 1600 of FIG. 16.

A prerequisite for providing external operational support to hearing assist device 1601 by device 1603 may be the establishment of a communication pathway between device 1603 and hearing assist device 1601. In one embodiment, the establishment of such a communication pathway is achieved by implementing a communication service on hearing assist device 1601 that monitors for the presence of device 1603 and selectively establishes communication therewith in accordance with a predefined protocol. Alternatively, a communication service may be implemented on device 1603 that monitors for the presence of hearing assist device 1601 and selectively establishes communication therewith in accordance with a predefined protocol. Still other methods of establishing a communication pathway between hearing assist device 1601 and device 1603 may be used.
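
For purposes of illustration, the following hypothetical sketch shows a polling form of the communication service described above. The probe_for_device() and establish_link() routines are stubs standing in for the actual radio stack calls, which the description does not specify.

    /* Hypothetical communication service: periodically probe for the
       supporting device and establish a pathway per a predefined
       protocol. */
    #include <stdio.h>
    #include <stdbool.h>

    static bool linked = false;

    static bool probe_for_device(void) { return true; /* stub */ }
    static bool establish_link(void)   { return true; /* stub: pair,
                                                         authenticate */ }

    static void comm_service_poll(void)
    {
        if (!linked && probe_for_device())
            linked = establish_link();
    }

    int main(void)
    {
        comm_service_poll();            /* would run on a periodic timer */
        printf("pathway %s\n", linked ? "established" : "absent");
        return 0;
    }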

Battery Management.

Hearing assist device 1601 includes battery management module 1633 that monitors a state of a battery internal to hearing assist device 1601. Battery management module 1633 may also be configured to alert a wearer of hearing assist device 1601 when such battery is in a low-power state so that the wearer can recharge the battery. As discussed above, the wearer of hearing assist device 1601 can cause such recharging to occur by bringing a portable electronic device within a certain distance of hearing assist device 1601 such that power may be transferred via an NFC link, WPT link, or other suitable link for transferring power between such devices. In an embodiment in which device 1603 comprises such a portable electronic device, hearing assist device 1601 may be said to be utilizing the power resources of device 1603 to assist in the performance of its operations.

As also noted above, when a communication pathway has been established between hearing assist device 1601 and device 1603, hearing assist device 1601 can also utilize other resources of device 1603 to assist in performing certain operations and/or to improve the performance of such operations. Whether and when hearing assist device 1601 so utilizes the resources of device 1603 may vary depending upon the designs of such devices and/or any user configuration of such devices.

For example, hearing assist device 1601 may be programmed to only utilize certain resources of device 1603 when the battery power available to hearing assist device 1601 has dropped below a certain level. As another example, hearing assist device 1601 may be programmed to only utilize certain resources of device 1603 when it is determined that an estimated amount of power that will be consumed in maintaining a particular communication pathway between hearing assist device 1601 and device 1603 will be less than an estimated amount of power that will be saved by offloading functionality to and/or utilizing the resources of device 1603. In accordance with such an embodiment, an assistance feature of device 1603 may be provided when a very low power communication pathway can be established or exists between hearing assist device 1601 and device 1603, but that same assistance feature of device 1603 may be disabled if the only communication pathway that can be established or exists between hearing assist device 1601 and device 1603 is one that consumes a relatively greater amount of power.
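
The power-based rules described above may be illustrated with the following sketch, in which offloading occurs only when battery power has dropped below a configured level and the estimated link cost is less than the estimated on-device cost saved. All quantities and thresholds are hypothetical.

    /* Sketch of a power-based offload rule of the kind battery
       management module 1633 or battery management assist module 1665
       might apply. */
    #include <stdio.h>
    #include <stdbool.h>

    static bool should_offload(double battery_pct, double low_thresh_pct,
                               double mj_saved_on_device,
                               double mj_link_cost)
    {
        if (battery_pct >= low_thresh_pct)
            return false;               /* ample battery: run locally */
        /* Offload only if the link costs less energy than it saves. */
        return mj_link_cost < mj_saved_on_device;
    }

    int main(void)
    {
        printf("%s\n",
               should_offload(12.0, 20.0, 5.0, 1.2) ? "offload" : "local");
        return 0;
    }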

Still other decision algorithms can be used to determine whether and when hearing assist device 1601 will utilize resources of device 1603. Such algorithms may be applied by battery management module 1633 of hearing assist device 1601 and/or by battery management assist module 1665 of device 1603 prior to activating assistance features of device 1603. Furthermore, a user interface provided by hearing assist device 1601 and/or device 1603 may enable a user to select which features of hearing assist device 1601 should be able to utilize external operational support and/or under what conditions such external operational support should be provided. The settings established by the user may be stored as part of information and settings 1639 in local storage 1635 of hearing assist device 1601 and/or as part of information and settings 1671 in storage 1667 of device 1603.

In accordance with certain embodiments, hearing assist device 1601 can also utilize resources of a second hearing assist device to perform certain operations. For example, hearing assist device 1601 may communicate with a second hearing assist device worn by the same user to coordinate distribution or shared execution of particular operations. Such communication may be carried out, for example, via a point-to-point link between the two hearing assist devices or via links between the two hearing assist devices and an intermediate device, such as a portable electronic device being carried by a user. The determination of whether a particular operation should be performed by hearing assist device 1601 versus the second hearing assist device may be made by battery management module 1633, a battery management module of the second hearing assist device, or via coordination between both battery management modules.

For example, if hearing assist device 1601 has more battery power available than the second hearing assist device, hearing assist device 1601 may be selected to perform a particular operation, such as taking a blood pressure reading or the like. Such battery imbalance may result from, for example, one hearing assist device being used at a higher volume than the other over an extended period of time. Via coordination between the two hearing assist devices, a more balanced discharging of the batteries of both devices can be achieved. Furthermore, in accordance with certain embodiments, certain sensors may be present on hearing assist device 1601 that are not present on the second hearing assist device and certain sensors may be present on the second hearing assist device that are not present on hearing assist device 1601, such that a distribution of functionality between the two hearing assist devices is achieved by design.
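
As a minimal illustration of the battery-balancing coordination described above, the following sketch selects whichever hearing assist device reports more remaining charge to perform the next shared task. The percentages are hypothetical.

    /* Minimal battery-balancing choice between two hearing assist
       devices coordinating over a point-to-point link. */
    #include <stdio.h>

    enum device { LEFT_DEVICE = 0, RIGHT_DEVICE = 1 };

    /* The device reporting more remaining charge takes the next task. */
    static enum device pick_device(double left_pct, double right_pct)
    {
        return (left_pct >= right_pct) ? LEFT_DEVICE : RIGHT_DEVICE;
    }

    int main(void)
    {
        printf("device %d takes the next sensor reading\n",
               (int)pick_device(64.0, 41.0));
        return 0;
    }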

Speech Generation.

Hearing assist device 1601 comprises a speech generation module 1623 that enables hearing assist device 1601 to generate and output verbal audio information (spoken words or the like) to a wearer thereof via a speaker of hearing assist device 1601. Such verbal audio information may be used to implement a voice UI, to provide speech-based alerts, messages and reminders as part of a clock/scheduler feature implemented by clock/scheduler module 1629, or to provide emergency alerts or messages to a wearer of hearing assist device 1601 based on a detected medical condition of the wearer, or the like. The speech generated by speech generation module 1623 may be pre-recorded and/or dynamically synthesized, depending upon the implementation.

When a communication pathway has been established between hearing assist device 1601 and device 1603, speech generation assist module 1655 of device 1603 may operate to perform all or part of the speech generation function that would otherwise be performed by speech generation module 1623 of hearing assist device 1601. Such operation by device 1603 can advantageously cause the battery power of hearing assist device 1601 to be conserved. Any speech generated by speech generation assist module 1655 may be communicated back to hearing assist device 1601 for playback via at least one speaker of hearing assist device 1601. Any of a wide variety of well-known speech codecs may be used to carry out such transmission of speech information in an efficient manner. Additionally or alternatively, any speech generated by speech generation assist module 1655 can be played back via one or more speakers of device 1603 if device 1603 is local with respect to the wearer of hearing assist device 1601.

Furthermore, speech generation assist module 1655 may provide a more elaborate set of features than those provided by speech generation module 1623, as device 1603 may have access to greater power, processing and storage resources than hearing assist device 1601 to support such additional features. For example, speech generation assist module 1655 may provide a more extensive vocabulary of pre-recorded words, terms and sentences or may provide a more powerful speech synthesis engine.

Speech and Noise Recognition.

Hearing assist device 1601 includes a speech/noise recognition module 1625 that is operable to apply speech and/or noise recognition algorithms to audio input received via one or more microphones of hearing assist device 1601. Such algorithms can enable speech/noise recognition module 1625 to determine when a wearer of hearing assist device 1601 is speaking and further to recognize words that are spoken by such wearer, while rejecting non-speech utterances and noise. Such algorithms may be used, for example, to enable hearing assist device 1601 to provide a voice-based UI by which a wearer of hearing assist device 1601 can exercise voice-based control over the device.

When a communication pathway has been established between hearing assist device 1601 and device 1603, speech/noise recognition assist module 1657 of device 1603 may operate to perform all or part of the speech/noise recognition functions that would otherwise be performed by speech/noise recognition module 1625 of hearing assist device 1601. Such operation by device 1603 can advantageously cause the battery power of hearing assist device 1601 to be conserved.

Furthermore, speech/noise recognition assist module 1657 may provide a more elaborate set of features than those provided by speech/noise recognition module 1625, as device 1603 may have access to greater power, processing and storage resources than hearing assist device 1601 to support such additional features. For example, speech/noise recognition assist module 1657 may include a training program that a wearer of hearing assist device 1601 can use to train the speech recognition logic to better recognize and interpret his/her own voice. As another example, speech/noise recognition assist module 1657 may include a process by which a wearer of hearing assist device 1601 can add new words to the dictionary of words that are recognized by the speech recognition logic. Such additional features may be included in an application that can be installed by the wearer on device 1603. Such additional features may also be supported by a user interface that forms part of supplemental user interface components and associated circuitry 1675. Of course, such features may be included in speech/noise recognition module 1625 in accordance with certain embodiments.

Enhanced Audio Processing.

Hearing assist device 1601 includes an enhanced audio processing module 1627. Enhanced audio processing module 1627 may be configured to process an input audio signal received by hearing assist device 1601 to achieve a desired frequency response prior to playing back such input audio signal to a wearer of hearing assist device 1601. For example, enhanced audio processing module 1627 may selectively amplify certain frequency components of an input audio signal prior to playing back such input audio signal to the wearer. The frequency response to be achieved may be specified by or derived from a prescription for the wearer that is provided to hearing assist device 1601 by an external device or system. With reference to the example components of FIG. 15, such external device or system may include any of portable electronic device 1505, hearing assist device support service(s) 1511, or support personnel system(s) 1515. In certain embodiments, such prescription may be formatted in a standardized manner in order to facilitate use thereof by any of a variety of hearing assistance devices and audio reproduction systems.

In accordance with a further embodiment in which hearing assist device 1601 is worn in conjunction with a second hearing assist device, enhanced audio processing module 1627 may modify a first input audio signal received by hearing assist device 1601 prior to playback of the first input audio signal to one ear of the wearer, while an enhanced audio processing module of the second hearing assist device modifies a second input audio signal received by the second hearing assist device prior to playback of the second input audio signal to the other ear of the wearer. Such modification of the first and second input audio signals can be used to achieve enhanced spatial signaling for the wearer. That is to say, the enhanced audio signals provided to both ears of the wearer will enable the wearer to better determine the spatial origin of sounds. Such enhancement is desirable for persons who have a poor ability to detect the spatial origin of sound, and therefore a poor ability to respond to spatial cues. To determine the appropriate modifications for the left and right ear of the wearer, an appropriate user-specific “head transfer function” can be determined through testing of a user. The results of such testing may then be used to calibrate the spatial audio enhancement function applied at each ear.

FIG. 17 is a block diagram of an enhanced audio processing module 1700 that may be utilized by hearing assist device 1601 to provide such enhanced spatial signaling. Enhanced audio processing module 1700 is configured to process an audio signal produced by a microphone of a left ear hearing assist device (denoted MIC L) and an audio signal produced by a microphone of a right ear hearing assist device (denoted MIC R) to produce an audio signal for playback to the left ear of a user (denoted LEFT).

In particular, enhanced audio processing module 1700 includes an amplifier 1702 that amplifies the MIC L signal. Such signal may also be converted from analog to digital form by an analog-to-digital (A/D) converter (not shown in FIG. 17). The output of amplifier 1702 is passed to a logic block 1704 that applies a head transfer function (HTF) thereto. The output of logic block 1704 is passed to a multiplier 1706 that applies a scaling function thereto. The output of multiplier 1706 is passed to a mixer 1720. Enhanced audio processing module 1700 also includes an amplifier 1712 that amplifies the MIC R signal. Such signal may also be converted from analog to digital form by an A/D converter (not shown in FIG. 17). The output of amplifier 1712 is passed to a logic block 1714 that applies a HTF thereto. The output of logic block 1714 is passed to a multiplier 1716 that applies a scaling function thereto. The output of multiplier 1716 is passed to mixer 1720. Mixer 1720 combines the output of multiplier 1706 and the output of multiplier 1716. The audio signal output by mixer 1720 is passed to an amplifier 1722 that amplifies it to produce the LEFT audio signal. Such signal may also be converted from digital to analog form by a digital-to-analog (D/A) converter (not shown in FIG. 17) prior to playback.
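
The left-ear signal flow of FIG. 17 can be approximated by the following illustrative sketch, in which the head transfer functions are modeled as short FIR filters; all gains and filter coefficients below are hypothetical placeholders, not calibrated values:

    # Hypothetical sketch of the FIG. 17 left-ear path: each microphone signal
    # is amplified, shaped by a (toy) head transfer function modeled as a short
    # FIR filter, scaled, mixed, and amplified for playback.
    def fir(signal, taps):
        """Minimal FIR filter, truncated to the input length."""
        return [sum(taps[k] * signal[n - k]
                    for k in range(len(taps)) if n - k >= 0)
                for n in range(len(signal))]

    def left_ear_output(mic_l, mic_r):
        a_l = [2.0 * s for s in mic_l]              # amplifier 1702
        a_r = [2.0 * s for s in mic_r]              # amplifier 1712
        h_l = fir(a_l, (1.0, 0.2))                  # HTF logic block 1704
        h_r = fir(a_r, (0.6, 0.3))                  # HTF logic block 1714
        mixed = [0.8 * l + 0.4 * r                  # multipliers 1706/1716,
                 for l, r in zip(h_l, h_r)]         # combined by mixer 1720
        return [2.0 * m for m in mixed]             # output amplifier 1722

    print(left_ear_output([0.1, 0.2, 0.1], [0.05, 0.1, 0.05]))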

It is noted that to operate in such a manner, enhanced audio processing module 1700 must have access to both the MIC L signal obtained by the left ear hearing assist device (which is assumed to be hearing assist device 1601 in this example) and the MIC R signal obtained by the right ear hearing assist device. Thus, the left ear hearing assist device must be capable of communicating with the right ear hearing assist device in order to obtain the MIC R signal therefrom. Likewise, the right ear hearing assist device must be capable of communicating with the left ear hearing assist device in order to obtain the MIC L signal therefrom. Such communication may be carried out, for example, via a point-to-point link between the two hearing assist devices or via links between the two hearing assist devices and an intermediate device, such as a portable electronic device being carried by a user.

Thus, in accordance with the foregoing, enhanced audio processing module 1627 may modify an input audio signal received by hearing assist device 1601 to achieve a desired frequency response and/or spatial signaling prior to playback of the input audio signal. Both the desired frequency response and spatial signaling may be specified by or derived from a prescription associated with a wearer of hearing assist device 1601.

When a communication pathway has been established between hearing assist device 1601 and device 1603, enhanced audio processing assist module 1659 of device 1603 may operate to perform all or part of the enhanced audio processing functions that would otherwise be performed by enhanced audio processing module 1627 of hearing assist device 1601, provided that there is a sufficiently fast communication pathway between hearing assist device 1601 and device 1603. A sufficiently fast communication pathway is required so as not to introduce an inordinate amount of lag between the receipt and playback of audio signals by hearing assist device 1601. Such operation by device 1603 can advantageously cause the battery power of hearing assist device 1601 to be conserved.

Thus, for example, audio content collected by one or more microphones of hearing assist device 1601 may be transmitted to device 1603. Enhanced audio processing assist module 1659 of device 1603 may apply enhanced audio processing to such audio content, thereby producing enhanced audio content. The application of enhanced audio processing may comprise, but is not limited to, modifying the audio content to achieve a desired frequency response and/or spatial signaling as previously described. Device 1603 may then transmit the enhanced audio content back to hearing assist device 1601, where it may be played back to a wearer thereof. The foregoing transmission of audio content between the devices may utilize well-known audio and speech compression techniques to achieve improved transmission efficiency. Additionally or alternatively, any enhanced audio content generated by enhanced audio processing assist module 1659 can be played back via one or more speakers of device 1603 if device 1603 is local with respect to the wearer of hearing assist device 1601.

Clock/Scheduler.

A clock/scheduler module 1629 of hearing assist device 1601 is configured to provide a wearer thereof with alerts or messages concerning the date and/or time, upcoming appointments or events, or other types of information typically provided by, recorded in, or otherwise associated with a personal calendar and scheduling service or tool. Such alerts and messages may be conveyed on demand, such as in response to the wearer uttering the words “time” or “date” or performing some other action that is recognizable to a user interface associated with clock/scheduler module 1629. Such alerts and messages may also be conveyed automatically, such as in response to clock/scheduler module 1629 determining that an appointment or event is currently occurring or is scheduled to occur within a predetermined time frame. The alerts or messages may comprise certain sounds or words that are played back via one or more speakers of hearing assist device 1601. Where the alerts or messages comprise speech, such speech may be generated by speech generation module 1623 and/or speech generation assist module 1655.

As shown in FIG. 16, device 1603 includes a clock/scheduler assist module 1661. In an embodiment, clock/scheduler assist module 1661 comprises a personal calendar and scheduling service or tool that a user may interact with via a personal electronic device, such as personal electronic device 1505 of FIG. 15. For example and without limitation, the personal calendar and scheduling service or tool may comprise MICROSOFT OUTLOOK®, GOOGLE CALENDAR™, or the like. When a communication pathway has been established between hearing assist device 1601 and device 1603, information concerning the current date/time, scheduled appointments and events, and the like, may be transferred from clock/scheduler assist module 1661 to clock/scheduler module 1629. Clock/scheduler module 1629 may then store such information locally where it can be used for alert and message generation as previously described.

Clock/scheduler module 1629 within hearing assist device 1601 may be configured to store only a subset (for example, one week's worth) of scheduled appointments and events maintained by clock/scheduler assist module 1661 to conserve local storage space. Clock/scheduler module 1629 may further be configured to periodically synchronize its record of appointments and events with that maintained by clock/scheduler assist module 1661 of device 1603 when a communication pathway has been established between hearing assist device 1601 and device 1603.
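 
For illustration only, the subset-synchronization behavior described above might look like the following sketch; the one-week horizon and the example event list are assumptions made for the example:

    # Hypothetical sketch: keep only the next week of appointments locally,
    # drawn from the fuller record maintained by clock/scheduler assist
    # module 1661 on device 1603.
    import datetime

    def sync_subset(full_schedule, now, days=7):
        """full_schedule: list of (datetime, description) pairs."""
        horizon = now + datetime.timedelta(days=days)
        return [(when, what) for when, what in full_schedule
                if now <= when <= horizon]

    events = [(datetime.datetime(2013, 6, 21, 15, 0), "Dentist"),
              (datetime.datetime(2013, 8, 2, 9, 30), "Annual checkup")]
    print(sync_subset(events, now=datetime.datetime(2013, 6, 20)))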

When a communication pathway has been established between hearing assist device 1601 and device 1603, clock/scheduler assist module 1661 may also be utilized to perform all or a portion of the time/date reporting and alert/message generation functions that would normally be performed by clock/scheduler module 1629. Such operation by device 1603 can advantageously cause the battery power of hearing assist device 1601 to be conserved. Any alerts or messages generated by clock/scheduler assist module 1661 may be communicated back to hearing assist device 1601 for playback via at least one speaker of hearing assist device 1601. Any of a wide variety of well-known speech or audio codecs may be used to carry out such transmission of alerts and messages in an efficient manner. Additionally or alternatively, any alerts or messages generated by clock/scheduler assist module 1661 can be played back via one or more speakers of device 1603 if device 1603 is local with respect to the wearer of hearing assist device 1601.

Mode Select and Reconfiguration.

Mode select and reconfiguration module 1631 comprises a module that enables selection and reconfiguration of various operating modes of hearing assist device 1601. As will be made evident by the discussion provided below, hearing assist device 1601 may operate in a wide variety of modes, wherein each mode may specify certain operating parameters such as: (1) from which microphones audio input is to be obtained (for example, audio input may be captured by one or more microphones of hearing assist device 1601 and/or by one or more microphones of device 1603); (2) where audio input is processed (for example, audio input may be processed by hearing assist device 1601 and/or by device 1603); (3) how audio input is processed (for example, certain audio processing features such as noise suppression, personalized frequency response processing, selective audio boosting, customized equalization, or the like may be utilized); and (4) where audio output is delivered (for example, audio output may be played back by one or more speakers of hearing assist device 1601 and/or by one or more speakers of device 1603).
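
One illustrative, purely hypothetical encoding of such a mode, capturing the four operating parameters enumerated above, is sketched below; all field names and values are assumptions for the example:

    # Hypothetical sketch: a mode record capturing (1) input microphones,
    # (2) where audio is processed, (3) which processing features apply, and
    # (4) where audio output is delivered.
    from dataclasses import dataclass

    @dataclass
    class OperatingMode:
        name: str
        input_mics: tuple         # e.g., ("haid_mic", "device_1603_mic")
        processing_location: str  # "hearing_assist_device" or "device_1603"
        features: tuple           # e.g., ("noise_suppression", "custom_eq")
        output_speakers: tuple    # e.g., ("haid_speaker",)

    tv_mode = OperatingMode("television", ("haid_mic",), "device_1603",
                            ("selective_audio_boost",), ("haid_speaker",))
    print(tv_mode)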

The selection and reconfiguration of a particular mode of operation may be made by a user via interaction with a user interface of hearing assist device 1601. Furthermore, device 1603 includes a mode select and reconfiguration assist module 1663 that enables a user to select and reconfigure a particular mode of operation through interaction with a user interface of device 1603. Any mode selection or reconfiguration information input to device 1603 may be passed to hearing assist device 1601 when a communication pathway between the two devices is established. As will be discussed below, in certain embodiments, device 1603 may be capable of providing a more elaborate, intuitive and user-friendly user interface by which a user can select and reconfigure operational modes of hearing assist device 1601.

Mode select and reconfiguration module 1631 and/or mode select and reconfiguration assist module 1663 may each be further configured to enable a user to define contexts and circumstances in which a particular mode of operation of hearing assist device 1601 should be activated or deactivated.

Local Storage: Audio Playback Queue.

Local storage 1635 of hearing assist device 1601 includes an audio playback queue 1637. Audio playback queue 1637 is configured to store a limited amount of audio content that has been received by hearing assist device 1601 so that it can be selectively played back by a wearer thereof. This feature enables the wearer to selectively play back certain audio content (such as words spoken by another or the like). For example, the last 5 seconds of audio may be played back. Such playback may be carried out at a higher volume depending upon the configuration. Such playback may be deemed desirable, for example, if the wearer did not fully comprehend something that was just said to him/her.

Audio playback queue 1637 may comprise a first-in-first-out (FIFO) queue such that only the last few seconds or minutes of audio received by hearing assist device 1601 will be stored therein at any time. The audio signals stored in audio playback queue 1637 may comprise processed audio signals (such as audio signals that have already been processed by enhanced audio processing module 1627) or unprocessed audio signals. In the latter case, the audio signals stored in audio playback queue 1637 may be processed by enhanced audio processing module 1627 before being played back to a wearer of hearing assist device 1601. In an embodiment in which a user is wearing two hearing assist devices, a left ear queue and a right ear queue may be maintained.
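
A minimal sketch of such a FIFO queue follows, assuming a fixed frame rate so that capacity maps to a duration; the class and parameter names are hypothetical:

    # Hypothetical sketch: a fixed-capacity FIFO retaining only the most
    # recent audio frames, as audio playback queue 1637 might.
    from collections import deque

    class AudioPlaybackQueue:
        def __init__(self, max_frames):
            self.frames = deque(maxlen=max_frames)  # oldest frames fall out

        def push(self, frame):
            self.frames.append(frame)

        def replay_last(self, n):
            """Return the n most recent frames for playback."""
            return list(self.frames)[-n:]

    # At 100 frames per second, 500 frames holds the last 5 seconds of audio.
    q = AudioPlaybackQueue(max_frames=500)
    for i in range(600):
        q.push(("frame", i))
    print(q.replay_last(3))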

When a communication pathway has been established between hearing assist device 1601 and device 1603, audio playback queue 1669 of device 1603 may also operate to perform all or part of the audio storage operation that would otherwise be performed by audio playback queue 1637 of hearing assist device 1601. Thus, audio playback queue 1669 may also support the aforementioned audio playback functionality by storing a limited amount of audio content received by hearing assist device 1601 and transmitted to device 1603. By so doing, power and storage resources of hearing assist device 1601 may be conserved. Furthermore, since device 1603 may have greater storage resources than hearing assist device 1601, audio playback queue 1669 provided by device 1603 may be capable of storing more and/or higher quality audio content than can be stored by audio playback queue 1637.

In an alternate embodiment in which device 1603 is carried by or otherwise locally accessible to a wearer of hearing assist device 1601, device 1603 may independently record ambient audio via one or more microphones thereof and store such audio in audio playback queue 1669 for later playback to the wearer. Such playback may occur via one or more speakers of hearing assist device 1601 or, alternatively, via one or more speakers of device 1603. Playback by device 1603 may be opted for, for example, in a case where hearing assist device 1601 is in a low power state, or is missing or fully discharged.

Various user interface techniques may be used to initiate playback of recorded audio in accordance with different embodiments. For example, in an embodiment, pressing a button on or tapping hearing assist device 1601 may initiate playback of a limited amount of audio. In an embodiment in which device 1603 is carried by or is otherwise locally accessible to the wearer of hearing assist device 1601, playback may be initiated by interacting with a user interface of device 1603, such as by pressing a button or tapping an icon on a touchscreen of device 1603. Furthermore, uttering certain words or sounds may trigger playback, such as “repeat” or “playback.” This feature can be implemented using the speech recognition functionality of hearing assist device 1601 or device 1603.

In certain embodiments, recording of audio may be carried out over extended periods of time (for example, minutes, tens of minutes, or hours). In accordance with such embodiments, audio playback queue 1669 may be relied upon to store the recorded audio content, as device 1603 may have access to greater storage resources than hearing assist device 1601. Audio compression may be used in any of the aforementioned implementations to reduce consumption of storage.

It is noted that audio may be recorded for purposes other than playing back recently received audio. For example, recording may be used to capture the content of meetings, concerts, or other events that a wearer of hearing assist device 1601 attends so that such audio can be replayed at a later time or shared with others. Recording may also be used for health reasons. For example, a wearer's breathing noises may be recorded while the wearer is sleeping and later analyzed to determine whether or not the wearer suffers from sleep apnea. However, these are examples only, and other uses may exist for such recording functionality.

To help further illustrate the audio playback functionality, FIG. 18 depicts a flowchart 1800 of a method for providing audio playback support to a hearing assist device, such as hearing assist device 1601. As shown in FIG. 18, the method of flowchart 1800 begins at step 1802, in which an audio signal obtained via at least one microphone of the hearing assist device is received. At step 1804, a copy of the received audio signal is stored in an audio playback queue. At step 1806, the copy of the received audio signal is retrieved from the audio playback queue for playback to a wearer of the hearing assist device. In accordance with one embodiment, each of steps 1802, 1804 and 1806 is performed by a hearing assist device, such as hearing assist device 1601. In accordance with an alternate embodiment, each of steps 1802, 1804 and 1806 is performed by a device or service that is external to the hearing assist device and communicatively connected thereto via a communication pathway, such as device 1603 or a service implemented by device 1603. The method of flowchart 1800 may further include playing back the copy of the received audio signal to the wearer of the hearing assist device via at least one speaker of the hearing assist device or via at least one speaker of a portable electronic device that is carried by or otherwise accessible to the wearer of the hearing assist device.

Local Storage: Information and Settings.

Local storage 1635 also stores information and settings 1639 associated with hearing assist device 1601, a user thereof, a device paired thereto, and services accessed by or on behalf of hearing assist device 1601. Such information and settings may include, for example, owner information (which may be used, for example, to recognize and/or authenticate an owner of hearing assist device 1601), security information (including but not limited to passwords, passcodes, encryption keys or the like) used to facilitate private and secure communication with external devices (such as device 1603), and account information useful for signing in to various services available on certain external computer systems. Such information and settings may also include personalized selections and controls relating to user-configurable aspects of the operation of hearing assist device 1601 and/or to user-configurable aspects of the operation of any device with which hearing assist device 1601 may be paired, or any services (cloud-based or otherwise) that may be accessed by or on behalf of hearing assist device 1601.

As shown in FIG. 16, storage 1667 of device 1603 also includes information and settings 1671 associated with hearing assist device 1601, a user thereof, a device paired thereto, and services accessed by or on behalf of hearing assist device 1601. Information and settings 1671 may comprise a backup copy of information and settings 1639 stored on hearing assist device 1601. Such a backup copy may be updated periodically when hearing assist device 1601 and device 1603 are communicatively linked. Such a backup copy may be maintained on device 1603 in order to ensure that important data is not lost or otherwise rendered inaccessible if hearing assist device 1601 is lost or runs out of power. In a further embodiment, information and settings 1639 stored on hearing assist device 1601 may be temporarily or permanently moved to device 1603 to free up storage space on hearing assist device 1601, in which case information and settings 1671 may comprise the only copy of such data. In a still further embodiment, information and settings 1671 stored on device 1603 may comprise a superset of information and settings 1639 stored on hearing assist device 1601. In accordance with such an embodiment, hearing assist device 1601 may selectively retrieve necessary information and settings from device 1603 on an as-needed basis and cache only a subset of such data in local storage 1635.
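
The as-needed retrieval and caching arrangement described in the last sentence might be sketched as follows; the settings dictionary and fetch mechanism are hypothetical stand-ins:

    # Hypothetical sketch: hearing assist device 1601 caches settings locally
    # and fetches missing entries from the superset held on device 1603.
    REMOTE_SETTINGS = {"owner": "J. Doe", "eq_profile": "profile_a",
                       "alert_volume": 7}
    local_cache = {}

    def get_setting(key):
        if key not in local_cache:
            # Stand-in for retrieval over the communication pathway.
            local_cache[key] = REMOTE_SETTINGS[key]
        return local_cache[key]

    print(get_setting("eq_profile"))  # fetched once, cached thereafter
    print(local_cache)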

Sensor Components and Associated Circuitry.

As noted above, sensor components and associated circuitry 1641 of hearing assist device 1601 may include any number of sensors including but not limited to one or more microphones, bone conduction sensors, temperature sensors, blood pressure sensors, blood glucose sensors, pulse oximetry sensors, pH sensors, vibration sensors, accelerometers, gyros, magnetos, or the like. In an embodiment in which device 1603 comprises a portable electronic device that is carried by or otherwise locally accessible to a wearer of hearing assist device 1601 (such as portable electronic device 1505), the sensor components and associated circuitry of device 1603 may also include all or some subset of the foregoing sensors. For example, in an embodiment, device 1603 may comprise a smart phone that includes one or more microphones, accelerometers, gyros, or magnetos.

In accordance with such an embodiment, when a communication pathway has been established between hearing assist device 1601 and device 1603, one or more of the sensors included in device 1603 may be used to perform all or a portion of the functions performed by corresponding sensor(s) in hearing assist device 1601. By utilizing such sensor(s) of device 1603, battery power of hearing assist device 1601 may be conserved.

Furthermore, data provided by the sensors included within device 1603 may be used to augment or verify information provided by the sensors within hearing assist device 1601. For example, information provided by any accelerometers, gyros or magnetos included within device 1603 may be used to provide enhanced information regarding a current body position (for example, standing up, leaning over or lying down) and/or orientation of the wearer of hearing assist device 1601. Device 1603 may also include a GPS device that can be utilized to provide enhanced location information regarding the wearer of hearing assist device 1601. Furthermore, device 1603 may include its own set of health monitoring sensors that can produce data that can be combined with data produced by health monitoring sensors of hearing assist device 1601 to provide a more accurate or complete picture of the state of health of the wearer of hearing assist device 1601.

User Interface Components and Associated Circuitry.

Depending upon the implementation, hearing assist device 1601 may have a very simple user interface or a user interface that is more elaborate. For example, in an embodiment in which hearing assist device 1601 comprises an ear bud, the user interface thereof may comprise very simple mechanical elements such as switches, buttons or dials. This may be due to the very limited surface area available for supporting such an interface. Even with a small form factor device, however, a voice-based user interface or a simple touch-based or tap-based user interface based on the use of capacitive sensing is possible. Also, head motion sensing, local or remote voice activity detection (VAD), or audio monitoring may be used to place a hearing assist device into a fully active state. In contrast, in an embodiment in which hearing assist device 1601 comprises an integrated part of a pair of glasses, a visor, or a helmet, a more elaborate user interface comprising one or more displays and other features may be possible.

In an embodiment in which device 1603 comprises a portable electronic device that is carried by or otherwise locally accessible to a wearer of hearing assist device 1601 (such as portable electronic device 1505), supplemental user interface components and associated circuitry 1675 of device 1603 may provide a means by which a user can interact with hearing assist device 1601, thereby extending the user interface of that device. For example, device 1603 may comprise a phone or tablet computer having a touch screen display that can be used to interact with and manage the features of hearing assist device 1601. In accordance with such an embodiment, an application may be downloaded to or otherwise installed on device 1603 that enables a user thereof to interact with and manage the features of hearing assist device 1601 by interacting with a touch screen display or other user interface element of device 1603. This can enable a more elaborate, intuitive and user-friendly user interface to be designed for hearing assist device 1601. Such user interface may be made accessible to a user only when a communication pathway is established between device 1603 and hearing assist device 1601 so that changes to the configuration of hearing assist device 1601 can be applied to that device in real time. Alternatively, such user interface may be made accessible to a user even when there is no communication pathway established between device 1603 and hearing assist device 1601. In this case, any changes made to the configuration of hearing assist device 1601 via the user interface provided by device 1603 may be stored on device 1603 and then later transmitted to hearing assist device 1601 when a suitable communication pathway becomes available.

VI. Hearing Assist Device with External Audio Quality Support

In accordance with the embodiments described above in reference to FIGS. 15 and 16, the quality of audio content received by a hearing assist device may be improved by utilizing an external device or service to process such audio content when such external device or service is communicatively connected to the hearing assist device. For example, as discussed above in reference to FIG. 16, enhanced audio processing assist module 1659 of device 1603 may process audio content received from hearing assist device 1601 to achieve a desired frequency response and/or spatial signaling and then return the processed audio content to hearing assist device 1601 for playback thereby. Furthermore, any other audio processing technique that may have the effect of improving audio quality may be applied by such external device or service, including but not limited to any of a variety of noise suppression or speech intelligibility enhancement techniques, whether presently known or hereinafter developed. Whether or not such connected external device or service is utilized to perform such enhanced processing may depend on a variety of factors, including a current state of a battery of the hearing assist device, a current selected mode of operation of the hearing assist device, or the like.

In addition to processing audio content received from a hearing assist device, an external device (such as portable electronic device 1505) may forward audio content to another device to which it is communicatively connected (for example, any device used to implement hearing assist device support service(s) 1511 or support personnel system(s) 1515) so that such audio content may be processed by such other device.

In a further embodiment, the audio that is remotely processed and returned to the hearing assist device is audio that is captured by one or more microphones of an external device rather than by the microphone(s) of the hearing assist device itself. This enables the hearing assist device to avoid having to capture, package and transmit audio, thereby conserving battery power and other resources. For example, with continued reference to system 1600 of FIG. 16, in an embodiment in which device 1603 comprises a portable electronic device carried by or otherwise locally accessible to a wearer of hearing assist device 1601, one or more microphones of device 1603 may be used to capture audio content from an environment in which the wearer is located. In this case, any enhanced audio processing may be performed by device 1603 or by a device or service accessible thereto. The processed audio content may then be delivered by device 1603 to hearing assist device 1601 for playback thereby. Additionally or alternatively, such processed audio content may be played back via one or more speakers of device 1603 itself. The foregoing approach to audio processing may be deemed desirable, for example, if hearing assist device 1601 is in a very low power or even non-functioning state. In certain embodiments in which device 1603 comprises a device having more, larger and/or more sensitive microphones than those available on hearing assist device 1601, the foregoing approach to audio enhancement may actually produce higher quality audio than would be produced using only the microphone(s) of hearing assist device 1601.

The foregoing example assumes that audio content is processed for the purpose of enhancing the quality thereof. However, such audio content may also be processed for speech recognition purposes. In the case of speech recognition, the audio content may comprise one or more voice commands that are intended to initiate or provide input to a process executing outside of hearing assist device 1601. In such a case, like principles apply in that the audio content may be captured by microphone(s) of device 1603 and processed by device 1603 or by a device or service accessible thereto. However, in this case, what is returned to the wearer may comprise something other than a processed version of the original audio content captured by device 1603. For example, if the voice commands were intended to initiate an Internet search, then what is returned to the wearer may comprise the results of such a search. The search results may be presented on a display of device 1603, for example. Alternatively, if hearing assist device 1601 comprises an integrated part of a pair of glasses, visor or helmet having a display, then the search results may be presented on that display. Still further, such search results could be played back via one or more speakers of device 1603 or hearing assist device 1601 using text-to-speech conversion.

In a further embodiment, a wearer of hearing assist device 1601 may initiate operation in a mode in which audio content is captured by one or more microphone(s) of device 1603 and processed by device 1603 (or by a device or service accessible to device 1603) to achieve desired audio effects, such as custom equalization, emphasized surround sound effects, or the like. For example, in the case of surround sound, sensors included in hearing assist device 1601 and/or device 1603 may be used to determine a position of the wearer's head relative to one or more audio sources and then to modify audio content to achieve an appropriate surround sound effect given the position of the wearer's head and the location of the audio source(s). The processed audio may then be delivered to the wearer via one or more speakers of hearing assist device 1601, a second hearing assist device, and/or device 1603. To support surround sound implementations, each hearing assist device may include multiple speakers (such as piezoelectric speakers) to deliver a surround sound effect.

In an embodiment, the desired audio effects described above may be defined by a user and stored as part of a profile associated with the user and/or with a particular operational mode of a hearing assist device, wherein the operational mode may be further associated with certain contexts or conditions in which the mode should be utilized. Such profile may be formatted in a standardized manner such that it can be used by a variety of hearing assist devices and audio reproduction systems.

A wearer of hearing assist device 1601 may define and initiate any of the foregoing operational modes by interacting with a user interface of hearing assist device 1601 or a user interface of device 1603 depending upon the implementation.

The improvement of audio quality as described herein may include suppressing audio components generated by certain audio sources and/or boosting audio components generated by certain other audio sources. Such suppression or boosting may be performed by device 1603 (and/or a device or service accessible thereto), with processed audio being returned to hearing assist device 1601 for playback thereby. Additionally or alternatively, processed audio may be played back by device 1603 in scenarios in which device 1603 is local with respect to the wearer of hearing assist device 1601. In accordance with the foregoing scenarios, the original audio may be captured by one or more microphones of hearing assist device 1601, a second hearing assist device, and/or device 1603 when device 1603 is local with respect to the wearer of hearing assist device 1601.

With respect to noise suppression, the noise suppression function may utilize not only audio signal(s) captured by the microphones of the hearing assist device(s) worn by a user but also the audio signal(s) captured by the microphone(s) of a portable electronic device carried by or otherwise accessible to the user. As is known to persons skilled in the art of audio processing, by adding additional and diverse microphone reference signals, the ability of a noise suppression algorithm to identify and suppress noise can be improved.

For example, FIG. 19 is a block diagram of a noise suppression system 1900 that may be utilized by a hearing assist device or a device/service communicatively connected thereto in accordance with an embodiment. Noise suppression system 1900 is configured to process an audio signal produced by a microphone of a left ear hearing assist device (denoted MIC L), an audio signal produced by a microphone of a right ear hearing assist device (denoted MIC R), and an audio signal produced by a microphone of an external device (denoted MIC EXT) to produce a noise-suppressed audio signal for playback to the left ear of a user (denoted LEFT).

In particular, noise suppression system 1900 includes an amplifier 1902 that amplifies the MIC L signal. Such signal may also be converted from analog to digital form by an A/D converter (not shown in FIG. 19). The output of amplifier 1902 is passed to a noise suppressor 1908. Noise suppression system 1900 further includes an amplifier 1904 that amplifies the MIC R signal. Such signal may also be converted from analog to digital form by an A/D converter (not shown in FIG. 19). The output of amplifier 1904 is passed to noise suppressor 1908. Noise suppression system 1900 still further includes an amplifier 1906 that amplifies the MIC EXT signal. Such signal may also be converted from analog to digital form by an A/D converter (not shown in FIG. 19). The output of amplifier 1906 is passed to noise suppressor 1908. Noise suppressor 1908 applies a noise suppression algorithm that utilizes all three amplified microphone signals to generate a noise-suppressed version of the MIC L signal. The noise-suppressed audio signal generated by noise suppressor 1908 is passed to an amplifier 1910 that amplifies it to produce the LEFT audio signal. Such signal may also be converted from digital to analog form by a D/A converter (not shown in FIG. 19) prior to playback.
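
A deliberately crude sketch of this three-microphone arrangement follows; it subtracts a scaled average of the reference signals in the time domain, whereas a practical noise suppressor 1908 would typically use frequency-domain adaptive filtering, and all gain values are placeholders:

    # Hypothetical sketch of the FIG. 19 topology: amplify all three
    # microphone signals, form a noise reference from MIC R and MIC EXT,
    # and subtract a scaled version from the MIC L path.
    def suppress(mic_l, mic_r, mic_ext, gain=2.0, alpha=0.5):
        a_l = [gain * s for s in mic_l]    # amplifier 1902
        a_r = [gain * s for s in mic_r]    # amplifier 1904
        a_e = [gain * s for s in mic_ext]  # amplifier 1906
        cleaned = [l - alpha * (r + e) / 2.0      # noise suppressor 1908
                   for l, r, e in zip(a_l, a_r, a_e)]
        return [1.5 * s for s in cleaned]  # output amplifier 1910

    print(suppress([0.3, 0.4], [0.1, 0.1], [0.12, 0.09]))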

It is noted that to operate in such a manner, noise suppression system 1900 must have access to the MIC L signal obtained by the left ear hearing assist device, the MIC R signal obtained by the right ear hearing assist device, and the MIC EXT signal obtained by the external device. This can be achieved by establishing suitable communication pathways between such devices. For example, in an embodiment in which noise suppression system 1900 is implemented in a portable electronic device carried by a user, the MIC L and MIC R signals may be obtained through skin-based communication and/or BLE communication between the portable electronic device and one or both of the two hearing assist devices, while the MIC EXT signal can be obtained directly from a microphone of the portable electronic device. Still other microphone signals other than those shown in FIG. 19 may be used to improve the performance of a noise suppressor.

In further embodiments, a selection may be made between using audio input provided by the microphone(s) of the hearing assist device(s) and using audio input provided by the microphone(s) of the portable electronic device. Such selection may be made manually by the wearer of the hearing assist device(s) or may be made automatically by the hearing assist device(s) and/or the portable electronic device based on a variety of factors, including but not limited to the state of the battery of the hearing assist device(s), the quality of the audio signals being captured by each device, the environment in which the wearer is located, or the like.

In accordance with further embodiments, improving audio quality may also comprise selectively applying a boosting or amplification function to certain types of audio signals (for example, music or speech), to components of an audio signal emanating from a certain source, and/or to components of an audio signal emanating from a particular direction, while not amplifying or actively suppressing other audio signal types or components. Such processing may occur responsive to the user initiating a particular mode of operation or may occur automatically in response to detecting the existence of certain predefined conditions.

For example, in one embodiment, a user may activate a “forward only” mode in which audio signals emanating from in front of the user are boosted and signals emanating from other directions are not boosted or are actively attenuated. Such mode of operation may be desired when the user is engaging in conversation with a person that is directly in front of him/her. Additionally, such mode of operation may automatically be activated if it can be determined from sensor data obtained by the hearing assist device(s) worn by the user and/or by a portable electronic device carried by the user that the user is engaging in conversation with a person that is directly in front of him/her. In a like manner, a user may activate a “television” mode in which audio signals emanating from a television are boosted and signals emanating from other sources are not boosted or are actively attenuated. Additionally, such mode of operation may automatically be activated if it can be determined from sensor data obtained by the hearing assist device(s) worn by the user and/or by a portable electronic device carried by the user that the user is watching television.
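
For illustration, the gain policy of such a “forward only” mode might be sketched as follows, assuming that a direction-of-arrival estimate (in degrees, with 0 meaning directly ahead of the user) is available from other processing; the beam width and gain values are placeholders:

    # Hypothetical sketch: boost sources near straight ahead; attenuate the
    # rest.
    def forward_only_gain(doa_degrees, beam_width=30.0,
                          boost=2.0, attenuate=0.25):
        return boost if abs(doa_degrees) <= beam_width / 2.0 else attenuate

    # Sources at 5, 90 and 170 degrees: only the frontal one is boosted.
    for angle in (5.0, 90.0, 170.0):
        print(angle, "->", forward_only_gain(angle))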

In accordance with further embodiments, the audio processing functionality may be designed, programmed or otherwise configured such that certain sounds or noises should never be suppressed. For example, the audio processing functionality may be configured to always pass certain sounds such as extremely elevated sounds, a telephone or doorbell ringing, the honking of a car horn, an alarm or siren sounding, repeated sounds, or the like, to ensure that the wearer is made aware of important events. Likewise, the audio processing functionality may utilize speech recognition to ensure that certain uttered words are passed to the wearer, such as the wearer's name, the word “help” or other words.

In accordance with further embodiments, the types of audio that are boosted, passed or suppressed may be determined based on detecting prior and/or current activities of the user, inactivity of the user, time of day, or the like. For example, if it is determined from sensor data and from information derived therefrom that a user is sleeping, then all audio input may be suppressed with certain predefined exceptions. Likewise, certain sounds or verbal instructions may be injected at certain times, such as an alarm or wake-up music in the morning.

For each of the modes of operation described above, the required audio processing may be performed either by a hearing assist device, such as hearing assist device 1601, or by an external device, such as device 1603, with which hearing assist device 1601 is communicatively connected. By utilizing an external device, power, processing and storage resources of the hearing assist device may advantageously be conserved.

The foregoing describes the use of an external device or service to provide improved audio quality to a hearing assist device. It should be noted, however, that in a scenario in which a user is wearing two hearing assist devices that are capable of communicating with each other, one hearing assist device may be selected to perform any of the audio processing tasks described herein on behalf of the other. Such selection may be by design in that one hearing assist device is equipped with more audio processing capabilities than the other. Alternatively, such selection may be performed dynamically based on a variety of factors including the comparative battery levels of each hearing assist device, a processing load currently assigned to each hearing assist device, or the like. Any audio that is processed by a first hearing assist device on behalf of a second hearing assist device may originate from one or more microphones of the first hearing assist device, from one or more microphones of the second hearing assist device, or from one or more microphones of a portable electronic device that is carried by or otherwise locally accessible to a wearer of the first and second hearing assist devices.

To help further illustrate the foregoing concepts, FIG. 20 depicts a flowchart 2000 of a method for providing external operational support to a hearing assist device worn by a user, such as hearing assist device 1601. As shown in FIG. 20, the method of flowchart 2000 begins at step 2002 in which a communication pathway is established to the hearing assist device. At step 2004, an audio signal obtained by the hearing assist device is received via the communication pathway. At step 2006, the audio signal is processed to obtain processing results. At step 2008, the processing results are transmitted to the hearing assist device via the communication pathway.
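
Expressed as a sketch, with each helper a hypothetical stand-in for the corresponding step of flowchart 2000:

    # Hypothetical sketch: the offload loop of flowchart 2000 as run by an
    # external device or service.
    def establish_pathway():                  # step 2002
        return "pathway"

    def receive_audio(pathway):               # step 2004
        return [0.2, 0.5, 0.1]

    def process_audio(audio):                 # step 2006 (e.g., EQ, noise
        return [2.0 * s for s in audio]       # suppression, or recognition)

    def transmit_results(pathway, results):   # step 2008
        print("sending", results, "via", pathway)

    pathway = establish_pathway()
    transmit_results(pathway, process_audio(receive_audio(pathway)))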

Depending upon the implementation, each of the establishing, receiving, processing and transmitting steps may be performed by one of a second hearing assist device worn by the user, a portable electronic device carried by or otherwise accessible to the user, or a device or service that is capable of communicating with the hearing assist device via a portable electronic device carried by or otherwise accessible to the user. As noted above, device 1603 may represent either a portable electronic device carried by or otherwise accessible to the user or a device or service that is capable of communicating with the hearing assist device via a portable electronic device carried by or otherwise accessible to the user.

In accordance with certain embodiments, step 2002 of flowchart 2000 may comprise establishing a communication link with the hearing assist device using one of NFC, BTLE technology, WPT technology, telecoil, or skin-based communication technology.

In one embodiment, step 2006 of flowchart 2000 comprises processing the audio signal to generate an enhanced audio signal having a desired frequency response associated with the user and step 2008 comprises transmitting the enhanced audio signal to the hearing assist device via the communication pathway for playback thereby.

In another embodiment, step 2006 of flowchart 2000 comprises processing the audio signal to generate an enhanced audio signal having a desired spatial signaling characteristic associated with the user and step 2008 comprises transmitting the enhanced audio signal to the hearing assist device via the communication pathway for playback thereby.

In a further embodiment, step 2006 of flowchart 2000 comprises applying noise suppression to the audio signal to generate a noise-suppressed audio signal and step 2008 comprises transmitting the noise-suppressed audio signal to the hearing assist device via the communication pathway for playback thereby. In further accordance with such an embodiment, applying noise suppression to the audio signal may comprise processing the audio signal and at least one additional audio signal obtained by a portable electronic device carried by or otherwise accessible to the user.

In a still further embodiment, step 2006 of flowchart 2000 comprises applying speech recognition to the audio signal to identify one or more recognized words.

FIG. 21 depicts a flowchart 2100 that illustrates steps that may be performed in addition to those shown in flowchart 2000 to provide external operational support to a hearing assist device worn by a user, such as hearing assist device 1601. As shown in FIG. 21, the first additional step is step 2102, which comprises receiving a second audio signal obtained by a portable electronic device that is carried by or otherwise accessible to the user. At step 2104, the second audio signal is processed to obtain processing results. At step 2106, the processing results are transmitted to the portable electronic device. These additional steps encompass the scenario wherein at least the audio capturing and playback tasks are offloaded from the hearing assist device to the portable electronic device.

FIG. 22 depicts a flowchart 2200 that illustrates steps that may be performed in addition to those shown in flowchart 2000 to provide external operational support to a hearing assist device worn by a user, such as hearing assist device 1601. As shown in FIG. 22, the first additional step is step 2202, which comprises receiving a second audio signal obtained by a portable electronic device that is carried by or otherwise accessible to the user. At step 2204, the second audio signal is processed to obtain processing results. At step 2206, the processing results are transmitted to the hearing assist device. These additional steps encompass the scenario wherein audio capturing tasks are offloaded from the hearing assist device to the portable electronic device, but the audio playback task is retained by the hearing assist device.

FIG. 23 depicts a flowchart 2300 that illustrates steps that may be performed in addition to those shown in flowchart 2000 to provide external operational support to a hearing assist device worn by a user, such as hearing assist device 1601. As shown in FIG. 23, the first additional step is step 2302, which comprises receiving a second audio signal obtained by the hearing assist device. At step 2304, the second audio signal is processed to obtain processing results. At step 2306, the processing results are transmitted to a portable electronic device that is carried by or otherwise accessible to the user. These additional steps encompass the scenario wherein audio capturing tasks are retained by the hearing assist device while audio playback tasks are offloaded to the portable electronic device.

VII. Hearing Assist Device with Active Audio Filtering Supporting Substitute Audio Input

In accordance with further embodiments, an audio signal received by one or more microphones of a hearing assist device may be suppressed or blocked while a substitute audio input signal may be delivered to the wearer. For example, a language translation feature may be implemented in which an audio signal received by one or more microphones of a hearing assist device is transmitted to an external device or service. The external device or service applies a combination of speech recognition and translation thereto to synthesize a substitute audio signal. The substitute audio signal comprises a translated version of the speech included in the original audio signal. The substitute audio signal is then transmitted back to the hearing assist device for playback thereby. While this is occurring, the hearing assist device utilizes active filtering to suppress the original audio signal or blocks it entirely, so that the wearer can clearly hear the substitute audio signal being played back through a speaker of the hearing assist device.
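
The translation pipeline described above might be sketched as follows, with each stage a hypothetical stand-in for functionality the external device or service would actually provide:

    # Hypothetical sketch: recognize speech, translate it, synthesize a
    # substitute audio signal, and flag the original for active suppression.
    def recognize(audio):
        return "hola amigo"                   # speech recognition stand-in

    def translate(text):
        return {"hola amigo": "hello friend"}.get(text, text)

    def synthesize(text):
        return ("TTS:" + text).encode()       # substitute audio stand-in

    def substitute_audio(mic_audio):
        substitute = synthesize(translate(recognize(mic_audio)))
        suppress_original = True              # active filtering engaged
        return substitute, suppress_original

    print(substitute_audio(b"\x00\x01"))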

As another example, an audio signal generated by a television, a DVD player, a compact disc (CD) player, a set top box, a portable media player, a handheld gaming device, or other entertainment device may be routed to a hearing assist device worn by a user for playback thereby. Such entertainment devices may also include smart phones, tablet computers, and other computing devices capable of running entertainment applications. While the user is listening to the audio being generated by the entertainment device, the hearing assist device may operate to suppress ambient background noise using an active filtering function, thereby providing the user with an improved listening experience. The delivery of the audio signal from the entertainment device to the hearing assist device and suppression of ambient background noise may occur in response to the establishment of a communication link between the hearing assist device and the entertainment device, or in response to other detectable factors, such as the hearing assist device being within a certain range of the entertainment device or the like. Conversely, the delivery of the audio signal from the entertainment device to the hearing assist device and suppression of ambient background noise may be discontinued in response to the breaking of a communication link between the hearing assist device and the entertainment device, or in response to other detectable factors, such as the hearing assist device passing outside of a certain range of the entertainment device or the like.

For safety reasons as well as certain practical reasons, there may be certain sounds or noises that should never be suppressed. Accordingly, the functionality described above for suppressing ambient audio in favor of a substitute audio stream could be configured to always pass certain sounds such as extremely elevated sounds, a telephone or doorbell ringing, the honking of a car horn, an alarm or siren sounding, repeated sounds, or the like, to ensure that the wearer is made aware of important events. Likewise, such functionality may utilize speech recognition to ensure that certain uttered words are always passed to the wearer, such as the wearer's name, the word “help” or other words. The functionality that monitors for such sounds and words may be present in the hearing assist device or in a portable electronic device that is communicatively connected thereto. When such sounds and words are passed to the hearing assist device, the substitute audio stream may be paused or discontinued (for example, a song the wearer was listening to may be paused or discontinued, or a movie the wearer was viewing may be paused or discontinued). Furthermore, when such sounds and words are passed to the hearing assist device, the suppression of ambient noise may also be discontinued.

Generally speaking, a hearing assist device in accordance with an embodiment can receive any number of audio signals and selectively pass one or a mixture of some or all of the audio signals for playback to a wearer thereof. Additionally, a hearing assist device in accordance with such an embodiment can selectively amplify or suppress any one of the aforementioned audio signals. This is illustrated by the block diagram of FIG. 24, which shows an audio processing module 2400 that may be implemented in a hearing assist device in accordance with an embodiment.

As shown in FIG. 24, audio processing module 2400 is capable of receiving at least four different audio signals. These include an audio signal captured by a microphone of the hearing assist device (denoted MIC), an audio signal received via an NFC interface of the hearing assist device (denoted NFC), an audio signal received via a BLE interface of the hearing assist device (denoted BLE), and an audio signal received via a skin-based communication interface of the hearing assist device (denoted SKIN). Audio processing module 2400 is configured to process these audio signals to generate an output audio signal for playback via a speaker 2412.

As shown in FIG. 24, each of the MIC, NFC, BLE and SKIN signals is amplified by a corresponding amplifier 2402, 2412, 2422 and 2432. Each of these signals may also be converted from analog to digital form by a corresponding A/D converter (not shown in FIG. 24). The amplified signals are then passed to a corresponding multiplier 2404, 2414, 2424 and 2434, each of which applies a certain scaling function thereto, wherein such scaling function can be used to determine a relative degree to which each signal will contribute to a final output signal. Furthermore, switches 2406, 2416, 2426 and 2436 can be used to selectively remove the output of any of multipliers 2404, 2414, 2424, and 2434 from the final output signal. Any signals passed through switches 2406, 2416, 2426 and 2436 are received by a mixer 2408 which combines such signals to produce a combined audio signal. The combined audio signal is then passed to an amplifier 2410 which amplifies it to produce the output audio signal that will be played back by speaker 2412. The output audio signal may also be converted from digital to analog form by a D/A converter (not shown in FIG. 24) prior to playback.
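
The FIG. 24 topology can be summarized by the following illustrative sketch, in which each input path carries a gain (amplifier), a scale factor (multiplier) and an on/off switch, and enabled paths are summed and amplified; all numeric values are placeholders:

    # Hypothetical sketch of audio processing module 2400: per-path amplify,
    # scale and switch, then mix and apply the output amplifier.
    def mix_paths(paths, output_gain=1.5):
        """paths: dict of name -> (samples, gain, scale, switch_on)."""
        length = max(len(p[0]) for p in paths.values())
        mixed = [0.0] * length
        for samples, gain, scale, on in paths.values():
            if not on:
                continue                       # switch removes this path
            for i, s in enumerate(samples):
                mixed[i] += gain * scale * s   # amplifier, then multiplier
        return [output_gain * m for m in mixed]

    paths = {
        "MIC":  ([0.2, 0.1], 2.0, 0.5, True),
        "NFC":  ([0.4, 0.4], 1.5, 1.0, True),
        "BLE":  ([0.9, 0.8], 1.5, 0.0, False),  # switched out entirely
        "SKIN": ([0.1, 0.0], 1.0, 0.3, True),
    }
    print(mix_paths(paths))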

In further embodiments, audio processing module 2400 may include additional logic that can apply active filtering, noise suppression, speech intelligibility enhancement, or any of a variety of audio signal processing functions to any of the audio signals received by the hearing assist device. Such functionality can be used to emphasize certain sounds, for example. Additionally, audio processing module 2400 may also include an output path by which the MIC signal can be passed to an external device for remote processing thereof. Such remotely-processed signal may then be returned via any of the NFC, BLE or skin-based communication interfaces discussed above.

FIG. 24 thus illustrates that different audio streams may be picked up by the same hearing assist device. Whether a given audio stream is presented to the wearer may depend on the circumstances, which can change from time to time. Consequently, each audio stream is delivered or filtered at a varying dB intensity, with prescribed equalization, as managed by the hearing assist device or by any one or more of the devices or services to which the hearing assist device may be communicatively connected.

VIII. Hearing Assist System with a Backup Hearing Assist Terminal

In an embodiment, a portable electronic device (such as portable electronic device 1505) carried by or otherwise locally accessible to a wearer of a hearing assist device (such as hearing assist device 1501 or 1503) is configured to detect when the hearing assist device is missing from the wearer's ear or discharged. In such a scenario, the portable electronic device responds by entering a hearing assist mode in which it captures ambient audio and processes it in accordance with a prescription associated with the wearer. As discussed above, such a prescription may specify, for example, a desired frequency response or other desired characteristics of audio to be played back to the wearer. Such hearing assist mode may also be manually triggered by the wearer through interaction with a user interface of the portable electronic device. In an embodiment in which the portable electronic device comprises a telephone, the hearing assist mode may also be used to equalize and amplify incoming telephone audio. The functionality of the hearing assist mode may be included in an application that can be downloaded or otherwise installed on the portable electronic device.
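
One plausible shape for the audio processing performed in such a hearing assist mode is sketched below, modeling the wearer's prescription as a set of peaking-equalizer bands built from the standard RBJ audio-EQ cookbook formulas; the band list, sample rate, and function names are illustrative assumptions rather than details disclosed here.

    import numpy as np
    from scipy.signal import sosfilt

    def peaking_band(f0_hz, gain_db, fs, q=1.0):
        """One RBJ-cookbook peaking-EQ biquad, returned as a second-order section."""
        A = 10.0 ** (gain_db / 40.0)
        w0 = 2.0 * np.pi * f0_hz / fs
        alpha = np.sin(w0) / (2.0 * q)
        b0, b1, b2 = 1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A
        a0, a1, a2 = 1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A
        return np.array([b0, b1, b2, a0, a1, a2]) / a0  # normalize so a0 == 1

    def apply_prescription(samples, prescription, fs=16000):
        """prescription: list of (center_hz, gain_db) pairs, e.g. [(1000, 6.0), (4000, 12.0)]."""
        sos = np.vstack([peaking_band(f, g, fs) for f, g in prescription])
        return sosfilt(sos, samples)  # equalized playback tailored to the wearer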

In accordance with certain embodiments, the activation and use of the hearing assist mode of the portable electronic device may be carried out in a manner that is not immediately discernible to others who may be observing the user. For example, in an embodiment in which the portable electronic device comprises a telephone, the telephone may be programmed to enter the hearing assist mode when the user raises the telephone to his/her ear and utters a particular activation word or words. Such a feature enables the user to appear as if he or she is simply using his/her phone.

In an embodiment, the portable electronic device may be configured to use one or more sensors (for example, a camera and/or microphone) to determine who the current user of the portable electronic device is and to automatically select the appropriate prescription for that user when entering the hearing assist mode. Alternatively, the user may interact with a user interface of the portable electronic device to select an appropriate volume level and prescription.

In accordance with further embodiments, the hearing assist device may be capable of issuing a warning message to the wearer thereof when the battery level of the hearing assist device is low. In response to receiving such a warning message, the wearer may utilize the portable electronic device to perform a recharging operation by bringing the portable electronic device within a range of the hearing assist device that is suitable for wirelessly transferring power thereto, as was previously described. Additionally or alternatively, the wearer may activate a mode of operation in which certain operations normally performed by the hearing assist device are instead performed by the portable electronic device or by a device or service that is communicatively connected to the portable electronic device.

IX. Hearing Assist Device Configuration Using Hand-Held Terminal Support

In an embodiment, a portable electronic device (such as portable electronic device 1505) may be used to perform a hearing test on a wearer of a hearing assist device (such as hearing assist device 1501 or 1503). The hearing test may involve causing the hearing assist device to play back sounds having certain frequencies at certain volumes and soliciting feedback from the wearer regarding whether such sounds were heard. Still other types of hearing tests may be performed. For example, a hearing test designed to determine a head-related transfer function useful in achieving desired spatial signaling for a particular user may also be administered. The test results may be analyzed to generate a personalized prescription for the wearer. Sensors within the hearing assist device may be used to measure distance to the ear drum or other factors that may influence test results, so that such factors can be accounted for in the analysis. The personalized prescription may then be downloaded or otherwise transmitted to the hearing assist device for implementation thereby. Such a personalized prescription may be formatted in a standardized manner such that it may be used by a variety of hearing assist devices or audio reproduction systems.
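
A simplified version of such a tone-based test might proceed as in the following sketch. The frequency and level grids, the device-supplied callbacks, and the half-gain fitting rule used to turn thresholds into a prescription are all assumptions for illustration.

    TEST_FREQS_HZ = (250, 500, 1000, 2000, 4000, 8000)
    TEST_LEVELS_DB = range(0, 80, 10)  # ascending presentation levels

    def run_hearing_test(play_tone, wearer_heard):
        """play_tone(freq_hz, level_db) and wearer_heard() are supplied by the device."""
        thresholds = {}
        for freq in TEST_FREQS_HZ:
            for level in TEST_LEVELS_DB:
                play_tone(freq, level)
                if wearer_heard():
                    thresholds[freq] = level  # softest audible level at this frequency
                    break
            else:
                thresholds[freq] = None       # not heard at any tested level
        return thresholds

    def thresholds_to_prescription(thresholds, normal_db=20):
        """Map elevated thresholds to per-band gains using a crude half-gain rule."""
        return [(f, 0.5 * max(0, t - normal_db))
                for f, t in thresholds.items() if t is not None]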

In certain embodiments, the test results are processed locally by the portable electronic device to generate a prescription. In alternate embodiments, the test results are transmitted from the portable electronic device to a remote system for automated analysis and/or analysis by a clinician or other qualified party and a prescription is generated via such remote analysis.

X. Hearing Assist Device Types

The hearing assist devices described herein may comprise devices such as those shown in FIGS. 2-6 and 15. However, it is noted that the hearing assist devices described herein may comprise a part of any structure or article that may cover an ear of a user or that may be proximally located to an ear of a user. For example, the hearing assist devices described herein may comprise a part of a headset, a pair of glasses, a visor, or a helmet worn by a user or may be designed to be connected or tethered to such headset, pair of glasses, visor, or helmet.

XI. Conclusion

While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be understood by those skilled in the relevant art(s) that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims

1. A method for providing external operational support to a hearing assist device worn by a user, comprising:

establishing a communication pathway to the hearing assist device;
receiving first signals obtained by the hearing assist device via the communication pathway;
processing the first signals; and
sending a first response to the hearing assist device via the communication pathway.

2. The method of claim 1, wherein the establishing, receiving, processing and sending are performed by one of:

a second hearing assist device worn by the user;
a portable electronic device carried by or otherwise accessible to the user; or
a device or service that is capable of communicating with the hearing assist device via a portable electronic device carried by or otherwise accessible to the user.

3. The method of claim 1, wherein establishing the communication pathway to the hearing assist device comprises establishing a communication link with the hearing assist device using one of near field communication (NFC), BLUETOOTH® low energy (BLE) technology, wireless power transfer (WPT) technology, telecoil, or skin-based communication technology.

4. The method of claim 1, wherein receiving the first signals comprises receiving an audio signal.

5. The method of claim 4, wherein processing the first signals comprises processing the audio signal to generate an enhanced audio signal having a desired frequency response associated with the user; and

wherein sending the first response to the hearing assist device via the communication pathway comprises sending the enhanced audio signal to the hearing assist device via the communication pathway for playback thereby.

6. The method of claim 4, wherein processing the first signals comprises processing the audio signal to generate an enhanced audio signal having a desired spatial signaling characteristic associated with the user; and

wherein sending the first response comprises sending the enhanced audio signal to the hearing assist device via the communication pathway for playback thereby.

7. The method of claim 4, wherein processing the first signals comprises applying noise suppression to the audio signal to generate a noise-suppressed audio signal; and

wherein sending the first response to the hearing assist device via the communication pathway comprises sending the noise-suppressed audio signal to the hearing assist device via the communication pathway for playback thereby.

8. The method of claim 7, wherein applying noise suppression to the audio signal comprises processing the audio signal and at least one additional audio signal obtained by a portable electronic device carried by or otherwise accessible to the user.

9. The method of claim 4, wherein processing the first signals comprises applying speech recognition to the audio signal to identify one or more recognized words.

10. The method of claim 1, wherein receiving the first signals comprises receiving sensor data.

11. The method of claim 10, wherein sending the first response to the hearing assist device via the communication pathway comprises performing at least one of:

sending a command to be executed by the hearing assist device; and
sending audio content for playback to the user.

12. A device that provides external operational support to a hearing assist device, comprising:

a communication interface; and
processing circuitry that is operable to receive first signals from the hearing assist device via the communication interface, process the first signals, and send a first response to the hearing assist device via the communication interface.

13. The device of claim 12, wherein the communication interface comprises one of a near field communication (NFC) interface, a BLUETOOTH® Low Energy interface, a wireless power transfer (WPT) interface, a skin-based communication interface, or a telecoil.

14. The device of claim 12, wherein the first signals comprise an audio signal.

15. The device of claim 12, wherein the first signals comprise sensor data.

16. The device of claim 12, wherein the first response comprises at least one of a command to be executed by the hearing assist device and audio content to be played back to a wearer of the hearing assist device.

17. A method for providing audio playback functionality for a hearing assist device, comprising:

receiving an audio signal obtained via at least one microphone of the hearing assist device;
storing a copy of the received audio signal in an audio playback queue; and
retrieving the copy of the received audio signal from the audio playback queue for playback to a wearer of the hearing assist device.

18. The method of claim 17, wherein the receiving, storing, and retrieving steps are performed by the hearing assist device.

19. The method of claim 17, wherein the receiving, storing, and retrieving steps are performed by a device or service that is external to the hearing assist device and communicatively connected thereto via a communication pathway.

20. The method of claim 17, further comprising playing back the copy of the received audio signal to the wearer of the hearing assist device via at least one speaker of the hearing assist device or via at least one speaker of a portable electronic device that is carried by or otherwise accessible to the wearer of the hearing assist device.
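
For illustration, the playback queue recited in claims 17-20 could be realized with a structure as simple as the following sketch; the capacity and byte-block interface are assumptions, not claim limitations.

    from collections import deque

    class PlaybackQueue:
        """Stores copies of captured audio blocks for later playback to the wearer."""
        def __init__(self, max_blocks=512):
            self._queue = deque(maxlen=max_blocks)  # oldest blocks drop off when full

        def store(self, block):
            self._queue.append(bytes(block))        # keep a copy of the received audio

        def retrieve(self):
            return self._queue.popleft() if self._queue else None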

Patent History
Publication number: 20130343584
Type: Application
Filed: Sep 20, 2012
Publication Date: Dec 26, 2013
Applicant: BROADCOM CORPORATION (Irvine, CA)
Inventors: James D. Bennett (Hroznetin), John Walley (Ladera Ranch, CA)
Application Number: 13/623,435
Classifications
Current U.S. Class: Remote Control, Wireless, Or Alarm (381/315); Hearing Aids, Electrical (381/312); Noise Compensation Circuit (381/317)
International Classification: H04R 25/00 (20060101);