Gesture Detection
Methods, systems, and products sense contactless gestures. A capacitive sensor measures capacitance during performance of a gesture. The capacitive sensor generates an output signal that is compared to a database. The database stores different output signals that are associated with different commands. The corresponding command is executed in response to the performance of the gesture.
This application is a continuation of U.S. patent application Ser. No. 16/010,855, filed Jun. 18, 2018, which is a continuation of U.S. patent application Ser. No. 14/078,982, filed Nov. 13, 2013 (now U.S. Pat. No. 10,025,431). All sections of the aforementioned application(s) and/or patent(s) are incorporated herein by reference in their entirety.
COPYRIGHT NOTIFICATION

A portion of the disclosure of this patent document and its attachments contain material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyrights whatsoever.
BACKGROUND

Gesture detection is common. Many set-top boxes, remote controls, and mobile devices may be controlled using physical gestures. Gestures may even be used to control an automotive environment, such as power windows. In conventional gesture control, a user places her finger on a gesture surface and performs some gesture.
The features, aspects, and advantages of the exemplary embodiments are better understood when the following Detailed Description is read with reference to the accompanying drawings, wherein:
The exemplary embodiments will now be described more fully hereinafter with reference to the accompanying drawings. The exemplary embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. These embodiments are provided so that this disclosure will be thorough and complete and will fully convey the exemplary embodiments to those of ordinary skill in the art. Moreover, all statements herein reciting embodiments, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future (i.e., any elements developed that perform the same function, regardless of structure).
Thus, for example, it will be appreciated by those of ordinary skill in the art that the diagrams, schematics, illustrations, and the like represent conceptual views or processes illustrating the exemplary embodiments. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing associated software. Those of ordinary skill in the art further understand that the exemplary hardware, software, processes, methods, and/or operating systems described herein are for illustrative purposes and, thus, are not intended to be limited to any particular named manufacturer.
As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms “includes,” “comprises,” “including,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element or intervening elements may be present. Furthermore, “connected” or “coupled” as used herein may include wirelessly connected or coupled. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first device could be termed a second device, and, similarly, a second device could be termed a first device without departing from the teachings of the disclosure.
Exemplary embodiments thus greatly improve gesture detection. Conventional gesture detection utilizes infrared vision systems and/or environmental markers (such as motion capture suits). Infrared detection, though, is poor in bright environments, where ambient light typically washes out the infrared spectrum. Indeed, automotive interiors often have large solar glass expanses that make infrared detection infeasible. Exemplary embodiments, instead, detect gestures using the capacitance 30. The gesture detector 24 thus does not rely on the infrared spectrum, so the gesture detector 24 recognizes gestures even in external environments where current sensor technologies fail. The gesture detector 24 may thus be dispersed throughout the automotive interior 20 for detection and interpretation of driver and passenger gestures.
Exemplary embodiments thus greatly increase safety. Conventional automotive interiors have knobs, buttons, and stalks that must be physically manipulated to control a vehicle. Exemplary embodiments, instead, recognize gesture inputs that do not require physical contact with automotive controls. The driver's hand and/or fingers may make movements without removing the driver's eye from the road. Exemplary embodiments recognize the gesture 28 and safely execute the corresponding command 36. The gesture detector 24 recognizes simple snaps and swipes, more complex geometric shapes, and even alphanumeric characters. Whatever the gesture 28, exemplary embodiments allow safe and complete control of the automotive environment.
The gesture 28 may be touchless. Conventional gesture detectors require contact between the hand 26 and some gesture surface. Indeed, many vehicles have conventional touch screens that allow the driver's fingers to scroll or swipe among selections of items and tap to select.
The processor 42 consults a database 50 of gestures. When the output signal 32 is received, the processor 42 queries the database 50 of gestures.
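The query step described above can be sketched as a nearest-match lookup. The sketch below is a hypothetical illustration only: the GESTURE_DB table, the match_command name, and the signature values are invented here for clarity and are not taken from the disclosure.

```python
# Hypothetical sketch: each stored gesture signature (a sampled output-signal
# waveform) maps to a command; an incoming signal is matched to the nearest
# stored signature, and the associated command is returned for execution.
GESTURE_DB = {
    "volume_up":   [0.1, 0.4, 0.8, 0.4, 0.1],
    "volume_down": [0.8, 0.4, 0.1, 0.4, 0.8],
}

def match_command(signal: list[float]) -> str:
    """Return the command whose stored signature is closest (sum of squared error)."""
    def distance(signature: list[float]) -> float:
        return sum((a - b) ** 2 for a, b in zip(signal, signature))
    return min(GESTURE_DB, key=lambda cmd: distance(GESTURE_DB[cmd]))

# A noisy upward swipe still resolves to the nearest stored signature.
assert match_command([0.1, 0.5, 0.7, 0.4, 0.2]) == "volume_up"
```

A real implementation would match against many sampled signatures per gesture, but the lookup structure is the same: signal in, nearest signature found, command out.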
Exemplary embodiments may thus be deployed throughout homes and businesses. The gesture detector 24 may be installed within cars, where ambient, dynamic lighting conditions degrade conventional optical recognition techniques. The gesture detector 24, however, may also be installed in communications devices, toys, fixtures, and any other electronic device 70. Because the gesture detector 24 does not rely on light, it is unaffected by lighting conditions and may be deployed throughout homes and businesses to detect and interpret gestures. The gesture detector 24 may even be combined with or augmented by voice recognition techniques to reduce, or even eliminate, manual activation of controls.
The voltage V between the plate 90 and the user's hand 26 may be expressed as

V = Qd/(ϵA),

where Q is the charge, ϵ is the permittivity of the air between the user's hand 26 and the plate 90, A is the area of the plate 90, and d is the separation distance. Knowing the relationship for the capacitance C as

C = Q/V,

the capacitance C may be rewritten as

C = ϵA/d.
The reader may notice that the capacitance C (illustrated as reference numeral 30) has no dependence on the voltage difference V, nor is the capacitance C dependent on the electrical charge Q (illustrated as reference numeral 96). The reader may also notice that the capacitance C is inversely proportional to the separation distance d. As the user's hand 26 approaches the plate 90, the separation distance d decreases, causing the capacitance C to increase. Conversely, as the user's hand 26 moves away from the plate 90, the separation distance d increases, causing the capacitance C to decrease.
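The inverse relationship between capacitance and separation distance can be illustrated numerically. The plate area, distances, and permittivity value below are assumed for illustration only and do not come from the disclosure.

```python
# Illustration of the parallel-plate relation C = epsilon * A / d:
# as the hand approaches the plate (d shrinks), capacitance C rises.
EPSILON_AIR = 8.854e-12  # permittivity of air, F/m (approx. vacuum permittivity)
PLATE_AREA = 0.0004      # assumed 2 cm x 2 cm sensor plate, in m^2

def capacitance(distance_m: float) -> float:
    """Capacitance in farads for the assumed plate at a given hand-to-plate distance."""
    return EPSILON_AIR * PLATE_AREA / distance_m

far = capacitance(0.10)   # hand 10 cm from the plate
near = capacitance(0.02)  # hand 2 cm from the plate
assert near > far         # smaller separation distance -> larger capacitance
```

Note that neither V nor Q appears in the computation, matching the observation above that C depends only on geometry and permittivity.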
The output signal 32 also changes. As the user's hand 26 vertically moves with respect to the plate 90, the capacitance C changes. Once the electrical charges 96 develop, the electric field E (illustrated as reference numeral 100 in the accompanying figure) produces a voltage that decays according to

V(t) = V0 e^(−t/τ).
Because the capacitance C changes as the user's hand 26 performs the gesture 28, the time constant τ = RC also changes, causing the output signal 32 to change as the gesture is performed. If the output signal 32 is analog, it may be converted by the analog-to-digital converter 40 before being interpreted by the processor 42. The processor 42 receives the output signal 32, queries the database 50 of gestures, and executes the corresponding command 36, as earlier paragraphs explained.
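The effect of the changing time constant can be sketched numerically. The resistance, initial voltage, and capacitance values below are assumed for illustration; the disclosure does not specify component values.

```python
# Sketch of the exponential decay V(t) = V0 * exp(-t / tau) with tau = R * C:
# a larger capacitance C (hand nearer the plate) lengthens tau and slows the decay,
# which is what makes the output signal distinguishable during a gesture.
import math

R = 1.0e6  # assumed sense resistance, ohms

def output_voltage(t: float, v0: float, capacitance_f: float) -> float:
    """Voltage at time t for an RC discharge from initial voltage v0."""
    tau = R * capacitance_f  # time constant shifts as the gesture changes C
    return v0 * math.exp(-t / tau)

# At the same instant, the larger capacitance has decayed less:
v_far = output_voltage(0.001, 5.0, 35e-12)    # small C, fast decay
v_near = output_voltage(0.001, 5.0, 180e-12)  # larger C, slower decay
assert v_near > v_far
```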
Baseline comparisons may then be made. As the user performs the gesture, exemplary embodiments may compare the baseline capacitance CBase to the output signal 32. That is, exemplary embodiments may compare the output signal 32 to the baseline measurements of the ambient environment. Any change may then be used to retrieve the corresponding command 36.
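The baseline comparison can be sketched as follows. The readings, threshold, and function names below are hypothetical values chosen for illustration, not values from the disclosure.

```python
# Sketch: establish a baseline capacitance from ambient readings taken with
# no hand present, then treat deviations from that baseline as the gesture signal.
def baseline(samples: list[float]) -> float:
    """Average ambient readings to estimate the baseline capacitance C_Base."""
    return sum(samples) / len(samples)

def gesture_signal(readings: list[float], c_base: float) -> list[float]:
    """Subtract the baseline so only the hand-induced change remains."""
    return [r - c_base for r in readings]

ambient = [0.50, 0.51, 0.49, 0.50]      # hypothetical idle readings
c_base = baseline(ambient)
swipe = [0.50, 0.62, 0.75, 0.60, 0.51]  # hypothetical readings during a gesture
deltas = gesture_signal(swipe, c_base)
assert max(deltas) > 0.2                # the hand produced a clear deviation
```

Subtracting the baseline makes the comparison robust to the ambient environment: only the change caused by the gesture is used to retrieve the corresponding command.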
The database 50 of gestures may also be prepopulated. As the gesture detector 24 may be adapted to any electronic device or environment, a manufacturer or retailer may preload the database 50 of gestures. Gestures may be predefined to invoke or call commands, functions, or any other action. The user may then learn the predefined gestures, such as by viewing training tutorials. The user may also download entries or updates to the database 50 of gestures. A server, accessible from the Internet, may store predefined associations that are downloaded and stored to the memory 44.
Exemplary embodiments may also be applied to jewelry and other adornment. As wearable devices become common, jewelry will evolve as a computing platform. An article of jewelry, for example, may be instrumented with the gesture detector 24, thus enabling inputs across a surface of the jewelry. Moreover, as the gesture detector 24 may be small and adhesively adhered, exemplary embodiments may be applied or retrofitted to heirloom pieces and other existing jewelry, thus transforming older adornment to modern, digital usage.
Exemplary embodiments may be physically embodied on or in a computer-readable storage medium. This computer-readable medium may include CD-ROM, DVD, tape, cassette, floppy disk, memory card, and large-capacity disks. This computer-readable medium, or media, could be distributed to end-subscribers, licensees, and assignees. These types of computer-readable media, and other types not mentioned here, are considered within the scope of the exemplary embodiments. A computer program product comprises processor-executable instructions for detecting gestures, as explained above.
While the exemplary embodiments have been described with respect to various features, aspects, and embodiments, those skilled and unskilled in the art will recognize the exemplary embodiments are not so limited. Other variations, modifications, and alternative embodiments may be made without departing from the spirit and scope of the exemplary embodiments.
Claims
1. A method comprising:
- receiving, by a gesture detector, first volumetric data comprising a first plurality of output signals generated by a three-dimensional curvilinear capacitive sensor during a first performance of a contactless gesture;
- sampling, by a processing system comprising a processor, the first plurality of output signals according to a sampling rate, resulting in a first plurality of discrete data points;
- storing, by the processing system, the first plurality of discrete data points in a database;
- displaying, by the processing system, a menu, the menu listing a plurality of different commands;
- receiving, by the processing system, via the menu, a selection of a first command of the plurality of different commands;
- based on the receiving of the selection, associating, by the processing system, the first command to the contactless gesture;
- receiving, by the gesture detector, second volumetric data comprising a second plurality of output signals generated by the three-dimensional curvilinear capacitive sensor during a second performance of the contactless gesture;
- sampling, by the processing system, the second plurality of output signals according to the sampling rate, resulting in a second plurality of discrete data points;
- comparing, by the processing system, the second plurality of discrete data points to the first plurality of discrete data points stored in the database to identify the first command of the plurality of different commands in connection with the second performance; and
- based on the comparing, executing, by the processing system, the first command of the plurality of different commands.
2. The method of claim 1, wherein the gesture detector is integrated with the processing system.
3. The method of claim 1, wherein:
- the first performance of the contactless gesture is with a user's hand; and
- the second performance of the contactless gesture is with the user's hand.
4. The method of claim 3, wherein:
- the three-dimensional curvilinear capacitive sensor comprises a plurality of curvilinearly arranged plates; and
- the plurality of curvilinearly arranged plates are disposed upon a surface of a substrate and are located between the substrate and the user's hand.
5. The method of claim 1, wherein the three-dimensional curvilinear capacitive sensor comprises a plurality of curvilinearly arranged plates for generating an electric field by the curvilinearly arranged plates.
6. The method of claim 5, wherein adjacent ones of the plurality of the curvilinearly arranged plates have a different orientation as to each other.
7. The method of claim 6, wherein the different orientation between the adjacent ones of the plurality of the curvilinearly arranged plates includes at least a height difference.
8. The method of claim 5, wherein each curvilinearly arranged plate produces one particular output signal of the first plurality of output signals.
9. The method of claim 8, wherein each output signal of the first plurality of output signals is different from one another.
10. The method of claim 1, wherein the processing system is in a vehicle.
11. The method of claim 1, wherein the processing system is one of an appliance, an audio-video component, an electronic device, an on-board diagnostic system, and an HVAC system.
12. The method of claim 1, further comprising sending the first command over a communications connection shared by the gesture detector and the processing system.
13. A device comprising:
- a processor; and
- memory storing instructions that when executed cause the processor to perform operations, the operations comprising: receiving first volumetric data comprising a first plurality of output signals generated by a three-dimensional curvilinear capacitive sensor during a first performance of a contactless gesture; sampling the first plurality of output signals, resulting in a first plurality of discrete data points; storing the first plurality of discrete data points in a database; processing a menu for display, the menu listing a plurality of different commands; receiving, via the menu, a selection of a first command of the plurality of different commands; associating, based on the receiving of the selection, the first command to the contactless gesture; receiving second volumetric data comprising a second plurality of output signals generated by the three-dimensional curvilinear capacitive sensor during a second performance of the contactless gesture; sampling the second plurality of output signals, resulting in a second plurality of discrete data points; comparing the second plurality of discrete data points to the first plurality of discrete data points stored in the database to identify the first command of the plurality of different commands in connection with the second performance; and facilitating execution, based on the comparing, of the first command.
14. The device of claim 13, wherein the device comprises a gesture detector.
15. The device of claim 14, wherein the facilitating the execution of the first command comprises facilitating execution of the first command on a processing system.
16. The device of claim 15, wherein the gesture detector is integrated with the processing system.
17. The device of claim 15, further comprising a shared communications connection between the gesture detector and the processing system for sending the first command for execution by the processing system.
18. A non-transitory computer-readable medium storing computer program instructions for gesture detection, the computer program instructions, when executed by a processor, cause the processor to perform operations comprising:
- receiving first volumetric data comprising a first plurality of output signals generated by a three-dimensional curvilinear capacitive sensor during a first performance of a contactless gesture;
- sampling the first plurality of output signals according to a sampling rate, resulting in a first plurality of data points;
- processing a menu for display, the menu listing a plurality of different commands;
- receiving, via the menu, a selection of a first command of the plurality of different commands;
- associating, based on the receiving of the selection, the first command to the contactless gesture;
- receiving second volumetric data comprising a second plurality of output signals generated by the three-dimensional curvilinear capacitive sensor during a second performance of the contactless gesture;
- sampling the second plurality of output signals according to the sampling rate, resulting in a second plurality of data points;
- comparing the second plurality of data points to the first plurality of data points to identify the first command of the plurality of different commands in connection with the second performance; and
- facilitating execution, based on the comparing, of the first command of the plurality of different commands.
19. The non-transitory computer-readable medium of claim 18, wherein the first command is executed by a processing system.
20. The non-transitory computer-readable medium of claim 19, wherein:
- the processor is part of a gesture detector;
- the first command is executed by a processing system;
- the operations further comprise sending the first command over a communications connection shared by the processor and the processing system, for execution by the processing system;
- the second performance is approximately one second in duration; and
- the sampling rate is approximately two-tenths of a second.
Type: Application
Filed: Jun 3, 2022
Publication Date: Sep 22, 2022
Applicant: AT&T Intellectual Property I, L.P. (Atlanta, GA)
Inventor: Kevin Li (Chatham, NJ)
Application Number: 17/832,147