GESTURE RECOGNITION WITH PRINCIPAL COMPONENT ANALYSIS

- INTERSIL AMERICAS INC.

A system and method for identifying a position of a moving object, utilizing sensors arranged in any arbitrary configuration, is provided. A pre-processing method is applied to permit implementation on a low power computing device, such as a microcontroller. The preprocessing creates a set of training data corresponding to different gestures, based on different positions of moving objects. Accordingly, utilizing the training data, different types of gestures can be classified by comparing a sensed signal to the set of training gestures.

Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. Provisional Patent Application Ser. No. 61/298,895, filed on Jan. 27, 2010, and entitled “ARCHITECTURE FOR A REFLECTION BASED LONG RANGE PROXIMITY AND MOTION DETECTOR HAVING AN INTEGRATED AMBIENT LIGHT SENSOR,” the entirety of which is incorporated by reference herein. Further, this application is related to co-pending U.S. patent application Ser. No. 12/979,726, filed on Dec. 28, 2010 (Attorney docket number SE-2773/INTEP105USA), entitled “DISTANCE SENSING BY IQ DOMAIN DIFFERENTIATION OF TIME OF FLIGHT (TOF) MEASUREMENTS,” co-pending U.S. patent application Ser. No. ______, filed on ______ (Attorney docket number SE-2874-AN/INTEP105USB), entitled “DIRECT CURRENT (DC) CORRECTION CIRCUIT FOR A TIME OF FLIGHT (TOF) PHOTODIODE FRONT END”, co-pending U.S. patent application Ser. No. ______, filed on ______ (Attorney docket number SE-2785-AN/INTEP105USC), entitled “PHOTODIODE FRONT END WITH IMPROVED POWER SUPPLY REJECTION RATIO (PSRR),” co-pending U.S. patent application Ser. No. ______, filed on ______ (Attorney docket number SE-2877-AN/INTEP105USD), entitled “AUTOMATIC CALIBRATION TECHNIQUE FOR TIME OF FLIGHT (TOF) TRANSCEIVERS,” and co-pending U.S. patent application Ser. No. ______, filed on ______ (Attorney docket number SE-2877-AN/INTEP105USE), entitled “SERIAL-CHAINING PROXIMITY SENSORS FOR GESTURE RECOGNITION.” The entireties of each of the foregoing applications are incorporated herein by reference.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates an exemplary system that detects motion with an arbitrary spatial arrangement of optical sensors.

FIG. 2 illustrates an exemplary system that utilizes preprocessing to detect motion with an arbitrary spatial arrangement of optical sensors.

FIG. 3 illustrates an exemplary methodology for developing a motion recognition system.

FIG. 4 illustrates an exemplary methodology for preprocessing data from a motion recognition system.

FIG. 5 illustrates an exemplary methodology for determining the effectiveness of a spatial configuration of one or more sensors in a motion recognition system.

FIG. 6 illustrates an exemplary functional block diagram for the architecture of the subject disclosure.

DETAILED DESCRIPTION

A category of monolithic devices is emerging that allows electronic products to sense their environment. These include diverse devices such as accelerometers, monolithic gyroscopes, light sensors, and imagers. In particular, light sensors are among the simplest and cheapest of such devices, allowing their inclusion in multitudes of consumer products, for example, nightlights, cameras, cell phones, and laptops. Typically, light sensors can be employed in a wide variety of applications related to proximity sensing, such as, but not limited to, detecting the presence and/or distance of a user to the product for the purpose of controlling power, displays, or other interface options.

Infrared (IR) detectors utilize IR light to detect objects within the sense area of the IR sensor. IR light is transmitted by an IR light emitting diode (LED) emitter, reflects off objects in the surrounding area, and the reflections are sensed by a detector. The detector can be a diode, e.g., a PIN diode, and/or any other type of apparatus that converts IR light into an electric signal. The sensed signal is analyzed to determine whether an object is present in the sense area and/or to detect motion within the sense area. To make these determinations, conventional systems employ an algorithm that depends on a specific spatial arrangement of one or more detectors.

The systems and methods disclosed herein allow an arbitrary spatial arrangement of the detectors. Applying principal component analysis (PCA), the system can be trained with any arbitrary spatial arrangement of detectors. For example, the system can be trained by demonstrating distinct motions within the sense area. In contrast, algorithms that depend on a specific spatial arrangement of the detectors are handicapped by the specific placement requirement. It can be appreciated that although the subject specification is described with respect to IR light, the systems and methods disclosed herein can utilize most any wavelength. As an example, the subject system and/or methodology can be employed for acoustical proximity detection and/or ultrasonic range finding applications.

The subject matter is described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the subject disclosure. It may be evident, however, that the subject matter may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing the subject disclosure. Many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter.

Moreover, the word “exemplary” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the word “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. In addition, the word “coupled” is used herein to mean direct or indirect electrical or mechanical coupling. Further, the terms “sense area,” “vision field,” “optical field,” and similar terminology are utilized interchangeably in the subject application, unless context warrants particular distinction(s) among the terms. Moreover, the terms “sensor,” “detector,” and similar terminology are utilized interchangeably in the subject application, unless context warrants particular distinction(s) among the terms.

Referring initially to FIG. 1, there illustrated is an example system 100 that employs optical sensors 102 to sense distance, motion and/or ambient light, according to an aspect of the subject specification. In one aspect, the optical sensors 102 can include one or more infrared sensors (e.g., photodiodes, such as tuned positive-intrinsic-negative (PIN) diodes) or any sensors that can convert a light signal into an electrical signal. It can be appreciated that the optical sensors 102 are not limited to utilizing IR light, and, instead, can be any sensor, detector, or combination of sensors and detectors that can utilize light signals of most any wavelength.

The optical sensors 102 can be arranged in any arbitrary spatial configuration. Although three optical sensors 102 are shown for simplicity, a system can employ more optical sensors 102 or fewer optical sensors 102 in any spatial arrangement. It can be appreciated that the optical sensors 102 can be an array of optical sensors 102.

In general, system 100 can be employed in most any light sensing/optical proximity application. For example, a laptop computer can detect a gesture (e.g., a tap or a swipe) on a track pad utilizing optical sensors arbitrarily arranged on the track pad. In another example, a cellular phone or a personal digital assistant (PDA) can detect a gesture (e.g., a tap or a swipe) on a screen utilizing optical sensors arbitrarily arranged on the screen.

The optical sensors 102 can be coupled to a signal processing circuit 104 to transmit sensed outputs 106, 108, 110. In one aspect, the sensed outputs are electrical signals corresponding to sensed light. These electrical signals can vary over time.

The signal processing circuit 104 can analyze the sensed outputs 106, 108, 110 and determine whether an object 112 is present in the sense area and/or detect and identify motion(s) made by the object 112 within the sense area. The object 112 can be most any entity of interest, such as, but not limited to, a human entity, an automated component, a device, an item, an animal, etc. Sensor outputs (in arbitrary units) can be captured over time. The following table shows exemplary sensor output values for three sensors captured at four different points in time.

            t = 0    t = 1    t = 2    t = 3
Sensor 1      200     2500      500      100
Sensor 2        0        0      100      100
Sensor 3      100      200      500     2200

Peaks can then be recognized from the captured sensor data. A peak refers to a sensor output value that is larger than the other sensor output values in a set of captured sensor output values. For example, referring to the table above, Sensor 1 registered a peak magnitude at t=1 and Sensor 3 registered a peak magnitude at t=3. When viewed in this manner, the sensor outputs effectively form a two-dimensional image, with the peak values marking the motion trajectory. The system can be programmed such that a sensor output value must be above a certain minimum threshold for the system to detect a peak. The threshold can be set to avoid confusing noise with signal detection. For example, in the table above, Sensors 1 and 3 each record a peak above a threshold, while Sensor 2 does not register a peak because no value it records is above the threshold.
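
As an illustration of the thresholded peak detection just described, the following is a minimal Python sketch (the threshold value and the data layout are assumptions for illustration, not part of the disclosure) that recovers each sensor's peak from the exemplary table:

```python
# Minimal sketch of thresholded peak detection. The samples reproduce the
# exemplary table above; the threshold of 150 is an illustrative assumption
# chosen to reject Sensor 2's low-level readings as noise.

samples = {
    "Sensor 1": [200, 2500, 500, 100],   # values at t = 0, 1, 2, 3
    "Sensor 2": [0, 0, 100, 100],
    "Sensor 3": [100, 200, 500, 2200],
}
THRESHOLD = 150  # minimum magnitude treated as a real peak, not noise

def detect_peak(values, threshold):
    """Return (time index, value) of the largest sample above threshold,
    or None if every sample is below the threshold."""
    t, value = max(enumerate(values), key=lambda tv: tv[1])
    return (t, value) if value > threshold else None

for name, values in samples.items():
    peak = detect_peak(values, THRESHOLD)
    if peak is None:
        print(f"{name}: no peak above threshold")
    else:
        print(f"{name}: peak of {peak[1]} at t = {peak[0]}")
```

Run on the table above, this reports a peak of 2500 at t=1 for Sensor 1, a peak of 2200 at t=3 for Sensor 3, and no peak for Sensor 2.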

The system 100 employs a signal processing circuit 104 that simplifies the process of collecting the sensor data 106, 108, 110. According to one aspect, the signal processing circuit 104 can be embedded on a single integrated circuit (IC) chip (e.g., a microcontroller). However, the signal processing circuit 104 need not be embedded on a single IC chip, and, instead, components of the signal processing circuit 104 can be distributed among several IC chips.

Signal processing circuit 104 can provide automatic, time-correlated peak detection for the optical sensors 102. For example, the signal processing circuit 104 can automatically record the time and value of peaks in magnitude and store the values (e.g., in a database) in a first in first out (FIFO) configuration. One arrangement of optical sensors 102 that can facilitate automated, time-correlated peak detection is a serial chain (e.g., daisy chain) arrangement. When an appropriate event occurs (e.g., detection of distance, motion and/or ambient light), the optical sensors 102 can wake up or interrupt the signal processing circuit 104. The signal processing circuit 104 can then read out the time and value of peaks from all of the optical sensors 102 that recorded a peak and reconstruct an image of the event (e.g., a motion trajectory), with the recorded peaks playing the role of the filled-in image cells described above.
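
A minimal sketch of such a FIFO store of time-correlated peak records follows (the record layout, buffer depth, and class name are illustrative assumptions, not part of the disclosure):

```python
from collections import deque

# Minimal sketch of a first-in-first-out (FIFO) store of time-correlated
# peak records, as the signal processing circuit might keep them.

class PeakFifo:
    def __init__(self, depth=32):
        # Bounded deque: when full, the oldest record is dropped first.
        self._buf = deque(maxlen=depth)

    def record(self, sensor_id, time, value):
        """Store one (sensor, time, value) peak record in arrival order."""
        self._buf.append((sensor_id, time, value))

    def read_out(self):
        """Drain all stored peaks in arrival order, e.g. after the sensors
        wake or interrupt the processor, to reconstruct the event image."""
        out = list(self._buf)
        self._buf.clear()
        return out

fifo = PeakFifo()
fifo.record("Sensor 1", 1, 2500)
fifo.record("Sensor 3", 3, 2200)
print(fifo.read_out())  # [('Sensor 1', 1, 2500), ('Sensor 3', 3, 2200)]
```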

System 100 processes a reduced quantity of data because only the peaks are recorded, thus reducing the required computational power. This allows system 100 to be implemented on a low-power IC (e.g., a microcontroller) or any other such low-power device.

Referring now to FIG. 2, there illustrated is an example system 200 that utilizes a preprocessing circuit 202 that can preprocess signals 106, 108 and 110 from optical sensors arranged in an arbitrary spatial configuration before signals 106, 108 and 110 reach the signal processing circuit 104.

In an aspect, the preprocessing circuit 202 and the signal processing circuit 104 can be embodied on a single IC chip 204. The preprocessing circuit 202 and the signal processing circuit 104 are not limited to a single IC chip, however; they can, for example, be distributed between one or more IC chips.

According to an aspect, the IC chip 204 can be a low power microcontroller. A microcontroller can have a processor core, a memory and programmable input/output (I/O) peripherals. According to an aspect, the I/O peripherals can include the optical sensors. The preprocessing circuit 202 allows the IC chip 204 to operate the signal processing circuit 104 in a low power mode. For example, the preprocessing circuit 202 takes the task of training on the sensor data away from the signal processing circuit 104, so that the signal processing circuit 104 only needs to match a sensor output to training data in order to recognize a gesture.

The preprocessing circuit 202 employs an algorithm to train the system 200 to recognize patterns of peaks in the image data. For example, the algorithm can be a training algorithm that creates a set of training images facilitating recognition of specific type(s) of motion(s) sensed in terms of distance, motion trajectory and/or ambient light.

The algorithm can be one or more of a large class of image processing algorithms to classify types of motions sensed in terms of distance, motion trajectory and/or ambient light. System 200 can be programmed to identify and classify distinct types of motions. For example, the types of motions can be gestures on a surface, for example, a screen of a cellular phone or a track pad of a laptop computer. Each time a distinct type of motion occurs, the system 200 can classify the corresponding data from the optical sensors into one of the distinct types of motion.

According to an aspect, the algorithm can be Principal Component Analysis (PCA) (also known as the “eigenface method”), a statistical method utilized to reduce the dimensionality of a data set. PCA applies an orthogonal transformation to an input dataset that includes data corresponding to the distinct motions to be recognized, and creates a set of training images. The set of training images can be formulated from principal components, which represent each motion with a smaller set of variables than the entire trajectory for the motion. The principal components can span a space of reduced dimensionality and can capture variations in the training images in an efficient manner. PCA is utilized as a pre-processing computation to reduce the training images to the smallest set of variables necessary to accurately classify new motions. With the training images, the classification problem reduces to a set of simple vector projections, a task simple enough to be carried out in basic microcontroller firmware.
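
As a concrete sketch of this pre-processing (a minimal Python/NumPy example; the flattened-image representation and the component count k are assumptions for illustration, not taken from the disclosure), the principal components can be obtained from a singular value decomposition of the mean-centered training set:

```python
import numpy as np

# Minimal sketch of PCA ("eigenface method") training, assuming each
# demonstrated gesture has been flattened into a row vector of sensor
# samples. Array shapes and the component count k are illustrative.

def train_pca(training_images, k):
    """training_images: (n_gestures, n_samples) array of flattened images.
    Returns the mean image and the top-k principal components."""
    X = np.asarray(training_images, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the mean-centered data; the rows of Vt are the orthogonal
    # principal components, ordered by decreasing captured variance.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(image, mean, components):
    """Reduce an image to its k-dimensional coordinates: simple vector
    projections, cheap enough for basic microcontroller firmware."""
    return components @ (np.asarray(image, dtype=float) - mean)
```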

Referring now to FIG. 3, there illustrated is an exemplary methodology 300 for developing a motion recognition system. The motion recognition system can include one or more sensors and/or light emitting diodes arranged in any arbitrary spatial configuration. This methodology can be utilized, for example, in connection with liquid crystal display (LCD) screens or other devices where sensor placement is not necessarily known a priori.

Methodology 300 begins at element 302 where one or more sensors can be placed on a device, like an LCD screen. Placement of the sensors need not be constrained by any specific placement restrictions. Rather, since methodology 300 does not depend on the placement of the sensors, the sensors can be arranged in any arbitrary location on the device.

At element 304, the motion recognition system can be trained to recognize one or more specific motions. According to an aspect, the one or more specific motions can be specific types of gestures. The specific types of gestures can correspond to specific actions taken by the device. For example, one type of training can be demonstrating each specific type of gesture. This allows the sensors to be placed in an arbitrary spatial configuration on the device. At element 306, the motion recognition system can utilize the training data to recognize the specific gestures without requiring significant computational power.

Referring now to FIG. 4, there illustrated is a methodology 400 for preprocessing data from a motion recognition system. The motion recognition system can include one or more sensors and/or light emitting diodes arranged in any arbitrary spatial configuration. This methodology can be utilized, for example, in connection with liquid crystal display (LCD) screens or other devices where sensor placement is not necessarily known a priori. The preprocessing can employ PCA to reduce the dimensionality of the dataset.

Methodology 400 begins at element 402 where one or more sensors can be placed on a device, like an LCD screen. Placement of the sensors need not be constrained by any specific placement restrictions. Rather, since methodology 400 does not depend on the placement of the sensors, the sensors can be arranged in any arbitrary location on the device.

At element 404, one or more specific types of motions can be made. For example, motions can include, but are not limited to, gestures (e.g., tapping and/or swiping) on or near a surface of a device. According to an aspect, the one or more specific motions can be specific types of gestures. The specific types of gestures can correspond to specific actions taken by the device. For example, each specific type of gesture that will be utilized by the device can be demonstrated.

At element 406, the motion control system can record image data corresponding to each of the one or more specific types of motions. Then, at element 408, the motion control system can employ PCA to preprocess the image data to principal components.

PCA is a statistical method that can be utilized to reduce the dimensionality of the image data. More specifically, PCA can apply an orthogonal transformation to image data, including examples of each distinct motion to be recognized, creating a set of principal components corresponding to each specific type of gesture. PCA is a procedure that can use an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of uncorrelated variables called principal components. The number of principal components is less than or equal to the number of original variables. This transformation is defined in such a way that the first principal component accounts for as much of the variability in the data as possible, and each succeeding component in turn has the highest variance possible under the constraint that it be orthogonal to the preceding components. Accordingly, the principal components can be a smaller set of variables than the image data, spanning a space of reduced dimensionality. The principal components can, therefore, capture variations in the training images in an efficient manner, reducing the training images to the smallest set of variables necessary to accurately classify new motions.

Most of the computation complexity associated with methodology 400 is due to the reduction of the training images to principal components with PCA. This computation can be performed, for example, on a computer. The resulting table of pre-computed principal components can be made small enough to be easily stored in a microcontroller program and/or firmware.
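
As an illustration of how small the pre-computed table can be made (a hedged sketch; the fixed-point scale, names, and output format are assumptions for illustration), the components could be quantized and emitted as a constant array for inclusion in a microcontroller program:

```python
import numpy as np

# Illustrative export of the pre-computed table: quantize an array (e.g.,
# the mean image or the principal components) to 16-bit fixed point and
# emit a C array a microcontroller program could store in flash memory.

def to_c_table(name, array, scale=2**12):
    """Render a float array as a static int16_t C array, scaled by 2^12."""
    q = np.round(np.asarray(array) * scale).astype(np.int16).ravel()
    body = ", ".join(str(v) for v in q)
    return f"static const int16_t {name}[{q.size}] = {{ {body} }};"

# Example: print(to_c_table("pca_mean", mean)) using the training sketch
# above; each component row can be exported the same way.
```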

At element 410, the motion recognition system can utilize the training data to recognize the specific gestures without requiring significant computational power. Since the training images are already reduced to the smallest set of variables necessary to accurately classify gestures (e.g., a low dimensional vector), the classification problem reduces to a set of simple vector projections and one Euclidean distance calculation to find the closest training image, a task simple enough to be carried out in a basic microcontroller program and/or firmware. For example, the Euclidean distance between the incoming gesture's vector and each training image's vector can be computed, and the best-matching training image can be determined as the training image with the smallest distance to the incoming gesture image.
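
A minimal sketch of this classification step follows, reusing the mean and components from the training sketch above (the projected training vectors and gesture labels are illustrative assumptions):

```python
import numpy as np

# Minimal classification sketch: project the incoming gesture image and
# return the label of the nearest pre-computed training vector.

def classify(image, mean, components, training_vectors, labels):
    """training_vectors: (n_gestures, k) array of projected training
    images; labels: corresponding gesture names. Returns the label of
    the training vector at the smallest Euclidean distance."""
    v = components @ (np.asarray(image, dtype=float) - mean)
    distances = np.linalg.norm(training_vectors - v, axis=1)
    return labels[int(np.argmin(distances))]
```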

Since data from the sensors of the motion control system has a sparse nature (e.g., most data points are zero), very few mathematical operations are required for each classification operation. Further reduction of problem complexity can be achieved by using 1 for peaks and 0 for all other data points.
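
That further simplification can be sketched as follows (a minimal example; the threshold is an assumption for illustration):

```python
import numpy as np

# Sketch of the simplification described above: mark 1 at the detected
# peak (if it clears the noise threshold) and 0 at all other data points,
# so classification arithmetic runs on a sparse binary image.

def binarize_peaks(values, threshold=150):
    out = np.zeros(len(values), dtype=np.uint8)
    t = int(np.argmax(values))
    if values[t] > threshold:
        out[t] = 1
    return out

print(binarize_peaks([200, 2500, 500, 100]))  # -> [0 1 0 0]
```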

Referring now to FIG. 5, illustrated is a methodology 500 for determining the effectiveness of a spatial configuration of one or more sensors in a motion recognition system. For example, utilizing PCA, methodology 500 can demonstrate when one or more sensors are redundant and/or when too few sensors are utilized for reliable computation. This can maximize the quality of data obtained from the smallest possible number of sensors.

Methodology 500 begins at element 502 where one or more sensors can be placed on a device. For example, the device can be an LCD screen. The motion recognition system can include one or more sensors and/or light emitting diodes arranged in any arbitrary spatial configuration on the LCD screen. Since sensor placement is not known a priori, any number of sensors can be placed on the device in any configuration. This can lead to too many or too few sensors being utilized, reducing data quality. For example, if too many sensors are utilized, one or more sensors can be redundant. If too few sensors are utilized, computation may not be reliable.

At element 504, one or more specific types of motions can be made on the device. According to an aspect, the one or more specific motions can be specific types of gestures. The specific types of gestures can correspond to specific actions taken by the device. For example, each specific type of gesture that will be utilized by the device can be demonstrated.

At element 506, the motion control system can record image data corresponding to each of the one or more specific types of motions. Then, at element 508, the motion control system can reduce the image data to a smaller set of variables. For example, the motion recognition system can employ PCA to preprocess the image data to principal components, thereby reducing the dimensionality of the image data to the smallest set of variables (principal components) necessary to accurately classify new motions.

At element 510, the motion recognition system can utilize the training data to recognize the specific gestures without requiring significant computational power. Since the training images are already reduced to the smallest set of variables necessary to accurately classify gestures, the classification problem reduces to a set of simple vector projections and one Euclidean distance calculation to find the closest training image, a task simple enough to be carried out in a basic microcontroller program and/or firmware.

At element 512, the motion recognition system can determine the effectiveness of the spatial configuration of the sensors. According to an aspect, the motion recognition system can employ PCA to quantitatively measure the effectiveness of the spatial configuration of the sensors.

The PCA can demonstrate when one or more of the sensors is redundant or when too few sensors are used for reliable classifications. This allows a maximization of the quality of data obtained from the smallest possible number of sensors. If the training images have weak principal components (e.g., if the resulting vector elements are all about the same magnitude), then the training images do not capture enough variability to achieve good classification. Such weak principal components can imply a poor spatial arrangement of sensors.
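
One way to make this check concrete (a hedged sketch; the strength heuristic below is an illustrative choice, not a metric from the disclosure) is to compare the variance captured by the leading principal component against a uniform spread:

```python
import numpy as np

# Sketch of the placement-quality check described above: if the singular
# values (and hence the component variances) are all of similar magnitude,
# no direction dominates and the arrangement captures little gesture-
# specific variability, suggesting a poor spatial arrangement of sensors.

def placement_strength(training_images):
    """Ratio of the leading component's variance share to the uniform
    share. Near 1: weak components (poor arrangement); >> 1: strong."""
    X = np.asarray(training_images, dtype=float)
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    var = s**2 / np.sum(s**2)   # fraction of variance per component
    return var[0] / var.mean()

# Candidate layouts can be compared by demonstrating the gestures on each
# arrangement and keeping the one with the strongest leading components.
```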

In another embodiment, the motion recognition system can employ PCA to determine the best configuration for the one or more sensors. The motion recognition system can test one or more potential configurations, apply PCA, and subsequently determine a most effective configuration of sensors. The most effective configuration of sensors can be utilized at element 502 to begin method 500.

In order to provide additional context for various aspects of the subject specification, FIG. 6 illustrates an exemplary functional block diagram for the architecture 600 of the subject disclosure. In one aspect, the systems (e.g., 100-200) disclosed herein can be employed in a reflection based proximity and motion detector with an integrated ambient light sensor (ALS) depicted in FIG. 6. The architecture 600 includes an LED and associated driver circuitry, a photodiode sensor, an analog front end and signal processing, data conversion circuitry, digital control and signal processing, interface circuitry and a results display. The architecture 600 adaptively optimizes sensitivity and power for a given environment. Moreover, the architecture 600 derives significant performance improvements from its novel ALS structure, and its light emitting diode (LED) driver circuitry is much more efficient than the conventional resistive drive.

According to an aspect of the subject disclosure, the architecture 600 includes a Resonant Front End 602, which includes a Trans-Impedance Resonator (TIR). In the architecture 600, the TIR 602 is used in place of the Trans-Impedance Amplifier (TIA) that is conventionally used. Although the TIR 602 plays the same role as a conventional TIA, the TIR 602 gives an order of magnitude improvement in achievable Signal-to-Noise Ratio (SNR) due to its band-pass nature (e.g., TIR 602 includes an inductor and a capacitor), which allows for an increased range of sensing. The capacitor of the TIR can include the capacitance of the photodiode that is being resonated. The band-pass nature of the TIR 602 causes the architecture 600 to operate over a narrow band of frequencies, which admits far less noise than the wide-band TIA.
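
For intuition on the band-pass center frequency set by the inductor and the photodiode capacitance, a small numeric sketch (the component values are assumptions for illustration, not taken from the disclosure):

```python
import math

# Illustrative resonant-frequency calculation for a trans-impedance
# resonator: the photodiode capacitance is resonated against an inductor,
# so gain is concentrated in a narrow band around f0 = 1/(2*pi*sqrt(L*C)).

L = 100e-6   # inductance in henries (assumed)
C = 25e-12   # photodiode capacitance in farads (assumed)

f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
print(f"Resonant frequency: {f0 / 1e6:.2f} MHz")  # ~3.18 MHz
```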

According to another aspect of the subject disclosure, the ALS 610 uses a light to frequency converter based on a relaxation oscillator instead of the conventional TIA. A relaxation oscillator is an oscillator based upon the relaxation behavior of a physical system. An exemplary implementation of the relaxation oscillator of the subject disclosure connects the inverting input of an Operational Amplifier (Op Amp) both to a fixed bias voltage via a switch and to the photodiode, with the non-inverting input connected to ground. When the switch to the fixed bias voltage is opened, the photodiode will discharge towards ground. The rate of discharge will depend on the photodiode current, which is a measure of the incident ambient light. When the photodiode is discharged to ground, the Complex Programmable Logic Device (CPLD) resets the oscillator by switching the bias voltage back in. The CPLD counts the number of cycles that the photodiode takes to discharge, and thus can estimate the ambient light intensity incident on the photodiode. The ALS 610 can be used for ambient light sensing applications and the TIR 602 can be used for proximity and motion sensing applications.
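
A minimal numeric sketch of this light-to-frequency conversion follows (all constants are assumptions for illustration; the clock rate, capacitance, and bias voltage are not taken from the disclosure):

```python
# Sketch of the relaxation-oscillator conversion described above: the
# photodiode node discharges from the bias voltage toward ground at a
# rate set by the photodiode current (I = C dV/dt), and the CPLD counts
# clock cycles during the discharge, so brighter ambient light (larger
# current) yields a faster discharge and a smaller cycle count.

def discharge_clock_cycles(photo_current_a, clock_hz=1e6,
                           node_capacitance_f=25e-12, bias_v=1.0):
    """Clock cycles counted while the node discharges from bias_v to 0."""
    discharge_time = node_capacitance_f * bias_v / photo_current_a
    return int(discharge_time * clock_hz)

print(discharge_clock_cycles(1e-9))  # dim light:    25000 cycles
print(discharge_clock_cycles(1e-6))  # bright light: 25 cycles
```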

The output of the Front End 602 is subjected to multiple stages of voltage gain 616 to maximize the SNR of the output signal. The voltage gain is adaptively set based on the magnitude of the signal received from the Front End 602, which is potentially made up of both measurable interferers, such as backscatter and crosstalk from the LED, and the desired signal to be measured. The interferers are dynamically calibrated out of the measurement to improve the sensitivity. According to another aspect of the subject disclosure, the LED drive circuitry 656 uses an inductive drive, which results in a significant efficiency improvement over the conventional resistive drive.

The architecture 600 also includes a Quad Demodulator with low pass filters (LPFs) 620, dual [I & Q] Analog to Digital Converters (ADCs) 626, Digital to Analog Converters (DACs) 630 driven by the bias voltage provided by the Automatic Gain Control module, Oscillator DACs 644 for the I and Q carriers, a Universal Serial Bus (USB) processor for the Control Interface, and the Complex Programmable Logic Device (CPLD), which includes several modules. The I and Q relate to the In-Phase and Quadrature demodulation components.

The USB processor can include one or more USB processors. For example, the pre-processor, as described above, can be associated with a first USB processor and the processor, as described above, can be associated with a second USB processor. Additionally or alternatively, the preprocessor can be associated with a dedicated part of the USB processor and/or a co-processor that can take over a portion of the load of the processor.

Quadrature amplitude modulation (QAM) is both an analog and a digital modulation scheme, in which two sinusoidal carriers, one exactly 90 degrees out of phase with respect to the other, are used to transmit data over a given physical channel. Since the orthogonal carriers occupy the same frequency band and differ by a 90 degree phase shift, each can be modulated independently, transmitted over the same frequency band, and separated by demodulation at the receiver. Thus, QAM enables data transmission at twice the rate of standard pulse amplitude modulation (PAM) without any degradation in the bit error rate (BER). In one example, a numerically controlled oscillator (NCO) can be employed to design a dual-output oscillator that accurately generates the in-phase and quadrature carriers used by a QAM modulator and/or demodulator. A filter, for example, a raised cosine finite impulse response (FIR) filter, can be utilized to filter the data streams before modulation onto the quadrature carriers.
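
A minimal sketch of such an NCO generating the two carriers follows (the sample rate and carrier frequency are assumptions for illustration):

```python
import numpy as np

# Minimal sketch of a numerically controlled oscillator (NCO) generating
# the in-phase and quadrature carriers for a QAM modulator/demodulator:
# a phase accumulator advances by a fixed increment each sample, and the
# two outputs are read 90 degrees apart.

SAMPLE_RATE = 1_000_000   # samples per second (assumed)
CARRIER_HZ = 100_000      # carrier frequency (assumed)

phase_increment = 2 * np.pi * CARRIER_HZ / SAMPLE_RATE
phase = np.cumsum(np.full(1024, phase_increment))  # phase accumulator

i_carrier = np.cos(phase)  # in-phase carrier
q_carrier = np.sin(phase)  # quadrature carrier, 90 degrees shifted
```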

The in-phase and quadrature demodulated components are created by multiplying the signal both by a carrier signal and by a signal 90 degrees out of phase with that carrier, and low pass filtering the results (620 in FIG. 6). The resultant I and Q are a baseband representation of the received signal. In one example, the phase of the derivative of the I and Q channels can be obtained, from which the distance of the target can be calculated. Further, the position of a moving object can be accurately identified based on the phase data. Typically, the resultant phase information can be used as a direct output of the system as a measure of distance/position, and/or can be used to reconstruct the static component of the signal and allow the calibration of a non-derivative TOF measurement.
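
A hedged sketch of this processing chain, continuing from the carriers generated above (the boxcar low-pass filter and tap count are illustrative choices, not the disclosure's filter):

```python
import numpy as np

# Sketch of I/Q demodulation followed by the derivative-phase step: mix
# the received signal with the quadrature carriers, low-pass filter, then
# take the phase of the derivative of the (I, Q) baseband pair, which the
# text describes as indicative of target distance.

def iq_demodulate(signal, i_carrier, q_carrier, taps=64):
    lpf = np.ones(taps) / taps                       # simple boxcar LPF
    i = np.convolve(signal * i_carrier, lpf, mode="same")
    q = np.convolve(signal * q_carrier, lpf, mode="same")
    return i, q

def derivative_phase(i, q):
    """Phase of d(I + jQ)/dt, sample by sample."""
    return np.angle(np.diff(i) + 1j * np.diff(q))
```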

The architecture 600 of the subject disclosure can be used in many applications, including computers, automotive, industrial, television displays and others. For example, the architecture 600 can be used to detect that a user has entered the room and automatically cause a laptop computer in hibernation mode to wake up and enter the active mode so that the user can use it. In another example, the architecture 600 of the subject disclosure can be used to automatically and adaptively adjust the intensity of a liquid crystal display (LCD) based on the ambient lighting conditions. According to an aspect of the subject disclosure, the architecture 600 can perform motion and proximity sensing at a range of up to 1-2 meters. According to another aspect of the subject disclosure, the architecture 600 can perform its operations using less than twenty milliwatts (mW) of power.

In one embodiment of the subject disclosure, the entire architecture 600 can be implemented in a single integrated circuit chip (IC). In another embodiment of the subject disclosure, all components of the architecture 600 can be implemented in the IC except for the two inductors for the TIR 602 and the LED driver circuitry 656 and the LED, which can be implemented outside the IC. In yet another embodiment of the subject disclosure, all components of the architecture 600 can be implemented in the IC except for the TIR 602 inductor, the LED and the inductor and the resistor for the LED driver circuitry, which can be implemented outside the IC. In still another embodiment of the subject disclosure, various components of the architecture 600 can be located inside or outside the IC.

What has been described above includes examples of the subject disclosure. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the claimed subject matter, but many further combinations and permutations of the subject disclosure are possible. Accordingly, the claimed subject matter is intended to embrace all such alterations, modifications, and variations that fall within the spirit and scope of the appended claims.

In particular and in regard to the various functions performed by the above described components, devices, circuits, systems and the like, the terms (including a reference to a “means”) used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., a functional equivalent), even though not structurally equivalent to the disclosed structure, which performs the function in the herein illustrated exemplary aspects of the claimed subject matter. In this regard, it will also be recognized that the disclosure includes a system as well as a computer-readable medium having computer-executable instructions for performing the acts and/or events of the various methods of the claimed subject matter.

The aforementioned systems/circuits/modules have been described with respect to interaction between several components. It can be appreciated that such systems/circuits/modules and components can include those components or specified sub-components, some of the specified components or sub-components, and/or additional components, and according to various permutations and combinations of the foregoing. Sub-components can also be implemented as components communicatively coupled to other components rather than included within parent components (hierarchical). Additionally, it should be noted that one or more components may be combined into a single component providing aggregate functionality or divided into several separate sub-components, and any one or more middle layers, such as a management layer, may be provided to communicatively couple to such sub-components in order to provide integrated functionality. Any components described herein may also interact with one or more other components not specifically described herein but generally known by those of skill in the art.

In addition, while a particular feature of the subject disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms “includes,” “including,” “has,” “contains,” variants thereof, and other similar words are used in either the detailed description or the claims, these terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.

Claims

1. An apparatus, comprising:

a processor including a motion detector circuit;
a preprocessor communicably coupled to the processor;
at least one detector that generates an electrical signal in response to detecting at least one predetermined motion;
the preprocessor for identifying one or more components of the electrical signal; and
a motion detector circuit for correlating the one or more components of the electrical signal with the predetermined motion for storage in a database.

2. The apparatus of claim 1, wherein the at least one detector is arranged in an arbitrary spatial configuration on the apparatus.

3. The apparatus of claim 1, wherein the at least one detector is at least one light sensor.

4. The apparatus of claim 3, wherein the at least one light sensor comprises at least one infrared (IR) sensor.

5. The apparatus of claim 1, wherein the preprocessor employs Principal Component Analysis (PCA) to identify the one or more components of the electrical signal.

6. The apparatus of claim 5, wherein the preprocessor employs the PCA to reduce the electrical signal into principal components of the electrical signal.

7. The apparatus of claim 5, wherein the processor is operating in a low power mode.

8. The apparatus of claim 1, wherein the database is used for motion recognition.

9. The apparatus of claim 8, further comprising: the motion detector circuit for retrieving the correlation information from the database.

10. The apparatus of claim 9, wherein the motion detector circuit compares an electrical signal corresponding to a detected motion to the database by utilizing at least one Euclidean distance calculation.

11. A method, comprising:

arranging one or more sensors in an arbitrary spatial configuration on a device;
sensing image data corresponding to at least one test motion on the device;
extracting test data from the image data;
correlating the test data with the test motion;
comparing data related to a motion to the test data; and
determining an effectiveness of the arbitrary spatial configuration.

12. The method of claim 11, further comprising: rearranging the one or more sensors in a second arbitrary spatial configuration on the device.

13. The method of claim 12, wherein the rearranging further comprises reducing the number of sensors.

14. The method of claim 12, wherein the rearranging further comprises increasing the number of sensors.

15. The method of claim 11, wherein the extracting further comprises applying Principal Component Analysis (PCA) to extract the test data from the image data.

16. The method of claim 15, wherein the applying further comprises reducing a dimensionality of the image data to a smaller dimensionality of the test data.

17. The method of claim 15, wherein the applying further comprises utilizing at least one computer to apply the PCA.

18. The method of claim 11, wherein the correlating further comprises calculating at least one Euclidean distance.

19. A system, comprising: at least one light emitting diode (LED) that emits a frequency modulated signal, wherein at least a portion of the frequency modulated signal reflects back from a moving object; at least one sensor that generates an electric signal based on the reflected portion of the frequency modulated signal; a memory that stores training data corresponding to at least one gesture, wherein the training data is generated from principal components of an electrical signal generated by the at least one sensor corresponding to the gesture; and a signal processing circuit that compares the electric signal based on the reflected portion of the frequency modulated signal with the training data, wherein the signal processing circuit determines if the electrical signal is indicative of the at least one gesture.

20. The system of claim 19, wherein the at least one sensor is arranged in an arbitrary spatial location with respect to the at least one LED.

Patent History
Publication number: 20110182519
Type: Application
Filed: Jan 25, 2011
Publication Date: Jul 28, 2011
Applicant: INTERSIL AMERICAS INC. (Milpitas, CA)
Inventors: Warren Craddock (San Francisco, CA), David W. Ritter (San Jose, CA), Philip Golden (Menlo Park, CA)
Application Number: 13/013,676
Classifications
Current U.S. Class: Feature Extraction (382/190); Waveform Analysis (702/66); Machine Learning (706/12)
International Classification: G06K 9/46 (20060101); G06F 19/00 (20110101); G06F 15/18 (20060101);