System and method for personalized preference optimization
A system and method is provided for using biometric data from an individual to determine at least one emotion, mood, physical state, or mental state (“state”) of the individual, which is then used, either alone or together with other data, to provide the individual with certain web-based data. In one embodiment of the present invention, a Web host is in communication with at least one network device, where each network device is operated by an individual and is configured to communicate biometric data of the individual to the Web host. The Web host is then configured to use the biometric data to determine at least one state of the individual. The determined state, either alone or together with other data (e.g., interest data), is then used to provide the individual with certain content (e.g., web-based data) or to perform a particular action.
This application claims priority pursuant to 35 U.S.C. § 119(e) to U.S. Provisional Application No. 62/214,496, filed Sep. 4, 2015, which application is specifically incorporated herein, in its entirety, by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to use of biometric data from an individual to determine at least one emotional state, mood, physical state, or mental state (“state”) of the individual, either (i) at a particular time or (ii) in response to at least one thing in a proximity of the individual at a time that the biometric data is being collected, and using the at least one state, either by itself or together with other data (e.g., data related to the at least one thing, interest data, at least one request (e.g., question, command, etc.), etc.) to provide a particular response (e.g., provide certain web-based data to the individual, perform a particular action, etc.).
2. Description of Related Art
Recently, devices have been developed that are capable of measuring, sensing, or estimating at least one metric related to a human characteristic, commonly referred to as biometric data. For example, devices that resemble watches have been developed that are capable of measuring an individual's heart rate or pulse and using that data, together with other information (e.g., the individual's age, weight, etc.), to calculate a resultant, such as the total calories burned by the individual in a given day. Similar devices have been developed for measuring, sensing, or estimating other metrics, such as blood pressure, breathing patterns, and blood-alcohol level, or for identifying or recognizing individuals in security applications, among others, by recording and analyzing an individual's unique biometric or physiological characteristics with devices such as iris scanners or microphones with voice-pattern recognition. These devices are generically referred to as biometric devices.
While the types of biometric devices continue to grow, the way in which biometric data is used remains relatively stagnant. For example, heart rate data is typically used to give an individual information on their pulse and calories burned. By way of another example, eye movement data can be used to determine whether and to what extent the individual is under the influence of alcohol. By way of yet another example, an individual's breathing pattern may be monitored by a doctor, nurse, or medical technician to determine whether the individual suffers from sleep apnea.
While biometric data is useful in and of itself, such data may also indicate how the individual is feeling (e.g., at least one emotional state, mood, physical state, or mental state) at a particular time or in response to the individual being in the presence of at least one thing (e.g., a person, a place, textual content (or words included therein or a subject matter thereof), video content (or a subject matter thereof), audio content (or words included therein or a subject matter thereof), etc.). Thus, it would be advantageous, and a need exists, for a system and method that uses the determined state (e.g., emotional state, mood, physical state, or mental state), either alone or together with other information (e.g., at least one thing, interest data, at least one request (e.g., question, command, etc.), etc.), to produce a certain result, such as providing the individual with certain web-based content (e.g., a certain web page, a certain advertisement, etc.) and/or performing at least one action. While it may not be reasonable for content creators to understand and target a particular message to every known biometric state, human emotions and moods provide a specific context for targeting messages that content creators can easily understand.
SUMMARY OF THE INVENTION
The present invention is directed toward personalization preference optimization, or to the use of biometric data from an individual to determine at least one emotional state, mood, physical state, or mental state (“state”) of the individual, which is then used, either alone or together with other data (e.g., at least one thing in a proximity of the individual at a time that the individual is experiencing the emotion, interest data from a source of web-based data (e.g., bid data, etc.), etc.) to provide the individual with certain web-based data or to perform a particular action.
Preferred embodiments of the present invention operate in accordance with a Web host in communication with a plurality of content providers (i.e., sources) and at least one network device via a wide area network (WAN), wherein the network device is operated by an individual and is configured to communicate biometric data of the individual to the Web host. The content providers provide the Web host with content, such as websites, web pages, image data, video data, audio data, advertisements, etc. The Web host is then configured to receive biometric data from the network device, where the biometric data is acquired from and/or associated with an individual that is operating the network device. An application is then used to determine at least one emotion, mood, physical state, or mental state from the received biometric data. This is done using known algorithms and/or correlations between biometric data and various states, as stored in the memory device.
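For illustration only, the following Python sketch shows one simple way stored correlations might be applied to infer candidate states from received biometric data; the rule names, thresholds, and confidence measure are hypothetical assumptions and are not part of the disclosed algorithms.

```python
# Illustrative only: hypothetical correlation rules mapping biometric
# features to candidate states; thresholds are invented for the example.
STATE_CORRELATIONS = {
    "surprise": {"eye_dilation_delta": lambda d: d > 0.3,
                 "heart_rate_delta": lambda d: d > 20},
    "anger": {"heart_rate": lambda v: v > 100,
              "speech_pitch_delta": lambda d: d > 15},
}

def infer_states(biometrics):
    """Return candidate states, ranked by the fraction of rules satisfied."""
    matches = []
    for state, rules in STATE_CORRELATIONS.items():
        hits = [f for f, rule in rules.items()
                if f in biometrics and rule(biometrics[f])]
        if hits:
            matches.append((state, len(hits) / len(rules)))  # crude confidence
    return sorted(matches, key=lambda m: -m[1])

print(infer_states({"heart_rate": 112, "speech_pitch_delta": 18.0}))
# -> [('anger', 1.0)]
```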
In one embodiment of the present invention, content providers may express interest in providing the web-based data to an individual in a particular emotional state. In another embodiment of the present invention, content providers may express interest in providing the web-based data to an individual or other concerned party (such as friends, an employer, a care provider, etc.) that experienced a particular emotion in response to a thing (e.g., a person, a place, a subject matter of textual content, a subject matter of video content, a subject matter of audio content, etc.). The interest may be a simple “Yes” or “No,” or may be more complex, like interest on a scale of 1-10, an amount the content owner is willing to pay per thousand impressions (CPM), or an amount the content owner is willing to pay per click (CPC).
The interest data, alone or in conjunction with other data (e.g., randomness, demographics, etc.), may be used by the application to determine content data (e.g., an advertisement, etc.) that should be provided to the individual. For example, if the interest data includes different bids for a particular emotion or an emotion-thing relationship, the application may provide the advertisement with the highest bid to the individual that experienced the emotion. In other embodiments, other data is taken into consideration in providing content to the individual. In these embodiments, at least interest data is taken into account in selecting the content that is to be provided to the individual.
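The interest data and the highest-bid selection described in the two preceding paragraphs might be represented as in the following minimal sketch; the field names and bid amounts are hypothetical illustrations, not the actual data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interest:
    provider: str
    content_id: str
    emotion: str                  # targeted state, e.g. "joy"
    thing: Optional[str] = None   # optional emotion-thing relationship
    bid_cpm: float = 0.0          # illustrative bid per thousand impressions

def select_content(emotion, thing, interests):
    """Pick the matching interest record with the highest bid."""
    candidates = [i for i in interests if i.emotion == emotion
                  and (i.thing is None or i.thing == thing)]
    return max(candidates, key=lambda i: i.bid_cpm, default=None)

bids = [Interest("jeweler", "ad-1", "joy", "jewelry shop", bid_cpm=4.50),
        Interest("spa", "ad-2", "joy", bid_cpm=2.25)]
print(select_content("joy", "jewelry shop", bids).content_id)  # -> ad-1
```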
In one method of the present invention, biometric data is received from an individual and used to determine a corresponding emotion of the individual, such as happiness, anger, surprise, sadness, disgust, or fear. It is to be understood that emotional categorization is hierarchical and that such a method may allow targeting more specific emotions such as ecstasy, amusement, or relief, which are all subsets of the emotion of joy. A determination is made as to whether the emotion is the individual's current state, or whether it is based on the individual's response to a thing (e.g., a person, place, information displayed to the individual, etc.). If the emotion is the individual's current state, then content is selected based on at least the individual's current emotional state and interest data. If, however, the emotion is the individual's response to a thing, then content is selected based on at least the individual's emotional response to the thing (or subject matter thereof) and interest data. The selected content is then provided to the individual, or network device operated by the individual.
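A minimal sketch of this branching logic appears below; the two selection helpers are hypothetical stand-ins for the interest-matching described above, used only to make the current-state versus response-to-a-thing branch concrete.

```python
from typing import Optional

def select_by_state(emotion):
    # Stand-in for interest-based selection keyed on a current state.
    return f"content for a currently {emotion} individual"

def select_by_response(emotion, thing):
    # Stand-in for interest-based selection keyed on an emotion-thing pair.
    return f"content for {emotion} felt in response to {thing}"

def respond(emotion: str, thing: Optional[str] = None) -> str:
    # Branch: is the emotion the current state, or a response to a thing?
    if thing is None:
        return select_by_state(emotion)
    return select_by_response(emotion, thing)

print(respond("surprise"))                    # current state
print(respond("joy", thing="jewelry shop"))   # response to a thing
```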
Emotion, mood, physical, or mental state of an individual can also be taken into consideration when performing a particular action or carrying out a particular request (e.g., question, command, etc.). In other words, prior to performing a particular action (e.g., under the direction of an individual, etc.), a network-connected or network-aware system or device may take into consideration an emotion, mood, physical, or mental state of the individual. For example, a command or instruction provided by the individual, either alone or together with other biometric data related to or from the individual, may be analyzed to determine the individual's current mood, emotional, physical, or mental state. The network-connected or network-aware system or device may then take the individual's state into consideration when carrying out the command or instruction. Depending on the individual's state, the system or device may warn the individual before performing the requested action, or may perform another action, either in addition to or instead of the requested action. For example, if it is determined that a driver of a vehicle is angry or intoxicated, the vehicle may provide the driver with a warning before starting the engine, may limit maximum speed, or may prevent the driver from operating the vehicle (e.g., switch to autonomous mode, etc.).
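Such state-gated handling of a request might resemble the following sketch; the state labels and the three possible responses are illustrative assumptions drawn from the vehicle example above, not a definitive implementation.

```python
def handle_start_request(driver_state: str) -> str:
    """Gate a 'start engine' request on the driver's determined state."""
    if driver_state == "intoxicated":
        return "refuse request; switch to autonomous mode"   # instead of request
    if driver_state in ("angry", "agitated"):
        return "warn driver, then start with a speed limit"  # in addition to it
    return "start engine"                                    # state is consistent

print(handle_start_request("angry"))
```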
A more complete understanding of a system and method for using biometric data to determine at least one emotional state, mood, physical state, or mental state (“state”) of an individual, wherein at least said state is used to provide or perform a particular result, will be afforded to those skilled in the art, as well as a realization of additional advantages and objects thereof, by a consideration of the following detailed description of the preferred embodiment. Reference will be made to the appended sheets of drawings, which will first be described briefly.
The present invention is described as personalization preference optimization, or using at least one emotional state, mood, physical state, or mental state (“state”) of an individual (e.g., determined using biometric data from the individual, etc.) to determine a response, which may include web-based data that is provided to the individual as a result of the at least one state, either alone or together with other data (e.g., at least one thing (or data related thereto) in a proximity of the individual at a time that the individual is experiencing the at least one emotion, etc.).
With reference to the appended drawings, preferred embodiments operate in accordance with a system that includes a Web host 102 in communication with at least one network device 106 via a wide area network.
The Web host 102 is then configured to receive biometric data from the network device 106. As discussed above, the biometric data is preferably related to (i.e., acquired from) an individual who is operating the network device 106, and may be received using at least one biometric sensor 108, such as an external heart rate sensor, etc. As discussed above, the present invention is not limited to the biometric sensor 108 depicted in the drawings.
It should be appreciated that the present invention is not limited to any particular type of biometric data, and may include, for example, heart rate, blood pressure, breathing rate, temperature, eye dilation, eye movement, facial expressions, speech pitch, auditory changes, body movement, posture, blood hormonal levels, urine chemical concentrations, breath chemical composition, saliva chemical composition, and/or any other types of measurable physical or biological characteristics of the individual. The biometric data may be a particular value (e.g., a particular heart rate, etc.) or a change in value (e.g., a change in heart rate), and may be related to more than one characteristic (e.g., heart rate and breathing rate).
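One way such readings might be represented, accommodating both particular values and changes in value across more than one characteristic, is sketched below; all field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    characteristic: str             # e.g. "heart_rate", "breathing_rate"
    value: Optional[float] = None   # a particular value, e.g. 96 bpm
    delta: Optional[float] = None   # a change in value, e.g. +22 bpm

# A single sample may relate to more than one characteristic.
sample = [Reading("heart_rate", value=96.0, delta=22.0),
          Reading("breathing_rate", value=18.0)]
```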
It should also be appreciated that while best results come from direct measurement of known individuals, the same methods of correlation can be applied to general categories of people. An example is that a facial recognition system may know that 90% of the people at a particular location, such as a hospital, are fearful and that an individual is known to be at that location. Even if biometric data of that individual is not shared with the system, the correlation may be applied, preserving privacy and still allowing for statistically significant targeting. Another example would be a bar that had urine chemical analyzers integrated into the bathrooms, providing general information about people at the bar. This data could then be coordinated with time and location back to a group of people and provide significant correlations for targeting messages to an individual (e.g., an individual who was at the bar during that time).
Information that correlates different biometric data to different emotions or the like can come from different sources. For example, the information could be based on laboratory results, self-reporting trials, and secondary knowledge of emotions (e.g., the individual's use of emoticons and/or words in their communications). Because some information is more reliable than other information, certain information may be weighted more heavily than other information. For example, in certain embodiments, clinical data is weighted more heavily than self-reported data. In other embodiments, self-reported data is weighted more heavily than clinical data. Laboratory (or learned) results may include data from artificial neural networks, C4.5, classification and/or regression trees, decision trees, deep learning, dimensionality reduction, elastic nets, ensemble learning, expectation maximization, k-means, k-nearest neighbor, kernel density estimation, kernel principal components analysis, linear regression, logistic regression, matrix factorization, naïve Bayes, nearest-neighbor techniques, partial least squares regression, random forest, ridge regression, support vector machines, multiple regression, and/or all other learning techniques generally known to those skilled in the art.
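Such source weighting might be combined as in the following sketch; the weights shown are illustrative assumptions, and, as the paragraph above notes, which source is weighted more heavily is an embodiment choice.

```python
# Hypothetical per-source weights; a real system would tune these.
SOURCE_WEIGHTS = {"clinical": 0.6, "self_reported": 0.3, "secondary": 0.1}

def fused_score(scores_by_source):
    """Weighted average of per-source scores for one candidate state."""
    total = sum(SOURCE_WEIGHTS[s] * v for s, v in scores_by_source.items())
    weight = sum(SOURCE_WEIGHTS[s] for s in scores_by_source)
    return total / weight if weight else 0.0

print(round(fused_score({"clinical": 0.9, "self_reported": 0.4}), 2))  # 0.73
```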
Self-reported data may include data where an individual identifies their current state, allowing biometric data to be customized for that individual. For example, computational linguistics could be used to identify not only what an individual is saying but how they are saying it. In other words, the present invention could be used to analyze and chart speech patterns associated with an individual (e.g., allowing the invention to determine who is speaking) and speech patterns associated with how the individual is feeling. For example, in response to “how are you feeling today,” the user may state “right now I am happy,” or “right now I am sad.” Computational linguistics could be used to chart differences in the individual's voice depending on the individual's current emotional state, mood, physical state, or mental state. Because this data may vary from individual to individual, it is a form of self-reported data, and is referred to herein as personalized artificial intelligence. The accuracy of such data, learned about the individual's state through analysis of the individual's voice (and then through comparison both to the system's historical knowledge base of states of the individual acquired and stored over time and to a potential wider database of other users' states as defined by analysis of their voice), can be further corroborated and/or improved by cross-referencing the individual's self-reported data with other biometric data, such as heart rate data, etc., when a particular state is self-reported, detected, and recorded by the system onto its state profile database.
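A minimal sketch of this calibration idea follows; it uses two hypothetical voice features and a nearest-centroid comparison as a stand-in for the computational-linguistics analysis described above.

```python
import math
from collections import defaultdict

calibration = defaultdict(list)   # state -> [(pitch_hz, words_per_sec), ...]

def self_report(state, pitch, rate):
    """Record voice features alongside a self-reported state."""
    calibration[state].append((pitch, rate))

def classify(pitch, rate):
    """Assign the state whose stored samples lie closest on average."""
    def centroid(samples):
        return tuple(sum(dim) / len(samples) for dim in zip(*samples))
    return min(calibration,
               key=lambda s: math.dist((pitch, rate), centroid(calibration[s])))

self_report("happy", pitch=220.0, rate=3.2)   # "right now I am happy"
self_report("sad", pitch=180.0, rate=2.1)     # "right now I am sad"
print(classify(210.0, 3.0))                   # -> happy
```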
The collected data, which is essentially a speech/mood profile for the individual (a form of ID which is essentially the individual's unique state profile), can be used by the system that gathered the biometric data or shared with other systems (e.g., the individual's smartphone, the individual's automobile, a voice or otherwise biometrically-enabled device or appliance (including Internet of Things (IOT) devices or IOT system control devices), Internet or “cloud” storage, or any other voice or otherwise biometrically-enabled computing or robotic device or computer operating system with the capability of interaction with the individual, including but not limited to devices which operate using voice interface systems such as Apple's Siri, Google Assistant, Microsoft Cortana, Amazon's Alexa, and their successor systems). Because the shared information is unique to an individual, and can be used to identify a current state of the individual, it is referred to herein as personalized artificial intelligence ID, or “PAIID.” In one embodiment of the present invention, the self-reported data can be thought of as calibration data, or data that can be used to check, adjust, or correlate certain speech patterns of an individual with at least one state (e.g., at least one emotion, at least one mood, at least one physical state, or at least one mental state). The knowledge and predictive nature inherent in the PAIID will be continuously improved through the application of deep learning methodology with data labelling and regression as well as other techniques apparent to those skilled in the art.
With respect to computational linguistics, it should be appreciated that the present invention goes beyond using simple voice analysis to identify a specific individual or what the individual is saying. Instead, the present invention can use computational linguistics to analyze how the individual is audibly expressing himself/herself to detect and determine at least one state, and use this determination as an element in providing content to the user or in performing at least one action (e.g., an action requested by the user, etc.).
It should be appreciated that the present invention is not limited to using a single physical or biological feature (e.g., one set of biometric data) to determine the individual's state. Thus, for example, eye dilation, facial expressions, and heart rate could be used together to determine that the individual is surprised. It should also be appreciated that an individual may experience more than one state at a time, and that the received biometric data could be used to identify more than one state; a system could use its analysis of the individual's state or combination of states to assist it in deciding how best to respond, for example, to a user request or a user instruction, or indeed whether to do so at all. It should further be appreciated that the present invention is not limited to the six emotions listed above.
Notwithstanding the preferred embodiments, the present invention is not limited to the use of biometric data (e.g., gathered using sensors, microphones, and/or cameras) solely to determine an individual's current emotional state or mood. For example, an individual's speech (either alone or in combination with other biometric data, such as the individual's blood pressure, heart rate, etc.) could be used to determine the individual's current physical and/or mental health. Examples of physical health include how an individual feels, such as healthy, good, poor, tired, exhausted, sore, achy, and sick (including symptoms thereof, such as fever, headache, sore throat, congested, etc.), and examples of mental health include mental states, such as clear-headed, tired, confused, dizzy, lethargic, disoriented, and intoxicated. By way of example, computational linguistics could be used to correlate speech patterns to at least one physical and/or mental state. This can be done using either self-reported data (e.g., analyzing an individual's speech when the individual states that they are feeling fine, under the weather, confused, etc.), general data that links such biometric data to physical and/or mental state (e.g., data that correlates speech patterns (in general) to at least one physical and/or mental state), or a combination thereof. Such a system could be used, for example, in a hospital to determine a patient's current physical and/or mental state, and provide additional information outside the standard physiological or biometric markers currently utilized in patient or hospital care. If the physical and/or mental state, as determined through the patient making a request or statement or through a response to a question generated by the system, is above or below normal (N), which may include a certain tolerance (T) in either direction (e.g., N +/− T), a nurse or other medical staff member may be notified. This would have benefits such as providing an additional level of patient observation automation or providing early warning alerts or reassurance about the patient through system analysis of their state.
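The N +/− T check described above might be implemented as in this sketch, where the notification hook is a hypothetical stand-in for a nurse-station alert and the numeric values are purely illustrative.

```python
def notify_staff(message):
    print("ALERT:", message)   # stand-in for a nurse-station notification

def check_state(score, normal, tolerance):
    """Notify staff when a patient's state score falls outside N +/- T."""
    if not (normal - tolerance <= score <= normal + tolerance):
        notify_staff(f"state score {score} outside {normal} +/- {tolerance}")

check_state(score=0.35, normal=0.8, tolerance=0.2)   # triggers the alert
```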
In one embodiment of the present invention, a source of web-based data (e.g., content provider) may express interest in providing the web-based data to an individual in a particular emotional state.
Another embodiment of the invention may involve a system integrated with at least one assistance system, such as voice controls or biometric-security systems, where the emotionally selected messages are primarily warnings or safety suggestions, and are only advertisements in specific relevant situations (discussed in more detail below). An example would be a user who is using a speech recognition system to receive driving directions where the user's pulse and voice data indicate anger. In this case, the invention may tailor results toward nearby calming places and may even deliver a mild warning that accidents are more common for agitated drivers. This is an example where the primary purpose of the use is not the detection of emotion, but the emotion data can be gleaned from such systems and used to target messages to the individual, contacts, care-providers, employers, or even other computer systems that subscribe to emotional content data. An alternate example would be a security system that uses retinal scanning to identify pulse and blood pressure. If the biometric data correlates to sadness, the system could target the individual with uplifting or positive messages to their connected communication device or even alert a care-provider. In other instances, for example with a vehicle equipped with an autonomous driving system, based on the system's analysis of the biometric feedback of the individual, the driving system could advise on exercising caution or taking other action in the interests of the driver and others (e.g., passengers, drivers of other vehicles, etc.).
It should be noted that in some use cases of this invention the individual's private data is provided to the system with the user's consent, but in many cases the emotional response could be associated with a time of day, a place, or a given thing (e.g., a jewelry shop, etc.), so personally identifying information (PII) does not need to be shared with the message provider. In the example of a jewelry shop, the system simply targets individuals and their friends with strong joy correlations. While in certain embodiments individuals may be offered the opportunity to share their PII with message providers, the system can function without this level of information.
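A minimal sketch of such PII-free aggregation follows; responses are keyed by place, hour, and emotion rather than by identity, with hypothetical names and a purely illustrative threshold.

```python
from collections import Counter

responses = Counter()   # (place, hour, emotion) -> count; no identities kept

def record(place, hour, emotion):
    responses[(place, hour, emotion)] += 1

def strong_correlation(place, hour, emotion, threshold=10):
    """True when enough anonymous responses accumulate for targeting."""
    return responses[(place, hour, emotion)] >= threshold

record("jewelry shop", 14, "joy")
print(strong_correlation("jewelry shop", 14, "joy", threshold=1))   # True
```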
The interest data, and perhaps other data (e.g., randomness, demographics, etc.), may be used by the application to determine the content data that should be provided to the individual.
It should be appreciated that the “response” to an individual in a particular state, or having an emotional response to a thing, is not limited to providing the individual with web-based content, and may include any action consistent with the determined state. In other words, the determined state can be used by the host (e.g., automobile, smartphone, etc.) to determine context, referred to herein as “situational context.”
In one embodiment of the present invention, the host 1004 is a network-enabled device and is configured to communicate with at least one remote device (e.g., 1006, 1008, 1010) via a wide area network (WAN) 1000. For example, the host 1004 may be configured to store/retrieve individual state profiles (e.g., PAIID) on/from a remote database (e.g., a “cloud”) 1010, and/or share individual state profiles (e.g., PAIID) with other network-enabled devices (e.g., 1006, 1008). The profiles could be stored for future retrieval, or shared in order to allow other devices to determine an individual's current state. As discussed above, the host 1004 may gather self-reporting data that links characteristics of the individual to particular states. By sharing this data with other devices, those devices can more readily determine the individual's current state without having to gather (from the individual) self-reporting (or calibration) data. The database 1010 could also be used to store historical states, or states of the individual over a period of time (e.g., a historical log of the individual's prior states). The log could then be used, either alone or in conjunction with other data, to determine an individual's state during a relevant time or time period (e.g., when the individual was gaining weight, at the time of an accident, when performing a discrete or specific action, etc.), or to determine indications as to psychological aptitude or fitness to perform certain functions where an individual's state is of critical importance, such as, but not limited to, piloting a plane, driving a heavy goods vehicle, or executing trading instructions on financial or commodities exchanges.
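A historical state log and a time-window query over it might be sketched as follows; the fields are hypothetical, and persistence (locally or on the remote database 1010) is left out for brevity.

```python
from dataclasses import dataclass

@dataclass
class LogEntry:
    timestamp: float   # seconds since epoch
    state: str

log = []   # in practice, persisted locally or on the remote database 1010

def states_during(start, end):
    """States recorded in [start, end], e.g. around the time of an accident."""
    return [e.state for e in log if start <= e.timestamp <= end]

log.append(LogEntry(1000.0, "calm"))
log.append(LogEntry(2000.0, "agitated"))
print(states_during(1500.0, 2500.0))   # -> ['agitated']
```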
The state log could be further utilized to generate a state “bot,” which is an agent of the individual, capable of being distributed over a network, that looks for information on behalf of the individual which is linked to a particular thing the individual has an “interest” in, or wishes to be informed of (whether positive or negative), conditional on their being in that particular state.
In an alternate embodiment, information such as historical logs or individual state profiles (e.g., PAIID) is also, or alternatively, stored on a memory device 1024 on the host 1004.
It should be appreciated that in embodiments where the individual is responding to a thing, the thing could be anything in close proximity to the individual, including a person (or a person's device (e.g., smartphone, etc.)), a place (e.g., based on GPS coordinates, etc.), or content shown to the user (e.g., subject matter of textual data like an email, chat message, text message, or web page, words included in textual data like an email, chat message, text message, or web page, subject matter of video data, subject matter of audio data, etc.). The “thing” or data related thereto can either be provided by the network device to the Web host, or may already be known to the Web host (e.g., when the individual is responding to web-based content provided by the Web host, the emotional response thereto could trigger additional data, such as an advertisement).
A method of carrying out the present invention, in accordance with one embodiment of the present invention, is shown in the appended drawings. It should be appreciated that the present invention is not limited to the method shown therein.
While biometric data, and the like, can be very simple in nature (e.g., identifying the characteristic being measured, such as blood pressure, and the measured value, such as 120/80), it can also be quite complex, allowing for data to be stored for subsequent use (e.g., creating profiles, charts, etc.).
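One way such a richer, context-bearing record might be structured is sketched below; all fields beyond the characteristic and its measured value are illustrative assumptions about context that profiles and charts might later draw on.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BiometricRecord:
    characteristic: str               # e.g. "blood_pressure"
    value: str                        # e.g. "120/80"
    timestamp: float                  # when the measurement was taken
    location: Optional[str] = None    # e.g. a GPS-derived place
    thing: Optional[str] = None       # what the individual was responding to
    inferred_states: List[str] = field(default_factory=list)

record = BiometricRecord("blood_pressure", "120/80", timestamp=1693700000.0,
                         location="jewelry shop", inferred_states=["joy"])
```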
A method of carrying out the present invention, in accordance with another embodiment of the present invention, is shown in the appended drawings. Again, it should be appreciated that the present invention is not limited to the method shown therein.
Having thus described several embodiments of a system and method for using biometric data to determine at least one state, and using the same to perform a particular result (i.e., personalization preference optimization), it should be apparent to those skilled in the art that certain advantages of the system and method have been achieved. It should also be appreciated that various modifications, adaptations, and alternative embodiments thereof may be made within the scope and spirit of the present invention. The invention is solely defined by the following claims.
Claims
1. A method for using at least self-reported and biometric data to determine at least one state of a user and to perform at least one action in response to said determined state, said determined state being one of an emotional state and a physical state of said user, comprising the steps of:
- receiving by a processor first biometric data of said user, said first biometric data being received from at least one sensor;
- using at least said first biometric data to infer a first state of said user at a time that said first biometric data is received;
- receiving by said processor self-reporting data from said user, said self-reporting data being received immediately after said first biometric data is received and allowing said user to explicitly identify their current state, thereby indicating whether said inferred state is correct;
- storing said first biometric data and said self-reporting data in a memory to associate said first biometric data with said self-reporting data;
- receiving by said processor a request from said user to perform an action, said request being received after said first biometric data;
- using at least said first biometric data, said request, and said self-reporting data to determine a second state of said user; and
- performing said at least one action based at least on said second state, wherein said at least one action is said requested action if said second state is normal and therefore consistent with said performance of said requested action and said at least one action is a second action if said second state is abnormal and therefore inconsistent with said performance of said requested action, said second action including notifying said user of (i) said second state and (ii) an alternate action that could be performed instead of said requested action, wherein said alternate action is an unrequested action and different from said requested action, said alternate action being performed if said user indicates in the affirmative that said alternate action should be performed, otherwise, in the absence of said affirmative indication, performing said requested action.
2. The method of claim 1, wherein said first biometric data includes at least one change in facial muscle expressions, breathing rates, speech pitch, auditory changes, and body movement.
3. The method of claim 1, wherein said first state comprises one of happiness, sadness, surprise, anger, disgust, fear, euphoria, attraction, love, calmness, amusement, excitement, tiredness, hunger, thirst, well-being, sick, failure, triumph, interest, enthusiasm, animation, reinvigoration, and satisfaction.
4. The method of claim 1, wherein said step of receiving said self-reporting data from said user comprises receiving at least one of audible data from said user via a microphone and tactile data from said user via an input device.
5. The method of claim 1, further comprising the step of receiving by said processor ambient data comprising at least one of location of said user, temperature at said location, altitude at said location, and time of day, wherein said processor further uses said ambient data to determine said second state of said user at said time that said second biometric data is received.
6. The method of claim 1, wherein said step of using said first biometric data to infer said first state further comprises comparing said first biometric data to known correlations between different biometric data and different states to infer said first state.
7. The method of claim 1, wherein said second state is further determined from other biometric data and other self-reporting data, where individual ones of said other self-reporting data are received immediately after individual ones of said other biometric data.
8. The method of claim 1, further comprising a step of performing both said requested action and said alternate action.
9. The method of claim 1, further comprising the step of prompting said user to explicitly identify their current state.
10. The method of claim 9, wherein said step of prompting said user to explicitly identify their current state further includes providing said user with said first inferred state.
11. The method of claim 1, wherein said step of using at least said first biometric data and said self-reporting data to determine said second state comprises checking whether said explicitly identified state matches said first inferred state.
12. The method of claim 1, wherein second biometric data is extracted from said request and used along with said first biometric data and said self-reporting data to determine said second state.
13. The method of claim 1, further comprising the step of providing said first state to said user prior to receiving said self-reporting data.
14. The method of claim 13, wherein said self-reporting data comprises at least one of confirming, denying, and correcting said first state.
15. The method of claim 1, further comprising the step of determining an intensity level for said second state, said intensity level being a numerical value corresponding to a strength of said second state, said action being further based on at least said intensity level.
16. A system for determining at least one state of a user, said at least one state being one of an emotional state and a physical state of said user, comprising:
- at least one processor; and
- at least one memory device in communication with said processor and for storing machine readable instructions, said machine readable instructions being adapted to perform the step of: receiving first biometric data of said user, said first biometric data being received from at least one sensor; using at least said first biometric data to infer a first state of said user at a time that said first biometric data is received; receiving self-reporting data from said user, said self-reporting data being received immediately after said first biometric data is received and allowing said user to explicitly identify their current state, said state being said first state or a third state; storing said first biometric data and said self-reporting data in said at least one memory device so that said first biometric data is linked to said self-reporting data; receiving a request from said user to perform an action, said request being received after said self-reporting data has been received; using at least said first biometric data, said request, and said self-reporting data to determine a second state of said user; and performing at least one action, said action being based on at least said second state, wherein said action is said requested action if said second state is normal and therefore consistent with said performance of said requested action and said at least one action is a second action if said second state is abnormal and therefore inconsistent with said performance of said requested action, said second action including then notifying said user of (i) said second state and (ii) an alternate action that could be performed instead of said requested action, wherein said alternate action is an unrequested action and different from said requested action, said alternate action being performed if said user indicates in the affirmative that said alternate action should be performed, otherwise, in the absence of said affirmative indication, performing said requested action.
17. The system of claim 16, wherein said first biometric data includes at least one change in facial muscle expressions, breathing rates, speech pitch, auditory changes, and body movement.
18. The system of claim 16, further comprising at least one of a microphone and a keyboard for receiving said self-reporting data from said user.
19. The system of claim 16, wherein said second state is further determined from other biometric data and other self-reporting data from at least one other user.
20. The system of claim 16, wherein said sensor is a microphone.
21. The system of claim 16, further comprising an Internet of Things (IOT), said IOT comprising said at least one processor and said at least one memory device.
22. The system of claim 16, wherein said machine readable instructions are further adapted to prompt said user to identify their current state.
23. The system of claim 16, wherein said machine readable instructions are further adapted to perform said requested action in addition to said alternate action.
24. The system of claim 23, wherein said requested action is performed before said alternate action.
25. The system of claim 16, wherein said machine readable instructions are further adapted to provide said first state to said user, said self-reporting data comprising at least one of confirming, denying, and correcting said first state.
26. The system of claim 16, wherein said machine readable instructions are further adapted to determine a strength of said second state, said action being further based on at least said strength of said second state.
27. A system that uses artificial intelligence (AI) to determine a state and to use said state to perform at least one action, comprising:
- at least one server in communication with a wide area network (WAN);
- a mobile device in communication with said at least one server via said WAN, said mobile device comprising: at least one processor for downloading machine readable instructions from said at least one server; and at least one memory device for storing said machine readable instructions that are adapted to perform the step of: receiving first biometric data from said user, said first biometric data being acquired using at least one sensor and analyzed to infer a first state at a time that said first biometric data is received; receiving self-reporting data from said user, said self-reporting data being received immediately after said first biometric data is received and allowing said user to provide the current state, thereby indicating whether said inferred state is correct; receiving a request from said user to perform an action, said request being received after said self-reporting data has been received; and wherein at least said first biometric data, said request, and said self-reporting data are used to determine a second state at a time that said request is received; wherein at least one action is performed after said second state is determined and in response to said request, said at least one action (i) being based on said second state, (ii) being said requested action if said second state is normal and therefore consistent with said performance of said requested action, and (iii) being a second action if said second state is abnormal and therefore inconsistent with said performance of said requested action, said second action including then notifying said user of (i) said second state and (ii) an alternate action that could be performed instead of said requested action, wherein said alternate action is an unrequested action and different from said requested action, said alternate action being performed if said user indicates in the affirmative that said alternate action should be performed, otherwise, in the absence of said affirmative indication, performing said requested action.
28. The system of claim 27, wherein using at least said first biometric data, said request, and said self-reporting data to determine said second state further comprises (i) using said self-reporting data to check whether said first inferred state should be adjusted, and (ii) using at least the same and said request to determine said second state.
References Cited
20050289582 | December 29, 2005 | Tavares
20060293921 | December 28, 2006 | McCarthy |
20080065468 | March 13, 2008 | Berg |
20100174586 | July 8, 2010 | Berg, Jr. |
20140089399 | March 27, 2014 | Chun |
20140130076 | May 8, 2014 | Moore |
20140347265 | November 27, 2014 | Aimone |
20150178915 | June 25, 2015 | Chatterjee |
20150297109 | October 22, 2015 | Garten |
20190061772 | February 28, 2019 | Prinz |
WO-2015048338 | April 2015 | WO |
Type: Grant
Filed: Sep 3, 2016
Date of Patent: Dec 22, 2020
Patent Publication Number: 20170068994
Inventors: Robin S Slomkowski (Eugene, OR), Richard A Rothschild (London)
Primary Examiner: John Van Bramer
Application Number: 15/256,543
International Classification: G06Q 30/02 (20120101); G06K 9/00 (20060101); G06F 16/435 (20190101); H04L 29/08 (20060101);