Sensor Fusion Quality Of Data Determination

An unknown state value, represented by a neuron in a neural network that models a structure, is, in one embodiment, determined by using the difference between known values and the model output at equivalent model locations. The accuracy of the model-produced values is determined by comparing them to the known values. How much each known model-produced location was used to determine the unknown state value is also determined. These amounts and the accuracy of the model-produced values are then used to determine the accuracy of the model-produced value for the unknown state value.

Description
RELATED APPLICATIONS

The present application hereby incorporates by reference the entirety of, and claims priority to, U.S. provisional patent application Ser. No. 62/704,976, filed Jun. 5, 2020.

FIELD

The present disclosure relates to neural network methods for describing system topologies. More specifically, the present disclosure relates to determining unknown values in a model neuron, determining where in the model those neuron values were derived from, and determining the quality of data within the model representation compared to a physical location.

BACKGROUND

Data fusion is combining disparate data sets to pull off the magic trick of seeming to get more information out than was put in. More specifically, it entails combining data from different sources and analyzing it such that the different data sets and data views allow one to more fully understand what is being observed than any single data set allows.

Building models often have sparse data sets; there are only so many sensors in a building, and large spaces generally only measure temperature (or other state values) near walls. This leads to buildings whose heating and cooling are very difficult to control, as large portions of the building do not have simple ways of measuring state values, and if they are not measured, it is very difficult to alter them. Who has not been in an office where the thermostat is next door? No matter the temperature of your office, since it is not measured with a sensor, you are at the mercy of the person in the next office over, and their desired temperature setting.

SUMMARY

This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description section. This summary does not identify required or essential features of the claimed subject matter.

In embodiments, a method for computing neuron accuracy implemented by one or more computers is disclosed, comprising: running a neural network with test neurons and a target neuron using known sensor values at test neurons for a cost function to produce modeled test neuron values and a modeled value of the target neuron; comparing modeled test values to known sensor values, to determine quality of test neuron values; calculating connection strengths of each test value relative to the target neuron; and calculating accuracy of the target neuron using: quality of the test neuron values, and connection strengths between the target neuron and the test neurons.

In embodiments, running the neural network comprises using state time series values as input into the neural network for a running period.

In embodiments, the state time series values are weather values affecting a controlled space.

In embodiments, the cost function compares the known sensor values to the modeled test values.

In embodiments, calculating connection strength comprises using automatic differentiated vector gradients.

In embodiments, calculating accuracy of the target neuron comprises matrix multiplying the quality of test neuron values by connection strengths between the target neuron and the test neurons.

In embodiments, running the neural network comprises using machine learning techniques to determine connection strengths between the target neuron and the test neurons, wherein using the machine learning techniques comprises using automatic differentiation to backpropagate from the target neuron to the test neurons.

In embodiments, the neural network is a heterogenous neural network.

In embodiments, at least one test neuron has an accuracy and an associated sensor, and where the test neuron accuracy relates to accuracy of the associated sensor.

In embodiments, the neural network has internal values, and further comprising warming up the neural network using at least a portion of initial state time series values to modify the neural network internal values.

In embodiments, the neural network is warmed up by pre-running the neural network using successively larger portions of an input wave form until a goal state is reached.

In embodiments, the neural network models a controlled system, and wherein the controlled system comprises a controlled building system, a process control system, an HVAC system, an energy system, or an irrigation system.

In embodiments, a system for computing neuron accuracy is disclosed, comprising: a processor; a memory in operational communication with the processor; a neural network which resides at least partially in the memory, the neural network comprising test neurons with test values and at least one target neuron with a target neuron value; a neural network optimizer that optimizes the neural network using known sensor values and test values for a cost function to produce a solved neural network with modeled test values; a determiner that determines quality of the test neuron values by comparing test neuron values in the solved neural network to corresponding actual values; a machine learner that uses machine learning techniques to calculate connection strengths between the test neurons and the at least one target neuron; and a function calculator that calculates accuracy of the at least one target neuron value using: quality of the test neuron values, and connection strengths between the test neurons and the at least one target neuron.

In embodiments, the function calculator comprises matrix multiplying the quality of test neuron values by connection strengths between the target neuron and the test neurons.

In embodiments, at least one corresponding actual value comprises a sensor state value.

In embodiments, the sensor state value is derived from a sensor in a controlled space.

In embodiments, an initializer is disclosed, which uses state time series values as input into the neural network for a running period.

In embodiments, at least one of the machine learning techniques uses automatic differentiation to calculate connection strengths.

In embodiments, a computer-readable storage medium configured with data and instructions is disclosed, which upon execution by a processor perform a method for computing neuron accuracy, the method comprising: initializing values for at least some test neurons in a neural network, the test neurons representing corresponding actual values; specifying a target neuron in the neural network; optimizing the neural network using the actual values producing a solved neural network with a target neuron value and test neuron values; using machine learning techniques to determine connection strengths between the target neuron and the test neurons; determining quality of the test neuron values by comparing test neuron values in the solved neural network to corresponding actual neuron values; and calculating accuracy of the target neuron using: quality of the test neuron values, and connection strengths between the target neuron and the at least one test neuron.

In embodiments, the corresponding actual values are sensor values that correspond to test neuron locations.

Additional features and advantages will become apparent from the following detailed description of illustrated embodiments, which proceeds with reference to accompanying drawings.

BRIEF DESCRIPTION OF THE FIGURES

Non-limiting and non-exhaustive embodiments of the present embodiments are described with reference to the following FIGURES, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified.

FIG. 1 depicts a computing system in accordance with one or more embodiments.

FIG. 2 depicts a distributed computing system in accordance with one or more embodiments.

FIG. 2A depicts an exemplary system configured to determine quality of data using sensor fusion in accordance with one or more embodiments.

FIG. 3 depicts an exemplary system configured to determine quality of data using sensor fusion in accordance with one or more embodiments.

FIG. 4 is a functional block diagram that illustrates an exemplary compute function with which described embodiments can be implemented.

FIG. 5 is a diagram showing an exemplary sensor fusion and quality of data neural network system in conjunction with which described embodiments can be implemented.

FIG. 6 is a diagram showing an exemplary neural network sensor fusion and quality of data system with computed neurons in conjunction with which described embodiments can be implemented.

FIG. 7 is a diagram showing an exemplary neural network sensor fusion and quality of data system with component vector propagation in conjunction with which described embodiments can be implemented.

FIG. 8 is a diagram showing an exemplary neural network sensor fusion and quality of data system with fused data computation in conjunction with which described embodiments can be implemented.

FIG. 9 is a table showing an exemplary quality of data computation method in conjunction with which described embodiments can be implemented.

FIG. 10 is a block diagram showing types of neural networks with which described embodiments can be implemented.

FIG. 11 is a diagram showing data streams with which described embodiments can be implemented.

FIG. 12 is a diagram showing exemplary time series data with which described embodiments can be implemented.

Corresponding reference characters indicate corresponding components throughout the several views of the drawings. Skilled artisans will appreciate that elements in the FIGURES are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of various embodiments. Also, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are often not depicted in order to facilitate a less obstructed view of these various embodiments.

DETAILED DESCRIPTION

Disclosed below are representative embodiments of methods, computer-readable media, and systems having particular applicability to systems and methods for building neural networks that describe physical structures. Described embodiments implement one or more of the described technologies.

In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present embodiments. It will be apparent, however, to one having ordinary skill in the art that the specific details need not be employed to practice the present embodiments. In other instances, well-known materials or methods have not been described in detail in order to avoid obscuring the present embodiments.

Reference throughout this specification to “one embodiment”, “an embodiment”, “one example”, or “an example” means that a particular feature, structure or characteristic described in connection with the embodiment or example is included in at least one embodiment of the present embodiments. Thus, appearances of the phrases “in one embodiment”, “in an embodiment”, “one example” or “an example” in various places throughout this specification are not necessarily all referring to the same embodiment or example. Furthermore, the particular features, structures or characteristics may be combined in any suitable combinations and/or sub-combinations in one or more embodiments or examples.

Embodiments in accordance with the present embodiments may be implemented as an apparatus, method, or computer program product. Accordingly, the present embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects. Furthermore, the present embodiments may take the form of a computer program product embodied in any tangible medium of expression having computer-usable program code embodied in the medium.

Any combination of one or more computer-usable or computer-readable media may be utilized. For example, a computer-readable medium may include one or more of a portable computer diskette, a hard disk, a random access memory (RAM) device, a read-only memory (ROM) device, an erasable programmable read-only memory (EPROM or Flash memory) device, a portable compact disc read-only memory (CDROM), an optical storage device, and a magnetic storage device. Computer program code for carrying out operations of the present embodiments may be written in any combination of one or more programming languages.

Embodiments may be implemented in edge computing environments where the computing is done within a network which, in some implementations, may not be connected to an outside internet, although the edge computing environment may be connected with an internal internet. In these implementations the environment is much safer, and is much easier to secure from ransomware attacks and the like. This internet may be wired, wireless, or a combination of both. Embodiments may also be implemented in cloud computing environments. A cloud model can be composed of various characteristics (e.g., on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, etc.), service models (e.g., Software as a Service (“SaaS”), Platform as a Service (“PaaS”), Infrastructure as a Service (“IaaS”)), and deployment models (e.g., private cloud, community cloud, public cloud, hybrid cloud, etc.).

The flowchart and block diagrams in the flow diagrams illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present embodiments. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, may be implemented by general or special purpose hardware-based systems that perform the specified functions or acts, or combinations of general and special purpose hardware and computer instructions. These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.

As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having,” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, article, or apparatus.

Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).

Additionally, any examples or illustrations given herein are not to be regarded in any way as restrictions on, limits to, or express definitions of any term or terms with which they are utilized. Instead, these examples or illustrations are to be regarded as being described with respect to one particular embodiment and as being illustrative only. Those of ordinary skill in the art will appreciate that any term or terms with which these examples or illustrations are utilized will encompass other embodiments which may or may not be given therewith or elsewhere in the specification and all such embodiments are intended to be included within the scope of that term or terms. Language designating such non-limiting examples and illustrations includes, but is not limited to: “for example,” “for instance,” “e.g.,” and “in one embodiment.”

Various alternatives to the implementations described herein are possible. For example, embodiments described with reference to flowchart diagrams can be altered, such as, for example, by changing the ordering of stages shown in the flowcharts, or by repeating or omitting certain stages.

I. Overview

Deep physics networks are structured similarly, but not identically, to neural networks. Unlike the homogeneous activation functions of neural nets, each neuron comprises unique physical characteristics representing functions in a thermodynamic system. Once configured, known sensor values are fed into their corresponding neurons in the network. Once the network is trained, any location in the thermodynamic system can be introspected to extract fused data. The process provides powerful generalized data fusion, data synthesis, and quality assessment through inference, even where no sensors exist, for any thermodynamic system. The same mechanism enables model optimization, and the time series can then be used for real-time sequence generation and fault detection.

In an exemplary environment, a neuron model system comprises heterogenous neural networks whose neurons have activation functions that represent individual material layers of a building and various values, such as their resistance and capacitance. These neurons are formed into parallel and branchless neural network strings that propagate heat (or other state values) through them. FIG. 1 at 100 shows an exemplary embodiment overview that can be used to create and discretize such a neuron model system. This neuron model system may be described as a thermodynamic model. In a building or in a model of a building, a missing sensor value can be determined, within an error range. Existing sensors can be checked to determine how accurate they are, within an error range. Areas that do not have specific sensors can have their sensor values determined within an error range. This makes modifying state much easier, as changes in the system can be checked, and it can thus be determined whether such changes had the desired effect. This can also be instrumental during commissioning, as a much more thorough state of a controlled space can be determined. Other benefits will be obvious to those of skill in the art.

II. Exemplary Computing Environment

FIG. 1 illustrates a generalized example of a suitable computing environment 100 in which described embodiments may be implemented. The computing environment 100 is not intended to suggest any limitation as to scope of use or functionality of the disclosure, as the present disclosure may be implemented in diverse general-purpose or special-purpose computing environments.

With reference to FIG. 1, the core processing is indicated by the core processing unit 130. The computing environment 100 includes at least one central processing unit 110 and memory 120. The central processing unit 110 executes computer-executable instructions and may be a real or a virtual processor. It may also comprise a vector processor 112, which allows same-length neuron strings to be processed rapidly. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power and as such the vector processor 112, GPU 115, and CPU can be running simultaneously. The memory 120 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two. The memory 120 stores software 185 implementing the described methods and systems of sensor fusion quality of data determination.

A computing environment may have additional features. For example, the computing environment 100 includes storage 140, one or more input devices 150, one or more output devices 155, one or more network connections (e.g., wired, wireless, etc.) 160 as well as other communication connections 170. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment 100. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment 100, and coordinates activities of the components of the computing environment 100. The computing system may also be distributed, running portions of the software 185 on different CPUs.

The storage 140 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, flash drives, or any other medium which can be used to store information and which can be accessed within the computing environment 100. The storage 140 stores instructions for the software, such as software 185 to implement methods of sensor fusion utilizing neural networks.

The input device(s) 150 may be a device that allows a user or another device to communicate with the computing environment 100, such as a keyboard, video camera, microphone, mouse, pen, trackball, scanning device, touchscreen, or another device that provides input to the computing environment 100. For audio, the input device(s) 150 may be a sound card or similar device that accepts audio input in analog or digital form, or a CD-ROM reader that provides audio samples to the computing environment. The output device(s) 155 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment 100.

The communication connection(s) 170 enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, compressed graphics information, or other data in a modulated data signal. Communication connections 170 may comprise input devices 150, output devices 155, and input/output devices that allow a client device to communicate with another device over network 160. A communication device may include one or more wireless transceivers for performing wireless communication and/or one or more communication ports for performing wired communication. These connections may include network connections, which may be a wired or wireless network such as the Internet, an intranet, a LAN, a WAN, a cellular network or another type of network. It will be understood that network 160 may be a combination of multiple different kinds of wired or wireless networks. The network 160 may be a distributed network, with multiple computers, which might be building controllers, acting in tandem. A communication connection 170 may be a portable communications device such as a wireless handheld device, a cell phone device, and so on.

Computer-readable media are any available non-transient tangible media that can be accessed within a computing environment. By way of example, and not limitation, with the computing environment 100, computer-readable media include memory 120, storage 140, communication media, and combinations of any of the above. Computer-readable storage media 165, which may be used to store computer-readable media, comprise instructions 175 and data 180. Data sources may be computing devices, such as general hardware platform servers configured to receive and transmit information over the communications connections 170. The computing environment 100 may be an electrical controller that is directly connected to various resources, such as HVAC resources, and which has a CPU 110, a GPU 115, memory 120, input devices 150, communication connections 170, and/or other features shown in the computing environment 100. The computing environment 100 may be a series of distributed computers. These distributed computers may comprise a series of connected electrical controllers.

Although the operations of some of the disclosed methods are described in a particular, sequential order for convenient presentation, it should be understood that this manner of description encompasses rearrangement, unless a particular ordering is required by specific language set forth below. For example, operations described sequentially can be rearranged or performed concurrently. Moreover, for the sake of simplicity, the attached figures may not show the various ways in which the disclosed methods, apparatus, and systems can be used in conjunction with other methods, apparatus, and systems. Additionally, the description sometimes uses terms like “determine,” “build,” and “identify” to describe the disclosed technology. These terms are high-level abstractions of the actual operations that are performed. The actual operations that correspond to these terms will vary depending on the particular implementation and are readily discernible by one of ordinary skill in the art.

Further, data produced from any of the disclosed methods can be created, updated, or stored on tangible computer-readable media (e.g., one or more CDs, volatile memory components (such as DRAM or SRAM), or nonvolatile memory components (such as hard drives)) using a variety of different data structures or formats. Such data can be created or updated at a local computer or over a network (e.g., by a server computer), or stored and accessed in a cloud computing environment.

FIG. 2 depicts a distributed computing system 200 with which embodiments disclosed herein may be implemented. Two or more computerized controllers 205 may incorporate all or part of a computing environment 100, 210. These computerized controllers 205 may be connected 215 to each other using wired or wireless connections. The controllers may be within a controlled space 220. A controlled space 220 may be a space that has a resource, sensor, or other equipment that can modify or determine one or more states of the space, such as a sensor (to determine space state), a heater or an air conditioner (to modify temperature), a speaker (to modify noise), locks, lights, etc. A controlled space may be divided into zones, which might have a sensor or no sensor. Controlled spaces might be, e.g., an automated building, a process control system, an HVAC system, an energy system, an irrigation system, a building irrigation system, etc. Computerized controllers 205 may comprise a distributed system that can run without using connections (such as internet connections) outside of the computing system 200 itself. This allows the system to run with low latency, and with other benefits of edge computing systems. The system may also run without access to an outside internet connection, which may make the system far less vulnerable to outside security threats.

III. Exemplary System Embodiments

FIG. 2A depicts a brief overview of an exemplary room-sensor-neural network system 200A that can be used to perform sensor fusion. The system 200A is not intended to suggest any limitation as to scope of use or functionality of the disclosure, as the present disclosure may be implemented in many different systems. In this simplified environment, there are three rooms (Room 1 205A, Room 2 210A, and Room 3 215A). Two of the rooms (Room 1 205A and Room 2 210A) have state sensors (sensor 1 220A and sensor 2 225A), while Room 3 215A does not. The neural network will fuse the sensor values from sensor 1 220A and sensor 2 225A to determine a probable sensor value for Room 3 215A. The neural network 230A, among other neurons, has a neuron 235A that represents sensor 1 220A (and may be considered associated with sensor 1 in that the sensor 1 220A value is represented in the neural network by neuron 235A) and a neuron 240A that represents sensor 2 225A. These may be called test neurons. The neural network 230A also has a neuron 245A that represents the sensor value for the nonexistent sensor in Room 3 215A. This may be called a target neuron. The values for sensor 1 220A and sensor 2 225A may be collected for a period of time. The neural network 230A may then be solved for the collected values of sensor 1 220A and sensor 2 225A at neurons 235A and 240A (the test neurons). The value in neuron 245A (the target neuron) may be considered the first pass at a state value in Room 3 215A. For example, if the known sensors were temperature sensors, the value in the neuron 245A would be a temperature value. The solved known state values are then compared to the actual sensor values to determine how good the test values are. The degree to which each test neuron was used to determine the value of the target neuron is determined. Then, a combination of the test values, how close the test values are to the actual values, and the percentage of each test value used to determine the target value is used to determine a final, fused value.
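The following is a minimal Python sketch of this overall flow. The helper functions solve_network and connection_strengths are hypothetical placeholders for the neural network solve and gradient steps described in the method section below; they are not part of the disclosed system.

    # Hypothetical outline of the fusion flow of FIG. 2A. The helpers
    # solve_network and connection_strengths stand in for steps that are
    # detailed later in the method section.

    def fuse_missing_sensor(sensor_series, solve_network, connection_strengths):
        """sensor_series: dict mapping test-neuron id -> time series of known sensor values."""
        # 1. Solve the network against the known sensor histories.
        modeled_test, modeled_target = solve_network(sensor_series)

        # 2. Quality of each test neuron: 1 minus its relative error vs. its sensor.
        actual = {k: series[-1] for k, series in sensor_series.items()}
        quality = {k: 1.0 - abs(modeled_test[k] - actual[k]) / abs(actual[k])
                   for k in actual}

        # 3. How strongly each test neuron contributed to the target value.
        weights = connection_strengths(modeled_test, modeled_target)  # fractions summing to 1.0

        # 4. Accuracy of the fused (target) value: quality weighted by contribution.
        accuracy = sum(quality[k] * weights[k] for k in quality)
        return modeled_target, accuracy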

FIG. 3 depicts an exemplary system 300 for fusing sensor data. The system may include at least one processor 310 residing within a controller 307, which may be part of a computerized controller system 200. The controller 307 may be in a controlled space 305. Memory 312 may comprise a neural network 315. In some embodiments, the neural network may reside partially in memory. In some embodiments, the neural network may thermodynamically model a controlled space, e.g., 220. This neural network may thermodynamically represent the controlled space in some way. It may represent the controlled space 220 as a single space, or may break the controlled space up into different zones, which thermodynamically affect each other. The neural network 315 may comprise target neurons 320 and at least one test neuron 325 that may represent individual material layers of a physical space and how they change state, e.g., their resistance, capacitance, and/or other values that describe how state flows through the section of the controlled space 220 that is being modeled. In some embodiments, other neural structures are used. In some embodiments, structure models other than neural networks are used. There may also be a target neuron value 330 and a test neuron value 335 that represent some state stored in the target neuron 320 and/or test neuron 325.

An initializer 340 may be included that initializes a neural network. Such an initializer is described in patent application Ser. No. 17/308,294, filed May 5, 2021, and incorporated by reference in its entirety. A neural network optimizer 345 may be included that optimizes the neural network using known sensor values and test values for a cost function to produce modeled test values. A determiner 355 determines quality of the test neuron values by comparing test neuron values in the solved, trained neural network to corresponding actual values, which may be known sensor values. A machine learner 360 uses machine learning techniques to calculate connection strengths between the target neurons and the at least one test neuron. A function calculator 365 calculates accuracy of the target neuron using quality of the test neuron values and connection strengths between the target neuron and at least one test neuron. Known sensor values may be used in the function calculator to calculate the accuracy of the test values.

IV. Exemplary Method Embodiments

FIG. 4 depicts an exemplary method 400 for computing neuron accuracy using sensor fusion. The operations of method 400 presented below are intended to be illustrative. In some embodiments, method 400 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 400 are illustrated in FIG. 4 and described below is not intended to be limiting.

In some embodiments, method 400 may be implemented in one or more processing devices, such as shown with reference to FIGS. 1 and 2. The one or more processing devices may include one or more devices executing some or all of the operations of method 400 in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 400.

In some embodiments, the neural network may reside partially in memory. In some embodiments, the neural network may thermodynamically model a controlled space, e.g., 220. It may represent the controlled space 220 as a single space, or may break the controlled space up into different zones, which thermodynamically affect each other. The neural network 315 may comprise neurons 320 that represent individual material layers of a physical space and how they change state, e.g., their resistance, capacitance, and/or other values that describe how state flows through the section of the controlled space 220 that is being modeled. In some neural networks 315, neurons 320 (which may represent material layers) are formed into parallel and branchless neural network strings that propagate heat (and/or other state values) through them. In some embodiments, other neural structures are used. In some embodiments, structure models other than neural networks are used. More information on neural networks can be found with reference to patent application Ser. No. 17/143,796, filed on Jan. 7, 2021, and hereby incorporated by reference in its entirety.

At operation 405, values are initialized for at least some test neurons in a neural network representing corresponding actual values 305. This may entail gathering known sensor data. This sensor data (e.g., from sensor 1 220A and sensor 2 225A) may be from locations in a controlled space (e.g., Room 1 205A and Room 2 210A). More information is given with reference to FIG. 11.

At operation 410, a target neuron is specified. This may be a neuron that represents a building zone that does not have a sensor in it, whose value we are looking for, as shown in FIG. 2A at 215A. Here, the target neuron is the neuron 245A in the neural network 230A that represents the location (e.g., Room 3 215A) where information is not known.

At operation 415, a neural network is run using known sensor values at test neurons for a cost function to produce modeled test neuron values and a modeled target neuron value. This comprises running the neural net and propagating the test neuron state (e.g., temperature, humidity, etc.) to neurons in the network. The neural network cost function measures the difference between the known sensor data and the simulated sensor data at the test neuron locations. After the model has run to a state with sufficient accuracy (e.g., the cost function is at a desired value indicating that the neural network is producing results where the test neurons are within a certain value of the known sensor values), the target neuron and the test neurons (those neurons that represent zones of the building being modeled that have actual sensors) all have simulated values. Even though it is called a cost function here, it is intended to be synonymous with “error function” and “loss function.” The cost function produces a cost, which may be a value, a vector, or something else, that shows the difference between the modeled test neuron values and the known sensor values.
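As a concrete illustration of one possible cost function, the following sketch assumes a mean-squared-error form (the disclosure does not mandate any particular form) and uses the example values discussed with reference to FIGS. 5 and 6 below.

    import numpy as np

    def cost(modeled_test_values, known_sensor_values):
        """Mean squared error between the simulated values at the test-neuron
        locations and the measured sensor values; the network is run until
        this falls below a chosen threshold."""
        modeled = np.asarray(modeled_test_values, dtype=float)
        known = np.asarray(known_sensor_values, dtype=float)
        return float(np.mean((modeled - known) ** 2))

    # Using the modeled values of FIG. 5 and the sensor values of the FIG. 6 example:
    print(cost([21.0, 25.0, 29.0, 19.0], [21.84, 25.25, 30.45, 19.57]))  # ~0.80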

FIG. 5 discloses a simple thermodynamic model with neurons 500 that are trained for data fusion. Network neurons 530 (checked) have been connected to test neurons that represent building sensors (with values 21.0 505, 25.0 510, 29.0 515, and 19.0 520). The target neuron 525 has been given the value 26.0 by the neural network model.

At operation 420, modeled test values are compared to known sensor values to determine the quality of the test neuron values. Once the network has been trained, producing fused data, the Quality of Data is assessed for each connected virtual node relative to each sensor that is connected to that node. The result is as follows: if a corresponding deep physics node has a computed error of 0.9%, then its QoD is assigned as 99.1%; that is, 100% minus the computed error. To describe in more detail, and with reference to FIG. 5, let us assume the known sensor values are 21.84 (for neuron 505), 25.25 (for neuron 510), 30.45 (for neuron 515), and 19.57 (for neuron 520). FIG. 6 at 600 discloses a computation of the quality of the simulated data, by percent, relative to each neuron's sensor points. Neuron value 505 is 96% accurate 605 when compared to the actual sensor value it is being compared to. Neuron 510 has an accuracy of 99% 610, neuron 515 has an accuracy of 95%, as shown at 615, and neuron 520 has an accuracy of 97%, as shown at 620.
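A minimal sketch of this Quality of Data computation follows, assuming a relative-error metric; that assumption reproduces the FIG. 6 percentages.

    import numpy as np

    modeled = np.array([21.0, 25.0, 29.0, 19.0])     # modeled test values (FIG. 5)
    actual = np.array([21.84, 25.25, 30.45, 19.57])  # known sensor values

    # Quality of Data: 100% minus the relative error of each modeled value.
    qod = 100.0 * (1.0 - np.abs(modeled - actual) / actual)
    print(np.round(qod))  # -> [96. 99. 95. 97.], matching FIG. 6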

At operation 425, connection strengths of each test value relative to the target neuron are calculated; that is, how much of each test value was used to determine the target value. This may be calculated by using the component vector gradient from the test nodes to the target node, by using the weights (e.g., the cumulative weights) from the test nodes to the target node, etc. These connection strengths may also be determined by backpropagating from the target neuron to the test neurons. FIG. 7 at 700 shows a neural network with the component contribution 705 of the sensors relative to the target data fusion node 710. The connection strengths found are subtracted from 100% to give the Quality of Data (QoD) values. Those of skill in the art will be aware of other methods as well. The fused node 710 QoD value may be computed by matrix multiplication of the list of the sensor-originating nodes' QoD values by each of these component weights. This allows the target neuron to calculate what percentage of its data values were derived from the test sensors. FIG. 8 at 800 discloses a neural net matrix with the computed value for the quality of data of the fused node. Those percentages, after subtraction from 100%, are 95%, 96%, 99%, and 97%.
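One way to obtain such connection strengths is via automatic differentiation. The following sketch uses PyTorch autograd with a toy stand-in model; the weighted sum and its weights are purely hypothetical and are not the deep physics network of this disclosure. The normalized absolute gradients play the role of the component vector.

    import torch

    # Toy stand-in for the trained network: the target value is a differentiable
    # function of the four test-neuron values.
    test_values = torch.tensor([21.84, 25.25, 30.45, 19.57], requires_grad=True)
    hypothetical_weights = torch.tensor([0.35, 0.15, 0.35, 0.15])
    target_value = (hypothetical_weights * test_values).sum()  # stand-in model output

    # Backpropagate from the target neuron to the test neurons.
    grads, = torch.autograd.grad(target_value, test_values)

    # Connection strength of each test neuron: its normalized gradient share.
    strengths = grads.abs() / grads.abs().sum()
    print(strengths)  # fractions summing to 1.0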

At operation 430, accuracy of the target neuron is calculated using the quality of the test neuron values and the connection strengths between the target neuron and the test neurons. Once the connection strengths are known, the target neuron accuracy can be determined by matrix multiplying the quality of test neuron values by the connection strengths between the target neuron and the test neurons. Here, matrix multiplication works just like the dot product.

FIG. 9 at 900 discloses an example of such a method to determine target neuron accuracy. The connection strengths are listed in the table as vector components 910. The accuracy of each test neuron 905 is matrix multiplied by the connection strength 910 of the test neurons, giving a result 915 for each test neuron, which are summed together to give the final value of 96.3, which is the target neuron accuracy, as seen in FIG. 8 at 805.
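The following sketch reproduces this arithmetic. The quality values come from FIG. 6; the connection-strength components here are hypothetical stand-ins (FIG. 9's actual components are not reproduced), chosen only to show the dot-product computation.

    import numpy as np

    qod = np.array([96.0, 99.0, 95.0, 97.0])        # test-neuron quality values (FIG. 6)
    strengths = np.array([0.35, 0.15, 0.35, 0.15])  # hypothetical components, summing to 1

    # Target-neuron accuracy: matrix multiplication, here a plain dot product.
    target_accuracy = qod @ strengths
    print(target_accuracy)  # 96.25 with these hypothetical weights; FIG. 9's own
                            # components yield the 96.3 shown at 805 in FIG. 8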

FIG. 10 is a block diagram 1000 showing types of neural networks 1005 with which described embodiments can be implemented. In some implementations a heterogenous neural network 1010 may be used as the neural network described herein. The neural network 1005 may be composed of neurons that model thermodynamic characteristics of a controlled space 220, 305 using state transfer nodes with physics equations in activation functions that determine how state transfers between and/or through the various structures and pieces of structures (e.g., windows, floors, ceilings, air), and values that specify parameters for specific structures; i.e., an inner wall with no insulation will behave differently than an outer wall with considerable insulation. State enters one or more neurons and then propagates throughout the neuron structure. The neurons are branchless and parallel, allowing fast processing on vector processing machines. Automatic differentiation may be used to calculate backward propagation through the neural network. Heterogenous neural networks are described in U.S. Utility patent application Ser. No. 17/143,796, filed on Jan. 7, 2021, and hereby incorporated by reference in its entirety.
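By way of illustration only, the following is a highly simplified sketch of the kind of physics-based activation a single material-layer neuron in such a string might apply, assuming a first-order resistance/capacitance update. The function names and the explicit-Euler form are assumptions for illustration; the heterogenous networks of the incorporated application may differ substantially.

    import numpy as np

    def rc_layer_step(t_layer, t_prev, t_next, resistance, capacitance, dt):
        """One explicit-Euler update for a material-layer neuron: heat flows in
        from the neighboring layers through the layer's thermal resistance and
        is stored in its thermal capacitance."""
        q_in = (t_prev - t_layer) / resistance + (t_next - t_layer) / resistance
        return t_layer + dt * q_in / capacitance

    def propagate_string(layer_temps, resistance, capacitance, t_outdoor, t_indoor, dt):
        """Advance one branchless string of layer neurons by one timestep.
        Vectorized so that many same-length strings can be batched on a
        vector processor."""
        padded = np.concatenate(([t_outdoor], layer_temps, [t_indoor]))
        return rc_layer_step(padded[1:-1], padded[:-2], padded[2:],
                             resistance, capacitance, dt)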

FIG. 11 is a diagram 1100 showing collecting sensor data and running a neural network with which described embodiments can be implemented. A controlled space 1120 may have a sensor 1125 in it. The controlled space may be subject to weather (or other state) 1105 for a time period t(n) to t(0) 1115, which may be saved as state time series values (temperature over time, for example). The sensor 1125 may have data 1130 gathered for the same time period 1115. The state curve 1110 may be used as input into a neural network 315. A sensor value collected during this time period (such as the last value, at time t(0)) may be used as the target neuron value 330.

FIG. 12 at 1200 discloses a diagram that illustrates a time series that can be used to warm up the neural network by pre-running the neural network using successively larger portions of an input wave form until a goal state is reached. The idea of warming up a neural network is, generally, that the neural network has neurons that represent various bits of a controlled space, such as walls and rooms, or bigger or smaller bits. These neurons may start out with values that do not represent actual values in a structure; for example, all the temperature values (or values that are used to derive temperatures) may be at 0. This does not represent the actual values in the controlled space 1120 that is being modeled. To bring the neural network up to a reasonable state value, state values may be propagated forward through the network.

In more detail, neural network 315 may represent some controlled space 1120. This controlled space 1120 may have a sensor, e.g., 1125, that records state of the space 1120. State that affects the space 1120, such as a weather value 1105, may be gathered (e.g., from t(n) to t(0) 1115), producing a state curve, during the same time that data is being collected from a sensor 1125. A weather value may be a state value that can be derived from the weather affecting a controlled space, such as temperature, humidity, wind speed, cloudiness, dew point, etc. The neural network may be run with this state data 1115 as input to give the neural network interior values reasonable starting values before the neural network is run and trained to determine sensor fusion values. When a variable in the neural network representing the controlled space 1120 with the sensor 1125 matches the sensor data (or hits a threshold or comes within a certain value of a threshold value, etc.) at t(0), the neural network may be considered to be warmed up. A threshold may be the magnitude or intensity that must be exceeded for a certain reaction, phenomenon, result, or condition to occur or be manifested, as it is commonly defined.

At 1200, exemplary time series data is shown, with the timesteps running from t(n) 1235 to t(0) 1205. Initially, a set of the time series data may be chosen (e.g., from k(index) to 0). The time series data may be divided into x sections, each section with some number of timesteps. In some embodiments, each section may have the same number of timesteps, e.g., k timesteps 1210. In some embodiments, the data runs from a value within the time series to the last value taken, t(0) 1205. In some embodiments, the data may have a different ending point, or may run in a different direction. The first time the neural network is run, the time series data may be run from k 1220 to 0 1210. If a goal state is not reached, the second time the neural network is run, it may be run from k(2) 1225 to 0 1215, and so on up to k(x) 1230. In some embodiments, there may be a variable number of timesteps per section.

The chosen time series data is propagated through the neural network 315. This may be done using a neural network optimizer 345 or through a different method. Then, the value of a neuron variable may be determined. It then may be determined if the goal state has been reached. The goal state may comprise the neuron variable value reaching a threshold value or similar, an internal neuron value reaching a state (such as temperature) within some range of a sensor 1125 in a controlled space, an index value being greater than x, reaching the limit of the time series data, e.g., 1230, reaching a neural network running time limit, or reaching an error state.

If the goal state has been reached, in some embodiments, the program stops and the neural network may be considered warmed up. If the goal state has not been reached, then another set of time series data may be chosen (e.g., k(index+1) 1220), and the process continues.
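A minimal sketch of this warm-up loop follows, assuming a hypothetical network.run interface and a simple threshold test for the goal state; neither is specified by the disclosure.

    def warm_up(network, weather_series, sensor_series, k, x, tolerance):
        """Pre-run `network` on successively larger tails of the input wave form
        (k, 2k, ..., x*k timesteps, each ending at t(0)) until the simulated
        sensor value lands within `tolerance` of the measured value at t(0).
        `network.run` is a hypothetical interface used only for illustration."""
        for index in range(1, x + 1):
            window = weather_series[-index * k:]   # the k(index) ... t(0) portion
            simulated = network.run(window)        # propagate state forward
            if abs(simulated - sensor_series[-1]) <= tolerance:
                return True                        # goal state reached
        return False                               # reached k(x) without converging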

V. Exemplary Computer-Readable Medium that Comprises Sensor Fusion Quality of Data Determination

With reference to FIGS. 1, 2 and 3, some embodiments include a configured computer-readable storage medium 165. Medium 165 may include disks (magnetic, optical, or otherwise), RAM, EEPROMS or other ROMs, and/or other configurable memory, including computer-readable media (not directed to a manufactured transient phenomenon, such as an electrical, optical, or acoustical signal). The storage medium which is configured may be a removable storage medium 165 such as a CD, DVD, or flash memory. A general-purpose memory (which may be primary, such as RAM, ROM, CMOS, or flash; or may be secondary, such as a CD, a hard drive, an optical disk, or a removable flash drive) can be configured into an embodiment using the computing environment 100, the computerized controller 205, or the controller 307, or any combination of the above, in the form of data 180 and instructions 175, read from a source, such as a removable medium output device 155, to form a configured medium with data and instructions which upon execution by a processor perform a method for computing neuron accuracy. The configured medium 165 is capable of causing a computer system to perform actions as related herein.

Some embodiments provide or utilize a computer-readable storage medium 165 configured with software 185 which upon execution by at least a central processing unit 110 performs methods and systems described herein.

In view of the many possible embodiments to which the principles of the disclosed invention may be applied, it should be recognized that the illustrated embodiments are only preferred examples of the invention and should not be taken as limiting the scope of the invention. Rather, the scope of the invention is defined by the following claims. We therefore claim as our invention all that comes within the scope and spirit of these claims.

Claims

1. A method for computing neuron accuracy implemented by one or more computers comprising:

running a neural network with test neurons and a target neuron using known sensor values at test neurons for a cost function to produce modeled test neuron values and a modeled value of the target neuron;
comparing modeled test values to known sensor values, to determine quality of test neuron values;
calculating connection strengths of each test value relative to the target neuron; and
calculating accuracy of the target neuron using: quality of the test neuron values, and connection strengths between the target neuron and the test neurons.

2. The method of claim 1, wherein running the neural network comprises using state time series values as input into the neural network for a running period.

3. The method of claim 2, wherein the state time series values are weather values affecting a controlled space.

4. The method of claim 3, wherein the cost function compares the known sensor values to the modeled test values.

5. The method of claim 4, wherein calculating connection strength comprises using automatic differentiated vector gradients.

6. The method of claim 5, wherein calculating accuracy of the target neuron comprises matrix multiplying the quality of test neuron values by connection strengths between the target neuron and the test neurons.

7. The method of claim 1, wherein running the neural network comprises using machine learning techniques to determine connection strengths between the target neuron and the test neurons, wherein using the machine learning techniques comprises using automatic differentiation to backpropagate from the target neuron to the test neurons.

8. The method of claim 1, wherein the neural network is a heterogenous neural network.

9. The method of claim 1, wherein at least one test neuron has an accuracy and an associated sensor, and where the test neuron accuracy relates to accuracy of the associated sensor.

10. The method of claim 1, wherein the neural network has internal values, and further comprising warming up the neural network using at least a portion of initial state time series values to modify the neural network internal values.

11. The method of claim 10, further comprising warming up the neural network by pre-running the neural network using successively larger portions of an input wave form until a goal state is reached.

12. The method of claim 1, wherein the neural network models a controlled system, and wherein the controlled system comprises a controlled building system, a process control system, an HVAC system, an energy system, or an irrigation system.

13. A system for computing neuron accuracy comprising: a processor; a memory in operational communication with the processor;

a neural network which resides at least partially in the memory, the neural network comprising test neurons with test values and at least one target neuron with a target neuron value;
a neural network optimizer that optimizes the neural network using known sensor values and test values for a cost function to produce a solved neural network with modeled test values;
a determiner that determines quality of the test neuron values by comparing test neuron values in the solved neural network to corresponding actual values;
a machine learner that uses machine learning techniques to calculate connection strengths between the test neurons and the at least one target neuron; and
a function calculator that calculates accuracy of the at least one target neuron value using: quality of the test neuron values, and connection strengths between the test neurons and the at least one target neuron.

14. The system of claim 13, wherein the function calculator comprises matrix multiplying the quality of test neuron values by connection strengths between the target neuron and the test neurons.

15. The system of claim 13, wherein at least one corresponding actual value comprises a sensor state value.

16. The system of claim 15, wherein the sensor state value is derived from a sensor in a controlled space.

17. The system of claim 16, further comprises an initializer, which uses state time series values as input into the neural network for a running period.

18. The system of claim 17, wherein at least one of the machine learning techniques uses automatic differentiation to calculate connection strengths.

19. A computer-readable storage medium configured with data and instructions which upon execution by a processor perform a method for computing neuron accuracy, the method comprising: initializing values for at least some test neurons in a neural network, the test neurons representing corresponding actual values;

specifying a target neuron in the neural network;
optimizing the neural network using the actual values producing a solved neural network with a target neuron value and test neuron values;
using machine learning techniques to determine connection strengths between the target neuron and the test neurons;
determining quality of the test neuron values by comparing test neuron values in the solved neural network to corresponding actual neuron values; and
calculating accuracy of the target neuron using: quality of the test neuron values, and connection strengths between the target neuron and the at least one test neuron.

20. The computer-readable storage medium of claim 19, wherein the corresponding actual values are sensor values that correspond to test neuron locations.

Patent History
Publication number: 20210383236
Type: Application
Filed: Jun 2, 2021
Publication Date: Dec 9, 2021
Inventors: Troy Aaron Harvey (Brighton, UT), Jeremy David Fillingim (Salt Lake City, UT)
Application Number: 17/336,640
Classifications
International Classification: G06N 3/08 (20060101); G06N 3/04 (20060101); G06F 17/16 (20060101); G05B 13/02 (20060101);