ENERGY EFFICIENT FORWARDING IN AD-HOC WIRELESS NETWORKS

A system for conserving energy in a multi-node network (110) includes nodes (205) configured to organize themselves into tiers (305, 310,315). The nodes (205) are further configured to produce a transmit/receive schedule at a first tier (310) in the network (110) and control the powering-on and powering-off of transmitters and receivers in nodes (205) in a tier adjacent (315) to the first tier (310) according to the transmit/receive schedule.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is a continuation of U.S. patent application Ser. No. 12/253,130 filed Oct. 16, 2008, which, in turn, is a continuation of U.S. patent application Ser. No. 12/174,512 filed Jul. 16, 2008, which, in turn, is a continuation of U.S. patent application Ser. No. 10/328,566 filed Dec. 23, 2002, now U.S. Pat. No. 7,421,257, which, in turn, is a continuation-in-part of U.S. patent application Ser. No. 09/998,946 filed Nov. 30, 2001, now U.S. Pat. No. 7,020,501, the entire contents of all of which are incorporated herein by reference.

GOVERNMENT CONTRACT

The U.S. Government has a paid-up license in this invention and the right in limited circumstances to require the patent owner to license others on reasonable terms as provided for by the terms of Contract No. 2000-DT-CX-K001, awarded by the Department of Justice.

FIELD OF THE INVENTION

The present invention relates generally to ad-hoc, multi-node wireless networks and, more particularly, to systems and methods for implementing energy efficient data forwarding mechanisms in such networks.

BACKGROUND OF THE INVENTION

Recently, much research has been directed towards the building of networks of distributed wireless sensor nodes. Sensor nodes in such networks conduct measurements at distributed locations and relay the measurements, via other sensor nodes in the network, to one or more measurement data collection points. Sensor networks, generally, are envisioned as encompassing a large number (N) of sensor nodes (e.g., as many as tens of thousands of sensor nodes), with traffic flowing from the sensor nodes into a much smaller number (K) of measurement data collection points using routing protocols. These routing protocols conventionally involve the forwarding of routing packets throughout the sensor nodes of the network to distribute the routing information necessary for sensor nodes to relay measurements to an appropriate measurement data collection point.

A key problem with conventional sensor networks is that each sensor node of the network operates for extended periods of time on self-contained power supplies (e.g., batteries or fuel cells). For the routing protocols of the sensor network to operate properly, each sensor node must be prepared to receive and forward routing packets at any time. Each sensor node's transmitter and receiver, thus, conventionally operates in a continuous fashion to enable the sensor node to receive and forward the routing packets essential for relaying measurements from a measuring sensor node to a measurement data collection point in the network. This continuous operation depletes each node's power supply reserves and, therefore, limits the operational life of each of the sensor nodes.

Therefore, there exists a need for mechanisms in a wireless sensor network that enable the reduction of sensor node power consumption while, at the same time, permitting the reception and forwarding of the routing packets necessary to implement a distributed wireless network.

SUMMARY OF THE INVENTION

Systems and methods consistent with the present invention address this need and others by providing mechanisms that enable sensor node transmitters and receivers to be turned off, and remain in a “sleep” state, for substantial periods, thus, increasing the energy efficiency of the nodes. Systems and methods consistent with the present invention further implement transmission and reception schedules that permit the reception and forwarding of packets containing routing, or other types of data, during short periods when the sensor node transmitters and receivers are powered up and, thus, “awake.” The present invention, thus, increases sensor node operational life by reducing energy consumption while permitting the reception and forwarding of the routing messages needed to self-organize the distributed network.

In accordance with the purpose of the invention as embodied and broadly described herein, a method of conserving energy in a node in a wireless network includes receiving a first powering-on schedule from another node in the network, and selectively powering-on at least one of a transmitter and receiver based on the received first schedule.

In another implementation consistent with the present invention, a method of conveying messages in a sensor network includes organizing a sensor network into a hierarchy of tiers, transmitting one or more transmit/receive scheduling messages throughout the network, and transmitting and receiving data messages between nodes in adjacent tiers based on the one or more transmit/receive scheduling messages.

In a further implementation consistent with the present invention, a method of conserving energy in a multi-node network includes organizing the multi-node network into tiers, producing a transmit/receive schedule at a first tier in the network, and controlling the powering-on and powering-off of transmitters and receivers in nodes in a tier adjacent to the first tier according to the transmit/receive schedule.

In yet another implementation consistent with the present invention, a method of forwarding messages at a first node in a network includes receiving scheduling messages from a plurality of nodes in the network, selecting one of the plurality of nodes as a parent node, and selectively forwarding data messages to the parent node based on the received scheduling message associated with the selected one of the plurality of nodes.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, explain the invention. In the drawings,

FIG. 1 illustrates an exemplary network consistent with the present invention;

FIG. 2 illustrates an exemplary sensor network consistent with the present invention;

FIG. 3 illustrates the exemplary sensor network of FIG. 2 organized into tiers consistent with the present invention;

FIG. 4 illustrates exemplary components of a sensor node consistent with the present invention;

FIG. 5 illustrates exemplary components of a monitor point consistent with the present invention;

FIG. 6A illustrates an exemplary monitor point database consistent with the present invention;

FIG. 6B illustrates exemplary monitor point affiliation/schedule data stored in the database of FIG. 6A consistent with the present invention;

FIG. 7A illustrates an exemplary sensor node database consistent with the present invention;

FIG. 7B illustrates exemplary sensor node affiliation/schedule data stored in the database of FIG. 7A consistent with the present invention;

FIG. 8 illustrates an exemplary schedule message consistent with the present invention;

FIG. 9 illustrates exemplary transmit/receive scheduling consistent with the present invention;

FIGS. 10-11 are flowcharts that illustrate parent/child affiliation processing consistent with the present invention;

FIG. 12 is a flowchart that illustrates exemplary monitor point scheduling processing consistent with the present invention;

FIGS. 13-16 are flowcharts that illustrate sensor node schedule message processing consistent with the present invention;

FIG. 17 illustrates an exemplary message transmission diagram consistent with the present invention;

FIG. 18 illustrates exemplary node receiver timing consistent with the present invention;

FIG. 19 illustrates exemplary components of a sensor node consistent with the present invention;

FIG. 20 illustrates exemplary components of a monitor point consistent with the present invention;

FIG. 21A illustrates a first exemplary database consistent with the present invention;

FIG. 21B illustrates an exemplary monitor point table stored in the database of FIG. 21A consistent with the present invention;

FIG. 22A illustrates a second exemplary database consistent with the present invention;

FIG. 22B illustrates an exemplary sensor forwarding table stored in the database of FIG. 22A consistent with the present invention;

FIG. 23 illustrates an exemplary monitor point beacon message consistent with the present invention;

FIGS. 24-25 are flowcharts that illustrate exemplary monitor point beacon message processing consistent with the present invention;

FIG. 26 illustrates an exemplary sensor node beacon message consistent with the present invention;

FIG. 27 is a flowchart that illustrates exemplary sensor node beacon message processing consistent with the present invention;

FIGS. 28-31 are flowcharts that illustrate exemplary sensor node forwarding table update processing consistent with the present invention;

FIG. 32 illustrates an exemplary sensor datagram consistent with the present invention;

FIGS. 33-34 are flowcharts that illustrate exemplary sensor node datagram processing consistent with the present invention;

FIGS. 35-39 are flowcharts that illustrate exemplary sensor node datagram relay processing consistent with the present invention; and

FIGS. 40-43 are flowcharts that illustrate exemplary monitor point datagram processing consistent with the present invention.

DETAILED DESCRIPTION

The following detailed description of the invention refers to the accompanying drawings. The same reference numbers in different drawings identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims.

Systems and methods consistent with the present invention provide mechanisms for conserving energy in wireless nodes by transmitting scheduling messages throughout the nodes of the network. The scheduling messages include time schedules for selectively powering-on and powering-off node transmitters and receivers. Message datagrams and routing messages may, thus, be conveyed throughout the network during appropriate transmitter/receiver power-on and power-off intervals.

Exemplary Network

FIG. 1 illustrates an exemplary network 100, consistent with the present invention. Network 100 may include monitor points 105a-105n connected to sensor network 110 and network 115 via wired 120, wireless 125, or optical connection links (not shown). Network 100 may further include one or more servers 130 interconnected with network 115.

Monitor points 105a-105n may include data transceiver units for transmitting messages to, and receiving messages from, one or more sensors of sensor network 110. Such messages may include routing messages containing network routing data, message datagrams containing sensor measurement data, and schedule messages containing sensor node transmit and receive scheduling data. The routing messages may include identification data for one or more monitor points, and the number of hops to reach each respective identified monitor point, as determined by a sensor node/monitor point that is the source of the routing message. The routing messages may be transmitted as wireless broadcast messages in network 100. The routing messages, thus, permit sensor nodes to determine a minimum hop path to a monitor point in network 100. Through the use of routing messages, monitor points 105a-105n may operate as “sinks” for sensor measurements made at nearby sensor nodes. Message datagrams may include sensor measurement data that may be transmitted to a monitor point 105a-105n for data collection. Message datagrams may be sent from a monitor point to a sensor node, from a sensor node to a monitor point, or from a sensor node to a sensor node.

In one embodiment, monitor points 105a-105n may include data transceiver units for transmitting messages to and from one or more sensors of sensor network 110. Such messages may include beacon messages and message datagrams. Beacon messages may include identification data for one or more monitor points, and the number of hops to reach each respective identified monitor point, as determined by a sensor node/monitor point that is the source of the beacon message. Beacon messages may be transmitted as wireless broadcast messages in network 100. Beacon messages, thus, permit sensor nodes to determine a minimum hop path to a monitor point in network 100. Through the use of beacon messages, monitor points 105a-105n may operate as “sinks” for sensor measurements made at nearby sensor nodes. Message datagrams may be sent from a monitor point to a sensor node, from a sensor node to a monitor point, or from a sensor node to a sensor node. Message datagrams may include path information for transmitting message datagrams, hop by hop, from one node in network 100 to another node in network 100. Message datagrams may further include sensor measurement data that may be transmitted to a monitor point 105a-105n for data collection.

Sensor network 110 may include one or more distributed sensor nodes (not shown) that may organize themselves into an ad-hoc, multi-hop wireless network. Each of the distributed sensor nodes of sensor network 110 may include one or more of any type of conventional sensing device, such as, for example, acoustic sensors, motion-detection sensors, radar sensors, sensors that detect specific chemicals or families of chemicals, sensors that detect nuclear radiation or biological agents, magnetic sensors, electronic emissions signal sensors, thermal sensors, and visual sensors that detect or record still or moving images in the visible or other spectrum. Sensor nodes of sensor network 110 may perform one or more measurements over a sampling period and transmit the measured values via packets, datagrams, cells or the like to monitor points 105a-105n.

Network 115 may include one or more networks of any type, including a Public Land Mobile Network (PLMN), Public Switched Telephone Network (PSTN), local area network (LAN), metropolitan area network (MAN), wide area network (WAN), Internet, or Intranet. The one or more PLMNs may further include packet-switched sub-networks, such as, for example, General Packet Radio Service (GPRS), Cellular Digital Packet Data (CDPD), and Mobile IP sub-networks.

Server 130 may include a conventional computer, such as a desktop, laptop or the like. Server 130 may collect data, via network 115, from each monitor point 105 of network 100 and archive the data for future retrieval.

Exemplary Sensor Network

FIG. 2 illustrates an exemplary sensor network 110 consistent with the present invention. Sensor network 110 may include one or more sensor nodes 205a-205s that may be distributed across a geographic area. Sensor nodes 205a-205s may communicate with one another, and with one or more monitor points 105a-105n, via wireless or wire-line links (not shown), using, for example, packet-switching mechanisms. Using techniques such as those described in patent application Ser. No. 09/999,353, entitled “Systems and Methods for Scalable Routing in Ad-Hoc Wireless Sensor Networks” and filed Nov. 15, 2001 (the disclosure of which is incorporated by reference herein), sensor nodes 205a-205s may organize themselves into an ad-hoc, multi-hop wireless network through the communication of routing messages and message datagrams.

In one embodiment, sensor nodes 205a-205s may organize themselves into an ad-hoc, multi-hop wireless network through the communication of beacon messages and message datagrams. Beacon messages may be transmitted as wireless broadcast messages and may include identification data for one or more monitor points, and the number of hops to reach each respective identified monitor point, as determined by a sensor node/monitor point that is the source of the beacon message. Message datagrams may include path information for transmitting the message datagrams, hop by hop, from one node in network 100 to another node in network 100. Message datagrams may further include sensor measurement data that may be transmitted to a monitor point 105a-105n for data collection.

FIG. 3 illustrates sensor network 110 self-organized into tiers using conventional routing protocols, or the routing protocol described in the above-described patent application Ser. No. 09/999,353. When organized into tiers, messages may be forwarded, hop by hop through the network, from monitor points to sensor nodes, or from individual sensor nodes to monitor points that act as “sinks” for nearby sensor nodes. As shown in the exemplary network configuration illustrated in FIG. 3, monitor point MP1 105a may act as a “sink” for message datagrams from sensor nodes 205a-205e, monitor point MP2 105b may act as a “sink” for message datagrams from sensor nodes 205f-205l, and monitor point MP3 105n may act as a “sink” for message datagrams from sensor nodes 205m-205s.

As further shown in FIG. 3, monitor point MP1 105a may reside in MP1 tier 0 305, sensor nodes 205a-205c may reside in MP1 tier 1 310, and sensor nodes 205d-205e may reside in MP1 tier 2 315. Monitor point MP2 105b may reside in MP2 tier 0 320, sensor nodes 205f-205h may reside in MP2 tier 1 325, sensor nodes 205i-205k may reside in MP2 tier 2 330 and sensor node 205l may reside in MP2 tier 3 335. Monitor point MP3 105n may reside in MP3 tier 0 340, sensor nodes 205m-205o may reside in MP3 tier 1 345, sensor nodes 205p-205q may reside in MP3 tier 2 350 and sensor nodes 205r-205s may reside in MP3 tier 3 355. Each tier shown in FIG. 3 represents an additional hop that data must traverse when traveling from a sensor node to a monitor point, or from a monitor point to a sensor node. At least one node in any tier may act as a “parent” for nodes in the next higher tier (e.g., MP1 Tier 2 315). Thus, for example, sensor node 205a acts as a “parent” node for sensor nodes 205d-205e. Sensor nodes 205d-205e may relay all messages through sensor node 205a to reach monitor point MP1 105a.
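
For illustration only, the following Python sketch (the adjacency list and node labels are hypothetical and are not taken from FIG. 3) shows one way a tier number could be derived for each node as its minimum hop count from a monitor point at tier 0:

    from collections import deque

    def assign_tiers(adjacency, monitor_point):
        """Assign each reachable node a tier equal to its minimum number of
        hops from the given monitor point, which occupies tier 0."""
        tiers = {monitor_point: 0}
        queue = deque([monitor_point])
        while queue:
            node = queue.popleft()
            for neighbor in adjacency.get(node, []):
                if neighbor not in tiers:          # first visit gives the fewest hops
                    tiers[neighbor] = tiers[node] + 1
                    queue.append(neighbor)
        return tiers

    # Hypothetical topology: MP1 hears nodes 205a-205c directly; 205a relays for 205d-205e.
    adjacency = {
        "MP1": ["205a", "205b", "205c"],
        "205a": ["MP1", "205d", "205e"],
        "205b": ["MP1"],
        "205c": ["MP1"],
        "205d": ["205a"],
        "205e": ["205a"],
    }
    print(assign_tiers(adjacency, "MP1"))
    # {'MP1': 0, '205a': 1, '205b': 1, '205c': 1, '205d': 2, '205e': 2}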

Exemplary Sensor Node

FIG. 4 illustrates exemplary components of a sensor node 205 consistent with the present invention. Sensor node 205 may include a transmitter/receiver 405, an antenna 410, a processing unit 415, a memory 420, an optional output device(s) 425, an optional input device(s) 430, one or more sensor units 435a-435n, a clock 440, and a bus 445.

Transmitter/receiver 405 may connect sensor node 205 to a monitor point 105 or another sensor node. For example, transmitter/receiver 405 may include transmitter and receiver circuitry well known to one skilled in the art for transmitting and/or receiving data bursts via antenna 410.

Processing unit 415 may perform all data processing functions for inputting, outputting and processing of data including data buffering and sensor node control functions. Memory 420 may include random access memory (RAM) and/or read only memory (ROM) that provides permanent, semi-permanent, or temporary working storage of data and instructions for use by processing unit 415 in performing processing functions. Memory 420 may also include large-capacity storage devices, such as magnetic and/or optical recording devices. Output device(s) 425 may include conventional mechanisms for outputting data in video, audio and/or hard copy format. For example, output device(s) 425 may include a conventional display for displaying sensor measurement data. Input device(s) 430 may permit entry of data into sensor node 205. Input device(s) 430 may include, for example, a touch pad or keyboard.

Sensor units 435a-435n may include one or more of any type of conventional sensing device, such as, for example, acoustic sensors, motion-detection sensors, radar sensors, sensors that detect specific chemicals or families of chemicals, sensors that detect nuclear radiation or sensors that detect biological agents such as anthrax. Each sensor unit 435a-435n may perform one or more measurements over a sampling period and transmit the measured values via packets, cells, datagrams, or the like to monitor points 105a-105n. Clock 440 may include conventional circuitry for maintaining a time base to enable the maintenance of a local time at sensor node 205. Alternatively, sensor node 205 may derive a local time from an external clock signal, such as, for example, a GPS signal, or from an internal clock synchronized to an external time base.

Bus 445 may interconnect the various components of sensor node 205 and permit them to communicate with one another.

FIG. 19 illustrates exemplary components of a sensor node 205 consistent with the present invention. Sensor node 205 may include a communication interface 1905, an antenna 1910, a processing unit 1915, a memory 1920, an optional output device(s) 1925, an optional input device(s) 1930, an optional geo-location unit 1935, one or more sensor units 1940a-1940n, and a bus 1945.

Communication interface 1905 may connect sensor node 205 to a monitor point 105 or another sensor node. For example, communication interface 1905 may include transceiver circuitry well known to one skilled in the art for transmitting and/or receiving data bursts via antenna 1910.

Processing unit 1915 may perform all data processing functions for inputting, outputting and processing of data including data buffering and sensor node control functions. Memory 1920 provides permanent, semi-permanent, or temporary working storage of data and instructions for use by processing unit 1915 in performing processing functions. Memory 1920 may include large-capacity storage devices, such as magnetic and/or optical recording devices. Output device(s) 1925 may include conventional mechanisms for outputting data in video, audio and/or hard copy format. For example, output device(s) 1925 may include a conventional display for displaying sensor measurement data. Input device(s) 1930 may permit entry of data into sensor node 205. Input device(s) 1930 may include, for example, a touch pad or keyboard.

Geo-location unit 1935 may include a conventional device for determining a geo-location of sensor node 205. For example, geo-location unit 1935 may include a Global Positioning System (GPS) receiver that can receive GPS signals and can determine corresponding geo-locations in accordance with conventional techniques.

Sensor units 1940a-1940n may include one or more of any type of conventional sensing device, such as, for example, acoustic sensors, motion-detection sensors, radar sensors, sensors that detect specific chemicals or families of chemicals, sensors that detect nuclear radiation or sensors that detect biological agents such as anthrax. Each sensor unit 1940a-1940n may perform one or more measurements over a sampling period and transmit the measured values via packets, cells, datagrams, or the like to monitor points 105a-105n.

Bus 1945 may interconnect the various components of sensor node 205 and permit them to communicate with one another.

Exemplary Monitor Point

FIG. 5 illustrates exemplary components of a monitor point 105 consistent with the present invention. Monitor point 105 may include a transmitter/receiver 505, an antenna 510, a processing unit 515, a memory 520, an input device(s) 525, an output device(s) 530, network interface(s) 535, a clock 540, and a bus 545.

Transmitter/receiver 505 may connect monitor point 105 to another device, such as another monitor point or one or more sensor nodes. For example, transmitter/receiver 505 may include transmitter and receiver circuitry well known to one skilled in the art for transmitting and/or receiving data bursts via antenna 510.

Processing unit 515 may perform all data processing functions for inputting, outputting, and processing of data. Memory 520 may include Random Access Memory (RAM) that provides temporary working storage of data and instructions for use by processing unit 515 in performing processing functions. Memory 520 may additionally include Read Only Memory (ROM) that provides permanent or semi-permanent storage of data and instructions for use by processing unit 515. Memory 520 can also include large-capacity storage devices, such as a magnetic and/or optical device.

Input device(s) 525 permits entry of data into monitor point 105 and may include a user interface (not shown). Output device(s) 530 permits the output of data in video, audio, or hard copy format. Network interface(s) 535 interconnects monitor point 105 with network 115. Clock 540 may include conventional circuitry for maintaining a time base to enable the maintenance of a local time at monitor point 105. Alternatively, monitor point 105 may derive a local time from an external clock signal, such as, for example, a GPS signal, or from an internal clock synchronized to an external time base.

Bus 545 interconnects the various components of monitor point 105 to permit the components to communicate with one another.

FIG. 20 illustrates exemplary components of a monitor point 105 consistent with the present invention. Monitor point 105 may include a communication interface 2005, an antenna 2010, a processing unit 2015, a memory 2020, an input device(s) 2025, an output device 2030, network interface(s) 2035 and a bus 2040.

Communication interface 2005 may connect monitor point 105 to another device, such as another monitor point or one or more sensor nodes. For example, communication interface 2005 may include transceiver circuitry well known to one skilled in the art for transmitting and/or receiving data bursts via antenna 2010.

Processing unit 2015 may perform all data processing functions for inputting, outputting, and processing of data. Memory 2020 may include Random Access Memory (RAM) that provides temporary working storage of data and instructions for use by processing unit 2015 in performing processing functions. Memory 2020 may additionally include Read Only Memory (ROM) that provides permanent or semi-permanent storage of data and instructions for use by processing unit 2015. Memory 2020 can also include large-capacity storage devices, such as a magnetic and/or optical device.

Input device(s) 2025 permits entry of data into monitor point 105 and may include a user interface (not shown). Output device(s) 2030 permits the output of data in video, audio, or hard copy format. Network interface(s) 2035 interconnects monitor point 105 with network 115.

Bus 2040 interconnects the various components of monitor point 105 to permit the components to communicate with one another.

Exemplary Monitor Point Database

FIG. 6A illustrates an exemplary database 600 that may be stored in memory 520 of a monitor point 105. Database 600 may include monitor point affiliation/schedule data 605 that includes identifiers of sensor nodes affiliated with monitor point 105, and scheduling data indicating times at which monitor point 105 may transmit to, or receive bursts of data from, affiliated sensor nodes. FIG. 6B illustrates exemplary data that may be contained in monitor point affiliation/schedule data 605. Monitor point affiliation/schedule data 605 may include “affiliated children IDs” data 610 and “Tx/Rx schedule” data 615. “Tx/Rx schedule” data 615 may further include “parent Tx” 620 data, “child-to-parent Tx” data 625, and “next tier activity” data 630.

“Affiliated children IDs” data 610 may include unique identifiers of sensor nodes 205 that are affiliated with monitor point 105 and, thus, from which monitor point 105 may receive messages. “Parent Tx” data 620 may include a time at which monitor point 105 may transmit messages to sensor nodes identified by the “affiliated children IDs” data 610. “Child-to-Parent Tx” data 625 may include times at which sensor nodes identified by “affiliated children IDs” 610 may transmit messages to monitor point 105. “Next Tier Activity” data 630 may include times at which sensor nodes identified by the “affiliated children IDs” data 610 may transmit messages to, and receive messages from, their affiliated children.

FIG. 21A illustrates an exemplary database 2100 that may be stored in memory 2020 of a monitor point 105. Database 2100 may include a monitor point table 2105 that further includes data regarding sensor nodes 205 in network to which monitor point 105 can transmit message datagrams. FIG. 21B illustrates an exemplary monitor point table 2105 that may include one or more table entries 2110. Each entry 2110 in monitor point table 2105 may include a number of fields, including a “sensor ID” field 2115, a “geo-location” field 2120, a “sensor message” field 2125, a “# of Nodes” field 2130, a “1st Hop” field 2135a through a “Nth Hop” field 2135N, and a “Seq #” field 2140.

“Sensor ID” field 2115 may indicate a unique identifier for a sensor node 205 in sensor network 110. “Geo-location” field 2120 may indicate a geographic location associated with a sensor node 205 identified by a corresponding “sensor ID” field 2115. “Sensor message” field 2125 may include a message, such as, for example, data from measurements performed at the sensor node 205 identified by a corresponding “sensor ID” field 2115. “# of Nodes” field 2130 may indicate the number of hops across sensor network 110 to reach the sensor node 205 identified by a corresponding “sensor ID” field 2115 from monitor point 105. “1st Hop” field 2135a through “Nth Hop” field 2135N may indicate the unique identifier of each sensor node 205 in network 110 that a message datagram must hop through to reach the sensor node 205 identified by a corresponding “sensor ID” field 2115 from monitor point 105. “Seq #” field 2140 may include startup number, counter number, and time stamp sub-fields (not shown) corresponding to sequencing data that may be extracted from received beacon messages (see FIG. 26 below).

Exemplary Sensor Node Database

FIG. 7A illustrates an exemplary database 700 that may be stored in memory 420 of a sensor node 205. Database 700 may include sensor affiliation/schedule data 705 that may further include data indicating which sensor nodes are affiliated with sensor node 205 and indicating schedules for sensor node 205 to transmit and receive messages.

FIG. 7B illustrates exemplary sensor affiliation/schedule data 705. Sensor affiliation/schedule data 705 may include “designated parent ID” data 710, “parent's schedule” data 715, “derived schedule” data 720, and “affiliated children IDs” data 725. “Designated parent ID” data 710 may include a unique identifier that identifies the “parent” node, in a lower tier of sensor network 110, to which sensor node 205 forwards messages. “Parent's schedule” data 715 may further include “parent Tx” data 620, “child-to-parent Tx” data 625 and “next tier activity” data 630. “Derived schedule” data 720 may further include “this node Tx” data 730, “children-to-this node Tx” data 735, and “this node's next tier activity” data 740. “This node Tx” data 730 may indicate a time at which sensor node 205 forwards messages to sensor nodes identified by “affiliated children IDs” data 725. “Children-to-this node Tx” data 735 may indicate times at which sensor nodes identified by “affiliated children IDs” data 725 may forward messages to sensor node 205. “This node's next tier activity” 740 may indicate one or more time periods allocated to sensor nodes in the next higher tier for transmitting and receiving messages.

FIG. 22A illustrates an exemplary database 2200 that may be stored in memory 1920 of a sensor node 205. Database 2200 may include a sensor forwarding table 2205 that further includes data for forwarding beacon messages and/or message datagrams received at a sensor node 205.

FIG. 22B illustrates an exemplary sensor forwarding table 2205 that may include one or more table entries 2210. Each entry 2210 in sensor forwarding table 2205 may include a number of fields, including a “Use?” field 2215, a “time stamp” field 2220, a “monitor point ID” field 2225, a “Seq #” field 2230, a “next hop” field 2235, a “# of hops” field 2240 and a “valid” field 2245.

“Use?” field 2215 may identify the “monitor point ID” field 2225 that sensor node 205 will use to identify the monitor point 105 to which it will send all of its message datagrams. The identified monitor point may include the monitor point that has the least number of hops to reach from sensor node 205. “Time stamp” field 2220 may indicate a time associated with each entry 2210 in sensor forwarding table 2205. “Monitor point ID” field 2225 may include a unique identifier that identifies a monitor point 105 in network 100 associated with each entry 2210 in forwarding table 2205. “Seq #” field 2230 may include the sequence number of the most recent beacon message received from the monitor point 105 identified in the corresponding “monitor point ID” field 2225. “Seq #” field 2230 may further include “startup number,” “counter number,” and “time stamp” sub-fields (not shown). The “startup number” sub-field may include a number indicating how many times the monitor point 105 identified in the corresponding “monitor point ID” field 2225 has been powered up. The “counter number” sub-field may include a number indicating the number of times the monitor point 105 identified in the corresponding “monitor point ID” field 2225 has sent a beacon message. The “time stamp” sub-field may include a time at which the monitor point 105 identified in “monitor point ID” field 2225 sent a beacon message from which the data in the corresponding entry 2210 was derived. Monitor point 105 may derive the time from an external clock signal, such as, for example, a GPS signal, or from an internal clock synchronized to an external time base.

“Next hop” field 2235 may include an identifier for a neighboring sensor node from which the sensor node 205 received information concerning the monitor point 105 identified by “monitor point ID” field 2225. The “# of hops” field 2240 may indicate the number of hops from sensor node 205 to reach the monitor point 105 identified by the corresponding “monitor point ID” field 2225. “Valid” field 2245 may indicate whether data stored in the fields of the corresponding table entry 2210 should be propagated in beacon messages sent from sensor node 205.
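
For illustration only, one entry 2210 of such a forwarding table might be represented as follows in Python; the attribute names are hypothetical, and the comments map them to the fields of FIG. 22B:

    from dataclasses import dataclass

    @dataclass
    class ForwardingEntry:
        """One row of a sensor node's forwarding table (cf. entry 2210)."""
        monitor_point_id: str      # "monitor point ID" field 2225
        startup_number: int        # "Seq #" field 2230, "startup number" sub-field
        counter_number: int        # "Seq #" field 2230, "counter number" sub-field
        time_stamp: float          # "time stamp" field 2220 (local receive time)
        next_hop: str              # "next hop" field 2235
        num_hops: int              # "# of hops" field 2240
        valid: bool = True         # "valid" field 2245
        use: bool = False          # "Use?" field 2215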

Exemplary Schedule Message

FIG. 8 illustrates an exemplary schedule message 800 that may be transmitted from a monitor point 105 or sensor node 205 for scheduling message transmit and receive times within sensor network 110. Schedule message 800 may include a number of data fields, including “transmitting node ID” data 805, “parent Tx” data 620, and “next-tier node transmit schedule” data 810. “Next-tier node transmit schedule” data 810 may further include “child-to-parent Tx” data 625 and “next tier activity” data 630. “Transmitting node ID” data 805 may include a unique identifier of the monitor point 105 or sensor node 205 originating the schedule message 800.
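
For illustration only, schedule message 800 might be represented as follows in Python; the (start, duration) encoding of the time periods is an assumption, since the patent does not prescribe a concrete wire format:

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class ScheduleMessage:
        """A minimal in-memory rendering of schedule message 800."""
        transmitting_node_id: str                      # "transmitting node ID" data 805
        parent_tx: Tuple[float, float]                 # "parent Tx" data 620
        child_to_parent_tx: List[Tuple[float, float]]  # "child-to-parent Tx" data 625, one window per child
        next_tier_activity: List[Tuple[float, float]]  # "next tier activity" data 630, one window per child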

Exemplary Transmit/Receive Scheduling

FIG. 9 illustrates exemplary transmit/receive scheduling that may be employed at each sensor node 205 of network 110 according to schedule messages 800 received from “parent” nodes in a lower tier. The first time period shown on the scheduling timeline, Parent Tx time 620, may include the time period allocated by a “parent” node to transmitting messages from the “parent” node to its affiliated children. The time periods “child-to-parent Tx” 625 may include time periods allocated to each affiliated child of a parent node for transmitting messages to the parent node. During the “child-to-parent Tx” 625 time periods, the receiver of the parent node may be turned on to receive messages from the affiliated children.

The “next tier activity” 630 may include time periods allocated to each child of a parent node for transmitting messages to, and receiving messages from, each child's own children nodes. From the time periods allocated to the children of a parent node, each child may construct its own derived schedule. This derived schedule may include a time period, “this node Tx” 730 during which the child node may transmit to its own affiliated children. The derived schedule may further include time periods, “children-to-this node Tx” 735 during which these affiliated children may transmit messages to the parent's child node. The derived schedule may additionally include time periods, designated “this node's next tier activity” 740, that may be allocated to this node's children so that they may, in turn, construct their own derived schedule for their own affiliated children.
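
The patent does not fix a particular slot-allocation rule. For illustration only, the following sketch assumes equal-length slots and shows how a child node might subdivide the “next tier activity” window allocated to it into its own derived schedule:

    def derive_schedule(window_start, window_length, num_children):
        """Split an allocated "next tier activity" window into this node's own
        transmit slot ("this node Tx" 730), one receive slot per affiliated child
        ("children-to-this node Tx" 735), and a remainder reserved as the
        children's own next tier activity ("this node's next tier activity" 740).
        Equal-length slots are assumed purely for illustration."""
        slots_needed = 1 + num_children            # this node's Tx slot plus one slot per child
        slot = window_length / (2 * slots_needed)  # reserve half the window for the next tier down
        t = window_start
        this_node_tx = (t, slot)
        t += slot
        children_to_this_node_tx = []
        for _ in range(num_children):
            children_to_this_node_tx.append((t, slot))
            t += slot
        next_tier_activity = (t, window_start + window_length - t)
        return this_node_tx, children_to_this_node_tx, next_tier_activity

    # Example: a 60-second window starting at t = 100 seconds, shared with two children.
    print(derive_schedule(100.0, 60.0, 2))
    # ((100.0, 10.0), [(110.0, 10.0), (120.0, 10.0)], (130.0, 30.0))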

Exemplary Parent/Child Affiliation Processing

FIGS. 10-11 are flowcharts that illustrate exemplary processing, consistent with the present invention, for affiliating “child” sensor nodes 205 with “parent” nodes in a lower tier. Such “parent” nodes may include other sensor nodes 205 in sensor network 110 or monitor points 105. As one skilled in the art will appreciate, the method exemplified by FIGS. 10 and 11 can be implemented as a sequence of instructions and stored in memory 420 of sensor node 205 for execution by processing unit 415.

An unaffiliated sensor node 205 may begin parent/child affiliation processing by turning on its receiver 405 and continuously listening for schedule message(s) transmitted from a lower tier of sensor network 110 [step 1005] (FIG. 10). Sensor node 205 may be unaffiliated with any “parent” node if it has recently been powered on. Sensor node 205 may also be unaffiliated if it has stopped receiving schedule messages from its “parent” node for a specified time period. If one or more schedule messages are received [step 1010], unaffiliated sensor node 205 may select a neighboring node to designate as a parent [step 1015]. For example, sensor node 205 may select a neighboring node whose transmit signal has the greatest strength or the least bit error rate (BER). Sensor node 205 may insert the “transmitting node ID” data 805 from the corresponding schedule message 800 of the selected neighboring node into the “designated parent ID” data 710 of database 700 [step 1020]. Sensor node 205 may then update database 700's “parent's schedule” data 715 with “parent Tx” data 620, “child-to-parent Tx” data 625, and “next tier activity” data 630 from the corresponding schedule message 800 of the selected neighboring node [step 1025].

Sensor node 205 may determine if any affiliation messages have been received from sensor nodes residing in higher tiers [step 1105] (FIG. 11). If so, sensor node 205 may store the node identifiers contained in the affiliation messages in database 700's “affiliated children IDs” data 725 [step 1110]. Sensor node 205 may also transmit an affiliation message to the node identified by “designated parent ID” data 710 in database 700 [step 1115]. Sensor node 205 may further determine a derived schedule from the “next tier activity” data 630 in database 700 [step 1120] and store it in the “derived schedule” data 720.
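
For illustration only, the parent-selection rule described above might look as follows; the candidate structure, with a measured signal strength and bit error rate attached to each received schedule message, is an assumption:

    def select_parent(candidates):
        """Pick the neighbor whose schedule message arrived with the greatest
        signal strength, breaking ties with the lowest bit error rate.
        Each candidate is a dict with hypothetical keys 'transmitting_node_id',
        'signal_strength_dbm', and 'bit_error_rate'. Returns None if no schedule
        messages were heard."""
        if not candidates:
            return None
        best = max(candidates,
                   key=lambda c: (c["signal_strength_dbm"], -c["bit_error_rate"]))
        return best["transmitting_node_id"]

    print(select_parent([
        {"transmitting_node_id": "205a", "signal_strength_dbm": -62, "bit_error_rate": 1e-5},
        {"transmitting_node_id": "205b", "signal_strength_dbm": -71, "bit_error_rate": 1e-6},
    ]))  # prints: 205a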

Exemplary Monitor Point Message Processing

FIG. 12 is a flowchart that illustrates exemplary processing, consistent with the present invention, for receiving affiliation messages and transmitting schedule messages at a monitor point 105. As one skilled in the art will appreciate, the method exemplified by FIG. 12 can be implemented as a sequence of instructions and stored in memory 520 of monitor point 105 for execution by processing unit 515.

Monitor point message processing may begin with a monitor point 105 receiving one or more affiliation messages from neighboring sensor nodes [step 1205] (FIG. 12). Monitor point 105 may insert the node identifiers from the received affiliation message(s) into database 600's “affiliated children IDs” data 610 [step 1210]. Monitor point 105 may construct the “Tx/Rx schedule” 615 based on the number of affiliated children indicated in “affiliated children IDs” data 610 [step 1215]. Monitor point 105 may then transmit a schedule message 800 to sensor nodes identified by “affiliated children IDs” data 610 containing monitor point 105's “transmitting node ID” data 805, “parent Tx” data 620, and “next-tier node transmit schedule” data 810 [step 1220]. Schedule message 800 may be transmitted periodically using conventional multiple access mechanisms, such as, for example, Carrier Sense Multiple Access (CSMA). Subsequent to transmission of schedule message 800, monitor point 105 may determine if acknowledgements (ACKs) have been received from all affiliated children [step 1225]. If not, monitor point 105 may re-transmit the schedule message 800 at regular intervals until ACKs are received from all affiliated children [step 1230]. In this manner, monitor point 105 coordinates and schedules the power on/off intervals of the sensor nodes with which it is associated (i.e., the nodes to which it transmits data and from which it receives data).
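
For illustration only, the retransmit-until-acknowledged behavior of steps 1220-1230 might be sketched as follows; send_schedule() and ack_received() are hypothetical stand-ins for the monitor point's radio interface:

    import time

    def distribute_schedule(children, send_schedule, ack_received, retry_interval=5.0):
        """Broadcast a schedule message and re-send it at regular intervals until
        every affiliated child has acknowledged it (cf. steps 1220-1230)."""
        send_schedule()
        pending = {child for child in children if not ack_received(child)}
        while pending:
            time.sleep(retry_interval)       # wait, then retransmit to the stragglers
            send_schedule()
            pending = {child for child in pending if not ack_received(child)}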

Exemplary Message Reception/Transmission Processing

FIGS. 13-16 are flowcharts that illustrate exemplary processing, consistent with the present invention, for receiving and/or transmitting messages at a sensor node 205. As one skilled in the art will appreciate, the method exemplified by FIGS. 13-16 can be implemented as a sequence of instructions and stored in memory 420 of sensor node 205 for execution by processing unit 415. The exemplary reception and transmission of messages at a sensor node as illustrated in FIGS. 13-16 is further demonstrated with respect to the exemplary messages transmission diagram illustrated in FIG. 17.

Sensor node 205 (“This node” 1710 of FIG. 17) may begin processing by determining if it is the next parent transmit time as indicated by clock 440 and the “parent Tx” data 620 of database 700 [step 1305]. If so, sensor node 205 may turn on receiver 405 [step 1310] (FIG. 13) and listen for messages transmitted from a parent (see also “Parent Node” 1705 of FIG. 17). If no messages are received, sensor node 205 determines if a receive timer has expired [step 1405] (FIG. 14). The receive timer may indicate a maximum time period that sensor node 205 (see “This Node” 1710 of FIG. 17) may listen for messages before turning off receiver 405. If the receive timer has not expired, processing may return to step 1315. If the receive timer has expired, sensor node 205 may turn off receiver 405 [step 1410]. If messages have been received (see “Parent TX” 620 of FIG. 17), sensor node 205 may, optionally, transmit an ACK to the parent node that transmitted the messages [step 1320]. Sensor node 205 may then turn off receiver 405 [step 1325].

Inspecting the received messages, sensor node 205 may determine if sensor node 205 is the destination of each of the received messages [step 1330]. If so, sensor node 205 may process the message [step 1335]. If not, sensor node 205 may determine a next hop in sensor network 110 for the message using conventional routing tables, and place the message in a forwarding queue [step 1340]. At step 1415, sensor node 205 may determine if it is time to transmit messages to the parent node as indicated by “child-to-parent Tx” data 625 of database 700 (see “child-to-parent Tx” 625 of FIG. 17). If not, sensor node 205 may sleep until clock 440 indicates that it is time to transmit messages to the parent node [step 1420]. If clock 440 and “child-to-parent Tx” data 625 indicate that it is time to transmit messages to the parent node, sensor node 205 may turn on transmitter 405 and transmit all messages intended to go to the node indicated by the “designated parent ID” data 710 of database 700 [step 1425]. After all messages are transmitted to the parent node, sensor node 205 may turn off transmitter 405 [step 1430].

Sensor node 205 may create a new derived schedule for its children identified by “affiliated children IDs” data 725, based on the “parent's schedule” 715, and may then store the new derived schedule in the “derived schedule” data 720 of database 700 [step 1435]. Sensor node 205 may inspect the “this node Tx” data 730 of database 700 to determine if it is time to transmit to the sensor nodes identified by the “affiliated children IDs” data 725 [step 1505] (FIG. 15). If so, sensor node 205 may turn on transmitter 405 and transmit messages, including schedule messages, to its children [step 1510] (see “This Node Tx” 730, FIG. 17). For each transmitted message, sensor node 205 may, optionally, determine if an ACK is received [step 1515]. If not, sensor node 205 may further, optionally, re-transmit the corresponding message at a regular interval until an ACK is received [step 1520]. When all ACKs are received, sensor node 205 may turn off transmitter 405 [step 1525]. Sensor node 205 may then determine if it is time for its children to transmit to sensor node 205 as indicated by clock 440 and “children-to-this node Tx” data 735 of database 700 [step 1605] (FIG. 16). If so, sensor node 205 may turn on receiver 405 and receive one or more messages from the children identified by the “affiliated children IDs” data 725 of database 700 [step 1610] (see “Children-to-this Node Tx” 735, FIG. 17). Sensor node 205 may then turn off receiver 405 [step 1615] and processing may return to step 1305 (FIG. 13). In this manner, sensor nodes may power on and off their transmitters and receivers at appropriate times to conserve energy, while still performing their intended functions in network 100.
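
For illustration only, the dispatch decision of steps 1330-1340 might be sketched as follows; the message layout (a dict with a 'destination_id' key) is an assumption:

    def handle_received_messages(messages, my_node_id, forwarding_queue, process):
        """Consume messages addressed to this node and queue the rest for relay
        during the next scheduled transmit opportunity (cf. steps 1330-1340)."""
        for message in messages:
            if message["destination_id"] == my_node_id:
                process(message)                    # this node is the destination
            else:
                forwarding_queue.append(message)    # forward later, per the schedule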

Exemplary Receiver Timing

FIG. 18 illustrates exemplary receiver timing when monitor points 105 or sensor nodes 205 of network 100 use internal clocks that may have inherent “clock drift.” “Clock drift” occurs when an internal clock runs faster or slower than the true elapsed time and may be inherent in many types of internal clocks employed in monitor points 105 or sensor nodes 205. “Clock drift” may be taken into account when scheduling the time at which a node's receiver must be turned on, since the transmitting node and the receiving node may both have drifting clocks. As shown in FIG. 18, Tnominal 1805 represents the next time at which a receiver must be turned on based on scheduling data contained in the schedule message received from a parent node. An “Rx Drift Window” 1810 exists around this time, which represents Tnominal plus or minus the “Max Rx Drift” 1815 for this node over the amount of time remaining until Tnominal. If the transmitting node has zero clock drift, the receiving node should, thus, wake up at the beginning of its “Rx Drift Window” 1810.

The clock at the transmitting node may also incur clock drift, “Max Tx Drift” 1820, that must be accounted for at the receiving node when turning on and off the receiver. The receiving node should, thus, turn on its receiver at a local clock time that is “Max Tx Drift” 1820 plus “Max Rx Drift” 1815 before Tnominal. The receiving node should also turn off its receiver at a local clock time that is “Max Rx Drift” 1815 plus “Max Tx Drift” 1820 plus a maximum estimated time to receive a packet from the transmitting node (TRX 1825) after Tnominal 1805. TRX 1825 may include packet transmission time and packet propagation time. By taking into account maximum estimated clock drift at both the receiving node and transmitting node, monitor points 105 and sensor nodes 205 of sensor network 110 may successfully implement transmit/receive scheduling as described above with respect to FIGS. 1-17.
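
For illustration only, the receiver on/off times described above reduce to the following calculation (times are in seconds, and the numeric values in the example are hypothetical):

    def receive_window(t_nominal, max_rx_drift, max_tx_drift, t_rx):
        """Compute when to power the receiver on and off around a scheduled
        reception time Tnominal, padding for worst-case clock drift at both ends
        of the link plus the maximum packet reception time TRX."""
        turn_on = t_nominal - (max_rx_drift + max_tx_drift)
        turn_off = t_nominal + max_rx_drift + max_tx_drift + t_rx
        return turn_on, turn_off

    # Example: 10 ms of possible drift on each side and a 50 ms worst-case packet.
    print(receive_window(t_nominal=120.0, max_rx_drift=0.010, max_tx_drift=0.010, t_rx=0.050))
    # prints approximately (119.98, 120.07)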

Exemplary Monitor Point Beacon Message Processing

FIG. 23 illustrates an exemplary beacon message 2300 that may be transmitted from a monitor point 105. Beacon message 2300 may include a number of fields, including a “transmitter node ID” field 2305, a “checksum” field 2310, a “NUM” field 2315, a monitor point “D(i) sequence #” field 2320 and a “# of hops to D(i)” field 2325.

“Transmitter node ID” field 2305 may include a unique identifier that identifies the node in network 100 that is the source of beacon message 2300. “Checksum” field 2310 may include any type of conventional error detection value that can be used to determine the presence of errors or corruption in beacon message 2300. “NUM” field 2315 may indicate the number of different monitor points 105 that are described in beacon message 2300. When beacon message 2300 is sent directly from a monitor point 105, “NUM” field 2315 can be set to one, indicating that the message describes only a single monitor point. “D(i) Sequence #” field 2320 may include a “startup number” sub-field 2330, a “counter number” sub-field 2335, and an optional “time stamp” sub-field 2340 corresponding to the monitor point 105 identified by “transmitter node ID” field 2305. “Startup number” sub-field 2330 may include a large number of data bits, such as, for example, 32 bits and may be stored in memory 2020. “Startup number” 2330 may be set to zero when monitor point 105 is initially manufactured. At every power-up of monitor point 105, processing unit 2015 can read the “startup number” stored in memory 2020, increment the number, and store the incremented startup number back in memory 2020. “Startup number” 2330, thus, maintains a log of how many times monitor point 105 has been powered up.

“Counter number” sub-field 2335 may be set to zero whenever monitor point 105 powers up. “Counter number” sub-field 2335 may further be incremented by one each time monitor point 105 sends a beacon message 2300. “Startup number” sub-field 2330 combined with “counter number” sub-field 2335 may, thus, provide a unique determination of which beacon message 2300 has been constructed and sent at a later time than other beacon messages. “Time stamp” sub-field 2340 may include a time at which the monitor point 105 sends beacon message 2300. “# of hops to D(i)” field 2325 may be set to zero, indicating that beacon message 2300 has been sent directly from monitor point 105.
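
For illustration only, the ordering that the combined startup and counter numbers provide can be expressed as a simple lexicographic comparison:

    def is_newer(startup_a, counter_a, startup_b, counter_b):
        """Return True if a beacon carrying (startup_a, counter_a) was generated
        after one carrying (startup_b, counter_b): a later power-up always wins,
        and within the same power-up the higher counter number wins."""
        return (startup_a, counter_a) > (startup_b, counter_b)

    print(is_newer(5, 2, 4, 900))   # True: a later power-up outranks any counter value
    print(is_newer(5, 2, 5, 3))     # False: same power-up, older counter value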

FIGS. 24-25 are flowcharts that illustrate exemplary processing, consistent with the present invention, for constructing and transmitting beacon messages from a monitor point 105. As one skilled in the art will appreciate, the method exemplified by FIGS. 24-25 can be implemented as a sequence of instructions and stored in memory 2020 of monitor point 105 for execution by processing unit 2015.

Monitor point 105 may begin processing when monitor point 105 powers up from a powered down state [step 2405]. At power-up, monitor point 105 may retrieve an old “startup number” 2330 stored in memory 2020 [step 2410] and may increment the retrieved “startup number” 2330 by one [step 2415]. Monitor point 105 may then store the incremented “startup number” 2330 in memory 2020 [step 2420].

Monitor point 105 may determine if an interval timer is equal to a value P [step 2425]. The interval timer may be implemented, for example, in processing unit 2015. Value P may be preset at manufacture, or may be entered or changed via input device(s) 2025. If the interval timer is equal to the value P, then monitor point 105 may formulate a beacon message 2300 that may include the “transmitter node ID” field 2305 set to monitor point 105's unique identifier, “NUM” field 2315 set to one, “startup number” sub-field 2330 set to the “startup number” currently stored in memory 2020, “counter number” sub-field 2335 set to the “counter number” currently stored in memory 2020, “time stamp” sub-field 2340 set to a current time, and the “# of hops to D(i)” field 2325 set to zero [step 2430]. Monitor point 105 may then calculate a checksum value of the formulated message and store the resultant checksum in “checksum” field 2310 [step 2505] (FIG. 25). Monitor point 105 may transmit the formulated beacon message 2300 [step 2510] and increment “counter number” 2335 stored in memory 2020 [step 2515]. Processing may repeat at step 2425 until power down of monitor point 105.
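
For illustration only, the beacon formulated at step 2430 might be built as follows; CRC-32 stands in for the unspecified conventional error detection value, and the dictionary encoding is an assumption:

    import time
    import zlib

    def build_monitor_point_beacon(node_id, startup_number, counter_number):
        """Build one monitor point beacon message (cf. FIG. 23 and step 2430)."""
        beacon = {
            "transmitter_node_id": node_id,    # field 2305
            "num": 1,                          # field 2315: describes this monitor point only
            "startup_number": startup_number,  # sub-field 2330
            "counter_number": counter_number,  # sub-field 2335
            "time_stamp": time.time(),         # sub-field 2340
            "hops_to_di": 0,                   # field 2325: sent directly from the monitor point
        }
        beacon["checksum"] = zlib.crc32(repr(sorted(beacon.items())).encode())  # field 2310
        return beacon

    # Each transmission increments counter_number; each power-up increments
    # startup_number and resets counter_number to zero.
    print(build_monitor_point_beacon("MP1", startup_number=7, counter_number=0))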

Exemplary Sensor Node Beacon Message Processing

FIG. 26 illustrates an exemplary beacon message 2600 that may be transmitted from a sensor node 205. Beacon message 2600 may include any number of fields, including “transmitter node ID” field 2305, “checksum” field 2310, “NUM” field 2315, monitor point “D(i) identifier” fields 2605a-2605n, monitor point “D(i) sequence #” fields 2610a-2610n, and monitor point “# of Hops to D(i)” fields 2615a-2615n.

“D(i) identifier” fields 2605a-2605n may identify monitor points 105 from which a sensor node 205 has received beacon messages 2300. “D(i) sequence #” fields 2610a-2610n may include “startup number” sub-fields 2620a-2620n, “counter number” sub-fields 2625a-2625n, and “time stamp” sub-fields 2630a-2630n associated with a monitor point 105 identified by a corresponding “D(i) identifier” field 2605. “# of hops to D(i)” fields 2615a-2615n may indicate the number of hops in sensor network 110 to reach the monitor point 105 identified by the corresponding “D(i) identifier” field 2605.

FIG. 27 is a flowchart that illustrates exemplary processing, consistent with the present invention, for constructing and transmitting a beacon message 2600 at a sensor node 205. As one skilled in the art will appreciate, the method exemplified by FIG. 27 can be implemented as a sequence of instructions and stored in memory 1920 of sensor node 205 for execution by processing unit 1915.

Sensor node 205 may begin processing by setting “transmitter node ID” field 2305 to sensor node 205's unique identifier [step 2705]. Sensor node 205 may further set “NUM” field 2315 to the number of entries in sensor forwarding table 2205 for which the “valid” field 2245 equals one [step 2710]. For each valid entry 2210 in sensor forwarding table 2205, sensor node 205 may increment the “# of hops” field 2240 by one and copy information from the entry 2210 into a corresponding field of beacon message 2600 [step 2715].

Sensor node 205 may then calculate a checksum of beacon message 2600 and store the calculated value in “checksum” field 2310 [step 2720]. Sensor node 205 may then transmit beacon message 2600 every s seconds, repeating steps 2705-2720 for each transmitted beacon message 2600 [step 2725]. The value s may be set at manufacture or may be entered or changed via input device(s) 1930. Every multiple m of s seconds, sensor node 205 may inspect the “time stamp” field 2220 of each entry 2210 of sensor forwarding table 2205 [step 2730]. For each entry 2210 of sensor forwarding table 2205 that includes a field that was modified more than z seconds in the past, sensor node 205 may set that entry's “valid” field 2245 to zero [step 2735], thus, “aging out” that entry.
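
For illustration only, the beacon construction and the aging-out of stale entries might be sketched as follows; forwarding table entries are assumed to be dicts keyed like the fields of FIG. 22B:

    import time

    def build_sensor_beacon(node_id, forwarding_table):
        """Build a sensor node beacon (cf. FIG. 26) from the valid entries of the
        forwarding table, advertising one additional hop for the link through
        this node (cf. steps 2705-2715)."""
        valid_entries = [entry for entry in forwarding_table if entry["valid"]]
        beacon = {"transmitter_node_id": node_id, "num": len(valid_entries), "monitor_points": []}
        for entry in valid_entries:
            beacon["monitor_points"].append({
                "d_identifier": entry["monitor_point_id"],
                "startup_number": entry["startup_number"],
                "counter_number": entry["counter_number"],
                "num_hops": entry["num_hops"] + 1,   # one more hop to go through this node
            })
        return beacon

    def age_out(forwarding_table, max_age_z, now=None):
        """Invalidate entries whose time stamp is older than z seconds (cf. step 2735)."""
        now = time.time() if now is None else now
        for entry in forwarding_table:
            if now - entry["time_stamp"] > max_age_z:
                entry["valid"] = False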

Exemplary Sensor Node Forwarding Table Update Processing

FIGS. 28-31 are flowcharts that illustrate exemplary processing, consistent with the present invention, for receiving beacon messages from monitor points/sensor nodes and updating corresponding entries in sensor forwarding table 2105. As one skilled in the art will appreciate, the method exemplified by FIGS. 28-31 can be implemented as a sequence of instructions and stored in memory 1920 of sensor node 205 for execution by processing unit 1915.

To begin processing, sensor node 205 may receive a transmitted beacon message 2300/2600 from either a monitor point 105 or another sensor node [step 2805]. Sensor node 205 may then calculate a checksum of the received beacon message 2300/2600 and compare the calculated checksum value with the message's “checksum” field 2310 [step 2810]. Sensor node 205 determines if the checksums agree, indicating that no errors or corruption of the beacon message 2300/2600 occurred during transmission [step 2815]. If the checksums do not agree, sensor node 205 may discard the message [step 2820]. If the checksums agree, sensor node 205 may inspect sensor forwarding table 2205 for any entries 2210 with the “next hop” field 2235 equal to the received message's “transmitter node ID” field 2305 [step 2825]. Sensor node 205 may then set the “valid” field 2245 equal to zero for all such entries with the “next hop” field 2235 equal to the received message's “transmitter node ID” field 2305 [step 2830].

Sensor node 205 may inspect the received message's “NUM” field 2315 to determine the number of monitor points described in the beacon message 2300/2600 [step 2835]. Sensor node 205 may then set a counter value i to 1 [step 2840]. Sensor node 205 may further extract the monitor point “D(i) identifier” field 2605, the “D(i) sequence #” field 2610, and the “# of hops to D(i)” field 2615, corresponding to the counter value i, from beacon message 2300/2600 [step 2905] (FIG. 29).

Sensor node 205 may inspect the “monitor point ID” field 2225 of forwarding table 2205 to determine if there is a table entry 2210 corresponding to the message “D(i) identifier” field 2605 [step 2910]. If no such table entry 2210 exists, sensor node 205 may create a new entry 2210 in forwarding table 2205 for monitor point D(i) [step 2915] and processing may proceed to step 3015 below. If there is a table entry 2210 corresponding to monitor point D(i), sensor node 205 may compare the beacon message “# of hops to D(i)” field 2615 with the “# of hops” field 2240 in forwarding table 2205 [step 2920]. If the message “# of hops to D(i)” field 2615 is less than, or equal to, the “# of hops” field 2240 of forwarding table 2205, then processing may proceed to step 3015 below. If the message “# of hops to D(i)” field 2615 is greater than the “# of hops” field 2240, then sensor node 205 may determine if the “valid” field 2245 is set to zero for the table entry 2210 that includes the “monitor point ID” field 2225 that is equal to D(i) [step 2930]. If the “valid” field 2245 is equal to one, indicating that the entry is valid, processing may proceed with step 3115 below. If the “valid” field 2245 is equal to zero, sensor node 205 may then determine if the message “startup number” sub-field 2620 is greater than table 2205's “startup number” sub-field of “Seq #” field 2230 [step 3005]. If so, processing may continue with step 3015 below. If not, sensor node 205 may determine if the message “startup number” sub-field 2620 is equal to table 2205's “startup number” sub-field of “Seq #” field 2230 and the message “counter number” sub-field 2625 is greater than table 2205's “counter number” sub-field of “Seq #” field 2230 [step 3010]. If not, processing may continue with step 3115 below. If so, sensor node 205 may insert the message's “D(i) sequence #” field 2610 into table 2205's “Seq #” field 2230 [step 3015]. Sensor node 205 may further insert the message's “# of hops to D(i)” field 2615 into the “# of hops” field 2240 of table 2205 [step 3020]. Sensor node 205 may also insert the message's “transmitter node ID” field 2305 into the “next hop” field 2235 of table 2205 [step 3025].

Sensor node 205 may further set the “valid” flag 2145 for the table entry 2110 corresponding to the monitor point identifier D(i) to one [step 3105] and time stamp the entry 2110 with a local clock, storing the time stamp in “time stamp” field 2120 [step 3110]. Sensor node 205 may increment the counter value i by one [step 3115] and determine if the counter value i is equal to the message's “NUM” field 2315 plus one [step 3120]. If not, processing may return to step 2905. If so, sensor node 205 may set the “Use?” field 2115 for all entries 2110 in forwarding table 2105 to zero [step 3125]. Sensor node 205 may inspect forwarding table 2105 to identify an entry 2110 with the “valid” flag 2145 set to one and that further has the smallest value in the “# of Hops” field 2140 [step 3130]. Sensor node 205 may then set the “Use?” field 2115 of the identified entry to one [step 3135].
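
The table maintenance of steps 2835 through 3135 might be sketched, purely for illustration, as the following Python routine. The tuple comparison used for the “startup”/“counter” sub-fields, the dictionary layout of each entry 2110, and the use of time.time() as the node's local clock are assumptions of the sketch:

import time

def update_forwarding_table(beacon, forwarding_table):
    # Steps 2835-2905: iterate over the NUM monitor-node descriptions in the beacon;
    # each description d carries the D(i) identifier, sequence #, and # of hops.
    for d in beacon['monitor_nodes']:
        entry = next((e for e in forwarding_table
                      if e['monitor_point_id'] == d['id']), None)
        if entry is None:
            # Step 2915: create a new entry 2110 for monitor node D(i).
            entry = {'monitor_point_id': d['id'], 'seq': (0, 0), 'hops': None,
                     'next_hop': None, 'valid': 0, 'use': 0, 'time_stamp': None}
            forwarding_table.append(entry)
            accept = True
        elif d['hops'] <= entry['hops']:
            accept = True                  # step 2920: route is no longer than the stored one
        elif entry['valid'] == 1:
            accept = False                 # step 2930: longer route and the stored entry is valid
        else:
            # Steps 3005-3010: longer route, invalid entry; accept only fresher sequence data.
            accept = (d['startup'], d['counter']) > entry['seq']
        if accept:
            # Steps 3015-3110: record sequence #, hop count, next hop, validity, and time stamp.
            entry['seq'] = (d['startup'], d['counter'])
            entry['hops'] = d['hops']
            entry['next_hop'] = beacon['transmitter_node_id']
            entry['valid'] = 1
            entry['time_stamp'] = time.time()
    # Steps 3125-3135: clear every "Use?" field, then mark the valid entry with the
    # fewest hops as the entry to use for forwarding.
    for e in forwarding_table:
        e['use'] = 0
    valid_entries = [e for e in forwarding_table if e['valid'] == 1]
    if valid_entries:
        min(valid_entries, key=lambda e: e['hops'])['use'] = 1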

Exemplary Sensor Node Datagram Processing

FIG. 32 illustrates a first exemplary message datagram 3200, consistent with the present invention, for transmitting data, such as, for example, sensor measurement data to either a destination monitor point 105 or a destination sensor node 205 in network 100. Message datagram 3200 may include a “source node ID” field 3205, a “destination node ID” field 3210, an optional “checksum” field 3215, an optional “time-to-live” (TTL) field 3220, an optional “geo-location” field 3225, and a “sensor message” field 3230. Message datagram 3200 may also optionally include a “reverse path” flag 3235, a “direction” indicator field 3240, a “# of Node IDs Appended” field 3245, and “1st Hop Node ID” 3250a through “Nth Hop Node ID” fields 3250N. Message datagram 3200 may additionally be used for transmitting data from a source monitor point 105 to a destination sensor node 205 in network 100.

“Source node ID” field 3205 may include an identifier uniquely identifying a sensor node 205 or monitor point 105 that was the original source of message datagram 3200. “Destination node ID” field 3210 may include an identifier uniquely identifying a destination monitor point 105 or sensor node 205 in network 100. “Checksum” field 3215 may include any type of conventional error detection value that can be used to determine the presence of errors or corruption in sensor datagram 3200. “TTL” field 3220 may include a value indicating a number of hops before which the message datagram 3200 should be discarded. “TTL” field 3220 may be decremented by one at each hop through network 100. “Geo-location” field 3225 may include geographic location data associated with the message datagram source node. “Sensor message” field 3230 may include sensor measurement data resulting from sensor measurements performed at the sensor node 205 identified by “source node ID” field 3205.

“Reverse path” flag 3235 may indicate whether sensor datagram 3200 includes information that details the path datagram 3200 traversed from the source node identified in “source node ID” field 3205 to a current node receiving datagram 3200. “Direction” indicator field 3240 may indicate the direction in network 100 that message datagram 3200 is heading. The datagram 3200 direction can be either “inbound” towards a monitor point 105 or “outbound” towards a sensor node 205. “# of Node IDs Appended” field 3245 may indicate the number of sensor nodes described in sensor datagram 3200 by the 1st through Nth “hop node ID” fields 3250a-3250N. “1st Hop Node ID” field 3250a through “Nth Hop Node ID” field 3250N may include the unique node identifiers identifying each node in the path between the source node indicated by “source node ID” field 3205 and the node currently receiving datagram 3200.
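
Purely as an illustration of the field layout, message datagram 3200 might be represented by the following Python dataclass. The attribute names and types are assumptions of the sketch; the encoding of the “direction” indicator as zero for “inbound” follows the description of step 3415 below, with one assumed for “outbound”:

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class MessageDatagram:                          # illustrative rendering of message datagram 3200
    source_node_id: int                         # "source node ID" field 3205
    destination_node_id: int                    # "destination node ID" field 3210
    sensor_message: bytes                       # "sensor message" field 3230
    checksum: Optional[int] = None              # optional "checksum" field 3215
    ttl: Optional[int] = None                   # optional "time-to-live" (TTL) field 3220
    geo_location: Optional[Tuple[float, float]] = None  # optional "geo-location" field 3225
    reverse_path: Optional[int] = None          # "reverse path" flag 3235 (1 = accumulate path)
    direction: Optional[int] = None             # "direction" field 3240 (0 = inbound; 1 assumed for outbound)
    num_node_ids_appended: int = 0              # "# of Node IDs Appended" field 3245
    hop_node_ids: List[int] = field(default_factory=list)  # "1st" through "Nth Hop Node ID" fields 3250a-3250N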

FIGS. 33-34 are flowcharts that illustrate exemplary processing, consistent with the present invention, for fabricating and transmitting a sensor datagram 3200 at a sensor node 205. As one skilled in the art will appreciate, the method exemplified by FIGS. 33-34 can be implemented as a sequence of instructions and stored in memory 1920 for execution by processing unit 1915.

Sensor node 205 may begin fabrication of sensor datagram 3200 by performing sensor measurements over one or more sampling periods using one or more sensor units 1940a-1940n [step 3305]. Sensor node 205 may then insert the sensor measurement data in the “sensor message” field 3230 [step 3310]. Sensor node 205 may further insert the node's own identifier in the datagram 3200 “source node ID” field 3205 [step 3315]. Sensor node 205 may also, optionally, insert a value in the datagram 3200 “time-to-live” field 3220 [step 3320]. Sensor node 205 may, optionally, insert its location in “geo-location” field 3225 [step 3325]. Sensor node 205's location may be determined by geo-location unit 1935.

Sensor node 205 may inspect forwarding table 2105 to identify the table entry 2110 with a “Use?” field 2115 equal to one and with the smallest “# of Hops” field 2140 [step 3330]. The monitor point identified by the “monitor point ID” field 2125 in this table entry 2110 will, thus, be the nearest monitor point 105 to sensor node 205. Sensor node 205 may insert the “monitor point ID” field 2125 of this table entry 2110 into datagram 3200's “destination node ID” field 3210 [step 3335].
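
The lookup of steps 3330-3335 might be sketched as follows; the dictionary-based table entries, and the convention that only one entry 2110 carries a “Use?” value of one after steps 3125-3135, are assumptions of the sketch:

def nearest_monitor_point(forwarding_table):
    # Steps 3330-3335: the entry flagged "Use?" = 1 was selected (steps 3125-3135) as the
    # valid entry with the smallest "# of Hops" field 2140, so it names the nearest monitor point.
    flagged = [e for e in forwarding_table if e['use'] == 1]
    entry = min(flagged, key=lambda e: e['hops']) if flagged else None
    return entry['monitor_point_id'] if entry else None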

Sensor node 205 may determine if datagram 3200 will include “reverse path” flag 3235 [step 3405]. In some implementations consistent with the present invention, reverse path information may not be included in any datagram 3200. In other implementations consistent with the present invention, reverse path information may be included in all datagrams 3200. In yet further implementations consistent with the present invention, reverse path information may be included only in some percentage of the datagrams 3200 sent from sensor node 205. For example, reverse path information may be included in only one datagram out of every 100 datagrams sent from a source node, or in only one datagram every ten minutes.

If datagram 3200 will not include a reverse path flag, processing may continue at step 3425. If datagram 3200 will include a reverse path flag, then sensor node 205 may set the datagram “reverse path” flag 3235 to one, indicating that a reverse path should be accumulated as the datagram 3200 traverses network 100 [step 3410]. Alternatively, sensor node 205 may set the “reverse path” flag 3235 to zero, indicating that no reverse path should be accumulated as the datagram 3200 traverses network 100. Sensor node 205 may further set “direction” field 3240 to “inbound” by setting the value in the field to zero [step 3415]. Sensor node 205 may also set the “# of Node IDs Appended” field 3245 to zero [step 3420].

At step 3425, sensor node 205 may calculate a checksum of the fabricated datagram 3200 and insert the calculated checksum value in “checksum” field 3215 [step 3425]. Sensor node 205 may then transmit datagram 3200 to the next hop node indicated by the “next hop” field 2135 in the table entry 2110 with “Use?” field 2115 set to one and with the smallest “# of Hops” field 2140 [step 3430].
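
Steps 3305 through 3430 might be tied together, for illustration only, as in the following sketch. The node object with sample_sensors, geo_location, compute_checksum, and transmit methods, the example TTL value, and the choice to always carry the “reverse path” flag (merely toggling its value) are assumptions of the sketch rather than features of the described embodiment:

def fabricate_and_send_datagram(node, forwarding_table, include_reverse_path):
    # Steps 3305-3325: measure, then fill the source, TTL, and geo-location fields.
    datagram = {'sensor_message': node.sample_sensors(),   # "sensor message" field 3230
                'source_node_id': node.node_id,            # "source node ID" field 3205
                'ttl': 16,                                 # "time-to-live" field 3220 (example value)
                'geo_location': node.geo_location()}       # "geo-location" field 3225
    # Steps 3330-3335: address the datagram to the nearest monitor point.
    entry = next(e for e in forwarding_table if e['use'] == 1)
    datagram['destination_node_id'] = entry['monitor_point_id']
    # Steps 3405-3420: request (or decline) reverse-path accumulation and mark the
    # datagram as heading "inbound" with no hop identifiers appended yet.
    datagram['reverse_path'] = 1 if include_reverse_path else 0
    datagram['direction'] = 0
    datagram['num_node_ids_appended'] = 0
    datagram['hop_node_ids'] = []
    # Steps 3425-3430: checksum the finished datagram and send it to the next hop
    # recorded in the selected forwarding table entry.
    datagram['checksum'] = node.compute_checksum(datagram)
    node.transmit(datagram, next_hop=entry['next_hop'])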

Exemplary Datagram Relaying Processing

FIGS. 35-39 are flowcharts that illustrate exemplary processing, consistent with the present invention, for relaying datagrams 3200 received at a sensor node 205 towards either a destination monitor point 105 or other sensor nodes in network 100. As one skilled in the art will appreciate, the method exemplified by FIGS. 35-39 can be implemented as a sequence of instructions and stored in memory 1920 of sensor node 205 for execution by processing unit 1915.

Sensor node 205 may begin processing by receiving a sensor datagram 3200 [step 3505]. Sensor node 205 may then, optionally, calculate a checksum value of the received datagram 3200 and compare the calculated checksum with the “checksum” field 3215 contained in datagram 3200 [step 3510]. Sensor node 205 may determine if the checksums agree [step 3515], and if not, sensor node 205 may discard the received datagram 3200 [step 3520]. If the checksums do agree, sensor node 205 may, optionally, retrieve the “TTL” field 3220 from datagram 3200 and decrement the value by one [step 3525]. Sensor node 205 may then, optionally, determine if the decremented “TTL” value is equal to zero [step 3530]. If so, sensor node 205 may discard the datagram 3200 [step 3520].

If the decremented “TTL” value is not equal to zero, sensor node 205 may determine if datagram 3200 does not contain a “reverse path” flag 3235 or if “reverse path” flag 3235 is set to zero [step 3535]. If datagram 3200 contains a “reverse path” flag that is set to one, processing may continue at step 3705 below. If datagram 3200 does not contain a “reverse path” flag 3235 or the “reverse path” flag 3235 is set to zero, sensor node 205 may retrieve the “destination node ID” field 3210 from datagram 3200 and find a table entry 2110 with the “monitor point ID” field 2125 equal to the datagram “destination node ID” field 3210 [step 3605]. If sensor node 205 finds that there is no table entry 2110 for the monitor point identified by the “destination node ID” field 3210 [step 3610], then sensor node 205 may discard datagram 3200 [step 3615].

If sensor node 205 finds a table entry 2110 for the monitor point identified by the “destination node ID” field 3210, sensor node 205 may, optionally, calculate a new checksum for datagram 3200 [step 3620]. Sensor node 205 may, optionally, insert the calculated checksum in “checksum” field 3215 [step 3625]. Sensor node 205 may further read the “next hop” field 2135 from the table entry 2110 corresponding to the monitor point identified by the datagram “destination node ID” field 3210 [step 3630]. Sensor node 205 may transmit datagram 3200 to the node identified by the “next hop” field 2135 [step 3635].
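
The relaying of a datagram that carries no reverse-path information (steps 3505 through 3635) might be sketched as follows. The compute_checksum helper, assumed to cover the datagram body exclusive of the “checksum” field itself, and the dictionary representations are illustrative assumptions:

def relay_without_reverse_path(node, datagram, forwarding_table):
    # Steps 3510-3520: optionally verify the checksum and discard corrupted datagrams.
    if datagram.get('checksum') is not None:
        if node.compute_checksum(datagram) != datagram['checksum']:
            return                                   # discard
    # Steps 3525-3530: optionally decrement the TTL and discard expired datagrams.
    if datagram.get('ttl') is not None:
        datagram['ttl'] -= 1
        if datagram['ttl'] == 0:
            return                                   # discard
    # Steps 3605-3615: find the forwarding table entry for the destination monitor point.
    entry = next((e for e in forwarding_table
                  if e['monitor_point_id'] == datagram['destination_node_id']), None)
    if entry is None:
        return                                       # discard: no route to this monitor point
    # Steps 3620-3635: optionally refresh the checksum and forward to the recorded next hop.
    datagram['checksum'] = node.compute_checksum(datagram)
    node.transmit(datagram, next_hop=entry['next_hop'])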

At step 3705, sensor node 205 may determine if the datagram 3200 “direction” indicator field 3240 indicates that the datagram is heading “inbound.” If not, processing may continue with step 3905 below. If so, sensor node 205 may append sensor node 205's unique identifier to datagram 3200 as a “hop node ID” field 3250 and may increment the datagram “# of Node IDs Appended” field 3245 [step 3710]. Sensor node 205 may read the “destination node ID” field 3210 from datagram 3200 [step 3715]. Sensor node 205 may further inspect forwarding table 2105 to locate a table entry 2110 with the “monitor point ID” field 2125 equal to the datagram “destination node ID” field 3210 [step 3720]. Sensor node 205 may then determine if the “valid” field 2145 in the located table entry is zero [step 3725]. If so, sensor node 205 may discard datagram 3200 [step 3730]. If not, sensor node 205 may read the “next hop” field 2135 of the located table entry [step 3805] and may transmit datagram 3200 to the node identified by the “next hop” field 2135 [step 3810].

At step 3905, sensor node 205 may determine if the datagram “destination node ID” field 3210 is equal to sensor node 205's unique identifier [step 3905]. If so, sensor node 205 may read the datagram “sensor message” field 3230 [step 3910]. If not, sensor node 205 may determine if the “# of Node IDs Appended” field 3245 is equal to zero [step 3915]. If so, sensor node 205 may discard datagram 3200 [step 3920]. If not, sensor node 205 may choose the last “hop node ID” 3250 from datagram 3200 as the next hop for the datagram [step 3925]. Sensor node 205 may remove this “Hop node ID” field 3250 from datagram 3200 and decrement the “# of Node IDs Appended” field 3245 [step 3930]. Sensor node 205 may then transmit the datagram to the next hop identified by the removed “hop node ID” field 3250 [step 3935].
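
The reverse-path cases of steps 3705 through 3935 might be sketched as follows, following the steps as described; the node object and its deliver and transmit methods, and the treatment of a missing table entry in the inbound branch as a discard, are assumptions of the sketch:

def relay_with_reverse_path(node, datagram, forwarding_table):
    if datagram['direction'] == 0:                       # step 3705: heading "inbound"
        # Step 3710: record this node in the accumulated reverse path.
        datagram['hop_node_ids'].append(node.node_id)
        datagram['num_node_ids_appended'] += 1
        # Steps 3715-3810: forward toward the destination monitor point if the route is valid.
        entry = next((e for e in forwarding_table
                      if e['monitor_point_id'] == datagram['destination_node_id']), None)
        if entry is None or entry['valid'] == 0:
            return                                       # step 3730: discard
        node.transmit(datagram, next_hop=entry['next_hop'])
    else:                                                # steps 3905-3935: heading "outbound"
        if datagram['destination_node_id'] == node.node_id:
            node.deliver(datagram['sensor_message'])     # step 3910: datagram is for this node
        elif datagram['num_node_ids_appended'] == 0:
            return                                       # step 3920: no recorded hops left, discard
        else:
            # Steps 3925-3935: take the last appended hop identifier as the next hop,
            # remove it from the datagram, and pass the datagram along.
            next_hop = datagram['hop_node_ids'].pop()
            datagram['num_node_ids_appended'] -= 1
            node.transmit(datagram, next_hop=next_hop)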

Exemplary Monitor Point Datagram Processing

FIGS. 40-43 illustrate exemplary processing, consistent with the present invention, for processing datagrams 3200 received at a monitor point 105. As one skilled in the art will appreciate, the method exemplified by FIGS. 40-43 can be implemented as a sequence of instructions and stored in memory 2020 of monitor point 105 for execution by processing unit 2015.

Monitor point 105 may begin processing by receiving a datagram 3200 from a sensor node 205 in network 100 [step 4005]. Monitor point 105 may, optionally, calculate a checksum value for the datagram 3200 and compare the calculated value with the datagram “checksum” field 3215 [step 4010]. If the checksums do not agree [step 4015], monitor point 105 may discard the datagram 3200 [step 4020]. If the checksums do agree, monitor point 105 may inspect the datagram “source node ID” field 3205 and compare the field with the “sensor ID” field 2215 in all entries 2210 of monitor point table 2205 [step 4025]. If this inspection determines that the source node is unknown, then monitor point 105 may create a new table entry 2210 for the sensor node identified by the datagram “source node ID” field 3205 [step 4035]. Monitor point 105 may further store the datagram “source node ID” field 3205 in the “sensor ID” field 2215 of the newly created table entry [step 4040].

If the source node is known, then monitor point 105 may store the datagram “geo-location” field 3225 in the table “geo-location” field 2220 [step 4045]. Monitor point 105 may further store the datagram “sensor message” field 3230 in the table “sensor message” field 2225 [step 4105]. Monitor point 105 may determine if datagram 3200 includes a “reverse path” flag 3235 [step 4110]. If not, processing may continue at step 4125. If datagram 3200 does include a “reverse path” flag, then monitor point 105 may store the datagram “# of Node IDs Appended” field 3245 in the table “# of Nodes” field 2230 [step 4115]. Monitor point 105 may store the datagram “1st hop node ID” field 3250a through “Nth hop node ID” field 3250N in reverse order in table 2205 by storing them in the table “Nth hop” field 2235N through “1st hop” field 2235a [step 4120].
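
Steps 4005 through 4120 might be sketched as follows. The dictionary-based monitor point table 2205 and checksum helper are assumptions, as is the choice to record the location and measurement for both newly created and previously known entries, reading the two branches above as converging at step 4045:

def monitor_point_receive(monitor, datagram, table):
    # Steps 4010-4020: verify the checksum (assumed to cover the body, not the field itself).
    if monitor.compute_checksum(datagram) != datagram['checksum']:
        return                                           # discard
    # Steps 4025-4040: find, or create, the table entry 2210 keyed by the source node.
    entry = next((e for e in table
                  if e['sensor_id'] == datagram['source_node_id']), None)
    if entry is None:
        entry = {'sensor_id': datagram['source_node_id']}
        table.append(entry)
    # Steps 4045-4105: record the reported location and sensor measurement data.
    entry['geo_location'] = datagram.get('geo_location')
    entry['sensor_message'] = datagram['sensor_message']
    # Steps 4110-4120: if a reverse path was accumulated, store the hop identifiers in
    # reverse order, as the description of step 4120 above specifies.
    if datagram.get('reverse_path') is not None:
        entry['num_nodes'] = datagram['num_node_ids_appended']
        entry['hops'] = list(reversed(datagram['hop_node_ids']))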

Monitor point 105 may, optionally, retrieve selected data from monitor point table 2205 and exchange data, via network 115, with other monitor points in network 100 [step 4125]. Monitor point 105 may further determine if a datagram 3200 should be sent to a sensor node 205 in network 100 [step 4130]. Monitor point 105 may, for example, periodically send operation control data to a sensor node 205. If no datagram 3200 is to be sent to a sensor node 205, processing may return to step 4005. If a datagram 3200 is to be sent to a sensor node 205, monitor point 105 may insert its own unique identifier in the datagram 3200 “source node ID” field 3205 [step 4205].

Monitor point 105 may insert the table 2205 “sensor ID” field 2215 corresponding to the destination sensor node 205 in the datagram “destination node ID” field 3210 [step 4210]. Monitor point 105 may insert a value in the datagram “TTL” field 3220 [step 4215]. Monitor point 105 may further insert the monitor point's location in the datagram “geo-location” field 3225 [step 4220]. Monitor point 105 may further formulate a sensor message and insert the message in the datagram “sensor message” field 3230 [step 4225]. Monitor point 105 may also set the “direction” indicator field 3240 to “outbound” [step 4230] and may insert the table “# of Nodes” field 2230, corresponding to the table entry 2210 with the appropriate “sensor ID” 2215, into the datagram “# of Node IDs Appended” field 3245 [step 4235].

Monitor point 105 may insert the table “1st Hop” field 2235a through “Nth hop” field 2235N into the corresponding datagram “1st Hop Node ID” 3250a through “Nth Hop Node ID” 3250N fields [step 4305]. Monitor point 105 may calculate a checksum value for the datagram 3200 and insert the calculated value in the datagram “checksum” field 3215 [step 4310]. Monitor point 105 may transmit datagram 3200 to the first hop identified by the datagram “1st Hop Node ID” field 3250a [step 4315].
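
The construction and transmission of an outbound datagram in steps 4205 through 4315 might be sketched as follows; the monitor object and its methods, the example TTL value, and the encoding of “outbound” as one are assumptions of the sketch:

def monitor_point_send(monitor, entry):
    # Steps 4205-4235: build an outbound datagram addressed to the sensor node in entry 2210.
    datagram = {'source_node_id': monitor.node_id,                 # "source node ID" field 3205
                'destination_node_id': entry['sensor_id'],         # "destination node ID" field 3210
                'ttl': 16,                                         # "TTL" field 3220 (example value)
                'geo_location': monitor.geo_location(),            # "geo-location" field 3225
                'sensor_message': monitor.build_sensor_message(),  # "sensor message" field 3230
                'direction': 1,                                    # "direction" field 3240: "outbound"
                'num_node_ids_appended': entry['num_nodes'],       # "# of Node IDs Appended" field 3245
                # Step 4305: copy the stored "1st Hop" through "Nth hop" fields into the datagram.
                'hop_node_ids': list(entry['hops'])}
    # Steps 4310-4315: checksum the datagram and transmit it to the node named by the
    # "1st Hop Node ID" field 3250a.
    datagram['checksum'] = monitor.compute_checksum(datagram)
    monitor.transmit(datagram, next_hop=datagram['hop_node_ids'][0])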

CONCLUSION

Systems and methods consistent with the present invention, therefore, provide mechanisms that enable sensor node transmitters and receivers to be turned off, and remain in a “sleep” state, for substantial periods, thus, increasing the energy efficiency of the nodes. Systems and methods consistent with the present invention further implement transmission and reception schedules that permit the reception and forwarding of packets containing routing, or other types of data, during short periods when the sensor node transmitters and receivers are powered up and, thus, “awake.” The present invention, thus, increases sensor node operational life by reducing energy consumption while permitting the reception and forwarding of the routing messages needed to self-organize the distributed network.

The foregoing description of exemplary embodiments of the present invention provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. For example, while certain components of the invention have been described as implemented in hardware and others in software, other hardware/software configurations may be possible. Also, while series of steps have been described with regard to FIGS. 10-16, 24-25, 27-31, and 33-43, the order of the steps is not critical.

No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used. The scope of the invention is defined by the following claims and their equivalents.

Claims

1.-18. (canceled)

19. A method, comprising:

receiving, at a wireless destination node in a wireless multi-node network, a beacon message noting a presence of another node;
broadcasting, at the wireless destination node, a routing message to at least one other node in the wireless multi-node network in response to the received beacon message;
receiving, at the wireless destination node, an additional message comprising: a unique identifier for a wireless source node, and sequencing data indicating a sequence of the message;
extracting distance data indicating a number of hops to reach the wireless destination node from the wireless source node in the wireless network; and
updating a forwarding table, wherein the forwarding table includes identifiers for neighboring sensor nodes.

20. The method of claim 19, further comprising:

organizing nodes in the network into a hierarchy of tiers.

21. The method of claim 20, wherein the at least one other node resides in a higher tier than the destination node.

22. The method of claim 20, wherein the additional message is destined for a data collection point residing in a lowest tier of the network.

23. A node, comprising:

a receiver configured to receive, at a wireless destination node in a wireless multi-node network, a beacon message noting a presence of another node;
a transmitter configured to broadcast, at the wireless destination node, a routing message to at least one other node in the wireless multi-node network in response to the received beacon message;
wherein the node is operable such that the receiver is further configured to receive, at the wireless destination node, an additional message comprising: a unique identifier for a wireless source node; and sequencing data indicating a sequence of the message;
a processing unit configured to: extract distance data indicating a number of hops to reach the wireless destination node from the wireless source node in the wireless network, and update a forwarding table, wherein the forwarding table includes identifiers for neighboring sensor nodes.

24. A computer program product on a non-transitory computer-readable medium, comprising:

computer code for receiving, at a wireless destination node, a beacon message noting a presence of another node;
computer code for broadcasting, at the wireless destination node, a routing message to at least one other node in a wireless multi-node network in response to the received beacon message;
computer code for receiving, at the wireless destination node, an additional message comprising: a unique identifier for a wireless source node, and sequencing data indicating a sequence of the message;
computer code for extracting distance data indicating a number of hops to reach the wireless destination node from the wireless source node in the wireless multi-node network; and
computer code for updating a forwarding table, wherein the forwarding table includes identifiers for neighboring sensor nodes.

25.-34. (canceled)

35. The node of claim 23, wherein the wireless multi-node network is an ad hoc network.

36. The node of claim 23 wherein the wireless multi-node network is a wireless sensor network.

37. The node of claim 23, wherein the additional message further comprises data for detecting errors in the additional message.

38. The node of claim 37, wherein the data for detecting errors comprises a message checksum value.

39. The node of claim 23, wherein the wireless source node is a sensor node.

40. The node of claim 39, wherein the processing unit is further configured to perform a plurality of measurements over a sampling period.

41. The node of claim 40, wherein the processing unit is further configured to receive, from the wireless source node by the wireless destination node, the plurality of measurements via a plurality of packets.

42. The node of claim 23, wherein the additional message is a packet.

43. The node of claim 23, wherein the wireless multi-node network is a packet-switched network.

44. The node of claim 23, wherein the receiver is configured to receive the additional message in accordance with a Time Division Multiple Access plan.

45. The node of claim 23, wherein the wireless destination node comprises a processor, a clock, and a power supply.

46. The node of claim 45, wherein the clock of the wireless destination node is synchronized to an external clock base.

47. The node of claim 23 wherein the wireless destination node is equipped to reduce clock drift.

48. The node of claim 23, wherein the additional message is transmitted within pre-defined time slots.

49. The node of claim 23, wherein the wireless multi-node network includes a pseudo-random number generator.

50. The node of claim 49, wherein the additional message is encrypted for improving network security, and wherein the pseudo-random number generator facilitates the encryption of the message.

51. The node of claim 23, wherein the additional message is received via an intermediary node.

52. The node of claim 23, wherein the wireless source node includes at least one of random access memory (RAM) and read only memory (ROM).

53. The node of claim 23, wherein the wireless multi-node network includes a database.

54. The node of claim 23, wherein the wireless multi-node network is a radio frequency network.

55. The node of claim 23, wherein the wireless source node is portable.

56. The node of claim 23, wherein the wireless source node is permitted to join and leave the wireless multi-node network.

57. The node of claim 23, wherein the wireless source node is an endpoint device.

58. The node of claim 23, wherein the receiver is further configured to receive, by the wireless source node, an ACK signal upon sending of the message.

59. The node of claim 23, wherein the processing unit is further configured to listen for a beacon.

60. The node of claim 59, wherein the beacon is a wireless broadcast message.

61. The node of claim 59, wherein the beacon is transmitted at preset times.

62. The node of claim 23, wherein the additional message further comprises a unique identifier for the wireless destination node.

63. The node of claim 23, wherein the wireless destination node includes a data buffer.

64. The node of claim 23, wherein the additional message includes a header.

65. The node of claim 23, wherein the sequencing data comprises a counter number.

66. The node of claim 23, wherein the processing unit is further configured to store, in the forwarding table, the unique identifier for the wireless source node.

67. The node of claim 23, wherein the forwarding table includes a plurality of entries.

68. The node of claim 23, wherein the forwarding table is stored in the wireless destination node.

69. The node of claim 23, wherein the processing unit is further configured to inspect, by the wireless source node, the forwarding table for identifying a table entry.

70. The node of claim 69, wherein the processing unit is further configured to transmit a subsequent message based on the identified table entry.

Patent History
Publication number: 20150029914
Type: Application
Filed: Aug 6, 2009
Publication Date: Jan 29, 2015
Inventors: Brig Barnum Elliott (Arlington, MA), David Spencer Pearson (Bennington, VT)
Application Number: 12/537,085
Classifications
Current U.S. Class: Signaling For Performing Battery Saving (370/311); Power Conservation (455/574)
International Classification: G08C 17/00 (20060101);