System and Method for Patient Management Using Multi-Dimensional Analysis and Computer Vision
The disclosed embodiments include a system and method for patient management using multi-dimensional analysis and computer vision. The system includes a base unit having a microprocessor connected to a camera and a beacon detector. The beacon detector scans for advertising beacons with a packet and publishes a packet to the microprocessor. The camera captures an image in a pixel array and publishes image data to the microprocessor. The microprocessor uses at least a Beacon ID from the packet to determine if an object is in a room, and Camera Object Coordinates from the image data to determine the coordinates of the object in the room.
The present invention relates generally to a human presence detection, identification, recording and differentiation apparatus, and more particularly to a wireless, wall-mounted device that can detect, identify, locate, record, interpret, and differentiate human presence using a camera and beacon detector.
2. BACKGROUND OF ART
It is common today that elderly individuals or individuals with chronic illness or disabilities obtain specialized care as patients in a long-term healthcare facility. Often, the family members of the patients feel disconnected from the day-to-day care that the patient receives. As such, many family members of patients express concern that the vulnerable patient may be neglected or abused. Even in a more privatized healthcare setting, such as where a patient has a personal at-home nurse, harm to the patient remains a concern to family members.
Traditionally, hospitals and other healthcare facilities have attempted to resolve patient neglect problems through the installation of cameras for video monitoring around hospital rooms and care facilities. However, video monitoring and other human-monitored imaging systems in the patient room are not realistic solutions to the problem, as they encroach on the privacy of the patient and require vigilant and time-consuming review by family members. Outside the patient care setting, imaging systems have been utilized to track people due to their simple lenses, which can be used to aim receivers precisely at an area of interest. In addition, imaging systems are insensitive to body positioning if they are not specifically programmed for facial identification and can thus track individuals despite numerous small body movements. Also, imaging systems are useful for tracking people because visible or near-visible light does not bleed through walls. However, imaging systems have many limitations. For example, reliably identifying an individual from an image is a difficult problem. In another example, imaging systems pose potential privacy problems, as similarly stated above, especially if the system captures identifiable information about people who have not consented to being tracked.
Alternative monitoring systems have used real-time locating systems (RTLS), where objects and consenting individuals are given tracking devices, such as Bluetooth Low Energy (BLE) beacons, infrared transmitters, and passive RFID tags. RTLS systems have a distinct advantage over camera-based systems in identifying tracked objects, as the ID of the person or object is digitally encoded in the data transmitted by the device.
However, achieving precise locating information from RTLS systems traditionally involves substantial tradeoffs. RTLS systems often require multiple pieces of installed infrastructure, including ceiling mounting and cabling, to enable the triangulation or trilateration required to achieve meter- or sub-meter-level accuracy. Furthermore, measuring more detail, such as a person's skeletal orientation, is not possible with RTLS without adding devices to the person. In addition, the tracked devices themselves often employ proprietary technology that imposes high cost and locks in users.
Therefore, there is a need for a system that can accurately and precisely identify and locate objects and consenting people without compromising the privacy of non-consenting individuals and without locking in hospitals to expensive, proprietary infrastructure.
SUMMARY OF THE INVENTION
In accordance with the foregoing objects and advantages, various aspects and embodiments of the present invention provide a system and method for patient management using multi-dimensional analysis and computer vision.
In one embodiment, the system includes a base unit having a microprocessor connected to a camera and a beacon detector. The beacon detector scans for advertising beacons with a packet and publishes the packet to the microprocessor. The camera captures an image in a pixel array and publishes image data to the microprocessor. The microprocessor uses at least a Beacon ID from the packet to determine if an object is in a room, and Camera Object Coordinates from the image data to determine the coordinates of the object in the room.
In another embodiment, the method for detecting body position in a confined space includes the steps of providing a base unit having a microprocessor connected to a beacon detector and a camera. The beacon detector receives an advertisement from a beacon with a packet having at least a Beacon ID. The microprocessor determines if an object is in the confined space based at least in part on the Beacon ID. The camera captures a pixel array of the confined space, which is processed into imaging data. The microprocessor receives the imaging data and determines the coordinates of the object in the confined space using Camera Object Coordinates from the imaging data.
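The two determinations in this method can be sketched as follows. This is only an illustrative rendering: the RSSI cutoff value, the `rssi` packet field, and the linear pixel-to-room mapping are assumptions for illustration, since the specification does not fix the internals of the threshold algorithm or the camera location algorithm.

```python
# Illustrative sketch of the two determinations made by the microprocessor.
# The RSSI threshold (-75 dBm) and the linear pixel-to-room mapping are
# assumed for illustration; the specification does not fix either.

RSSI_THRESHOLD_DBM = -75  # assumed cutoff for "object is in the room"

def object_in_room(packet: dict) -> bool:
    """Threshold algorithm: decide presence from a received beacon packet."""
    return packet["rssi"] >= RSSI_THRESHOLD_DBM

def camera_location(pixel_xy, frame_size, room_size_m):
    """Camera location algorithm: map Camera Object Coordinates (pixels)
    to room coordinates (meters), assuming a fixed linear mapping."""
    px, py = pixel_xy
    fw, fh = frame_size
    rw, rh = room_size_m
    return (px / fw * rw, py / fh * rh)

packet = {"beacon_id": "A1:B2", "rssi": -60}
print(object_in_room(packet))                                # True
print(camera_location((320, 240), (640, 480), (4.0, 3.0)))   # (2.0, 1.5)
```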
In another embodiment, the method above further includes the steps of providing a second base unit having a microprocessor connected to a beacon detector and a camera to detect body position in a second confined space. The second base unit receives an advertisement from a beacon with a packet at the beacon detector and determines, via the microprocessor, if the object is in the second confined space based at least in part on the Beacon ID from the packet. The camera captures a pixel array of the second confined space, which is processed into imaging data. The microprocessor receives the imaging data and determines the coordinates of the object in the second confined space using Camera Object Coordinates from the imaging data. Finally, the system determines a current state for each Beacon ID of the beacon across the first base unit and the second base unit.
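Determining a current state for each Beacon ID across two base units can be sketched as below. The specific reconciliation rule used here — keep the most recent observation per beacon, breaking ties by confidence — is an assumption; the specification only states that a per-beacon state is determined across units.

```python
# Sketch of reconciling per-beacon state across two base units.
# The "most recent wins, ties broken by confidence" rule is assumed.

def current_states(observations):
    """observations: list of dicts with keys
    time, beacon_id, unit, state ('In'/'Out'), confidence.
    Returns {beacon_id: (unit, state)} for the winning observation."""
    best = {}
    for obs in observations:
        key = obs["beacon_id"]
        prev = best.get(key)
        if prev is None or (obs["time"], obs["confidence"]) > (prev["time"], prev["confidence"]):
            best[key] = obs
    return {bid: (obs["unit"], obs["state"]) for bid, obs in best.items()}

obs = [
    {"time": 1, "beacon_id": "B1", "unit": "room-101", "state": "In",  "confidence": 0.9},
    {"time": 2, "beacon_id": "B1", "unit": "room-102", "state": "In",  "confidence": 0.8},
    {"time": 2, "beacon_id": "B2", "unit": "room-101", "state": "Out", "confidence": 0.7},
]
print(current_states(obs))  # {'B1': ('room-102', 'In'), 'B2': ('room-101', 'Out')}
```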
In an alternative embodiment, the method for automating the measurement and feedback of patient care includes the step of first providing a system having a base unit with a microprocessor connected to: (i) a camera, which transmits coordinate information to the microprocessor, (ii) a sensor, and (iii) a beacon detector, which scans for advertising beacons and publishes a packet to the microprocessor. Next, the method includes the steps of correlating data from the sensor, presence information, and coordinate information to determine a patient care event, and publishing the patient care event to a web application interface. Such patient care event publications can be used to assign the patient care event to a healthcare provider and determine a rate of compliance based on the number of patient care events per unit of time.
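A minimal sketch of this correlation and of the compliance-rate calculation follows. The rule that a care event requires presence, an active sensor reading, and provider coordinates near the bed is an illustrative assumption, as is the one-event-per-hour compliance requirement; neither is fixed by the specification.

```python
# Sketch of correlating presence, sensor, and coordinate data into a
# patient care event, and of a rate-of-compliance calculation.
# The proximity rule and the required-events-per-hour value are assumed.

def detect_care_event(presence, sensor_active, coords, bed_coords, radius_m=1.5):
    """A care event is inferred when a provider is present, a sensor reading
    is active, and the provider is within radius_m of the bed (assumed rule)."""
    dx = coords[0] - bed_coords[0]
    dy = coords[1] - bed_coords[1]
    return presence and sensor_active and (dx * dx + dy * dy) ** 0.5 <= radius_m

def compliance_rate(event_count, hours, required_per_hour=1):
    """Rate of compliance: observed events over required events per unit time."""
    required = required_per_hour * hours
    return min(event_count / required, 1.0) if required else 1.0

print(detect_care_event(True, True, (2.0, 1.0), (2.5, 1.0)))  # True
print(compliance_rate(6, 8))                                  # 0.75
```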
In another embodiment, the method for inferring patient care activity from sensor data, includes the steps of first, providing a system with a base unit having a microprocessor connected to: (i) a camera, which transmits coordinate information to the microprocessor, (ii) a sensor, and (iii) a beacon detector, which scans for advertising beacons and publishes a packet to the microprocessor, and a label database, which stores electronic record data. Next, the method includes the steps of creating a scene model for the electronic record data via a scene classifier trainer, receiving a feature comprised of presence information, data from the sensor, and coordinate information, and classifying the feature through comparison of the feature to the scene model.
Embodiments and aspects of the present invention will be more fully understood and appreciated by reading the following Detailed Description in conjunction with the accompanying drawings, in which:
Referring to the Figures, the present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
An embodiment of the invention is a system comprising a wireless, wall-mounted base unit that can detect, identify, locate, record, and differentiate activities involving humans and objects equipped with machine-identifiable devices such as beacons. The present invention also offers a mobile application platform that allows health facility unit managers to analyze real-time data on safety protocols, analyze patient demand, and create and view staff assignments (past and present), and that enables family members to know when a loved one is receiving professional care.
Referring first to
As described later, the microprocessor 102 uses data received from the camera 106, beacon detector 108, and sensors 112 to identify individuals in a room and determine their body positioning. These listed features are connected to the microprocessor 102 via USB or I2C. In addition, the base unit 100 comprises a spiral antenna 126 with a ground plane connected to the beacon detector 108 to focus reception on the bed area in front of the base unit. The base unit 100 also includes a Wi-Fi module 110 and antenna 126. However, alternative antennae 126 can be substituted for the transmission and reception of digital data for both the beacon detector 108 and Wi-Fi module 110.
Referring back to
Referring briefly to
In another embodiment, the call bell detector 118 of the base unit 100 may receive a BLE signal from a restroom pull-cord call bell system 800 (hereinafter “pull-cord system”), shown in
The sensors 112 shown in
Patients, healthcare providers, and other individuals in the hospital or healthcare facility room also transmit data to the base unit 100 via the beacon detector 108. For example, each healthcare provider may have a wearable beacon 400 that transmits a personal identifier to the beacon detector 108, thereby notifying the positioning system 10 that a particular healthcare provider has entered the room. An illustrative embodiment of a beacon 400 of the positioning system 10 is shown in
Referring now to
The base unit 100 continuously scans for advertising beacons 400 and publishes received packets with “BEACON_ID”, “BEACON_INDICATOR”, and “BEACON_ACTIVITY” to the feature collector 120. In one embodiment, the beacons 400 and the beacon detector 108 use Bluetooth Low Energy (BLE) and the base unit 100 is connected to a directional antenna aimed at a patient bed area 300 to detect BLE beacons 400, as shown in
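The scan-and-publish loop between the beacon detector and the feature collector can be sketched as below. The packet fields "BEACON_ID", "BEACON_INDICATOR", and "BEACON_ACTIVITY" come from the description; treating the indicator as a signal-strength value and using an in-process queue in place of the feature collector 120 are illustrative assumptions.

```python
# Sketch of publishing received beacon advertisements to the feature
# collector. The raw-advertisement field names and the in-process queue
# standing in for the feature collector are assumptions.

from queue import Queue

feature_collector = Queue()

def on_advertisement(raw: dict) -> None:
    """Called for each received BLE advertisement; publishes a
    normalized packet to the feature collector."""
    packet = {
        "BEACON_ID": raw["id"],
        "BEACON_INDICATOR": raw["rssi"],  # assumed: signal strength
        "BEACON_ACTIVITY": raw.get("activity", "Stationary"),
    }
    feature_collector.put(packet)

on_advertisement({"id": "C3:D4", "rssi": -58, "activity": "Moving"})
print(feature_collector.get())
# {'BEACON_ID': 'C3:D4', 'BEACON_INDICATOR': -58, 'BEACON_ACTIVITY': 'Moving'}
```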
Referring now to
Referring back to
At a first stage shown in
At a second stage shown in
In an alternative embodiment shown in
According to another embodiment, the low level classifiers are fused using a probabilistic graphical model. A neural network is used to perform multi-modal sensor fusion with an input of [Beacon, Camera] and an output of [known coordinates as measured from a reference system]. At the conclusion of the second stage, in all of the embodiments described above, the system 10 outputs an array of [Time, BEACON_ID, State: In or Out, Activity: Moving or Stationary, Coordinate, Confidence Value 0-1] for each base unit 100 (shown in
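The per-base-unit output array described above can be represented as a simple record; the dataclass layout below is an illustrative rendering of the [Time, BEACON_ID, State, Activity, Coordinate, Confidence] tuple, with a range check on the 0-1 confidence value added for safety.

```python
# Illustrative record type for the second-stage output array
# [Time, BEACON_ID, State, Activity, Coordinate, Confidence].

from dataclasses import dataclass

@dataclass
class BaseUnitOutput:
    time: float
    beacon_id: str
    state: str         # "In" or "Out"
    activity: str      # "Moving" or "Stationary"
    coordinate: tuple  # (x, y) in room coordinates
    confidence: float  # 0.0 - 1.0

    def __post_init__(self):
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError("confidence must be in [0, 1]")

out = BaseUnitOutput(1623240000.0, "B1", "In", "Moving", (2.0, 1.5), 0.92)
print(out.state, out.confidence)  # In 0.92
```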
At the third stage, the cloud platform 200 connected to the base unit 100 disambiguates input from the high level classifier for each base unit 100, as shown in
At the final stage, there is activity identification, as shown in
Referring back to
From the scene classifier 202, the activities are analyzed in an analytics module 206 using algorithms stored in the analytics database 208. Exemplary key metrics calculated in the analytics module include the frequency of visits, the duration of visits, an hourly rounding rate, a bedside reporting rate, a patient turning rate, and a patient ambulation rate. These key metrics can be broken down and calculated for each staff role, patient, unit, department, day of the week, or time of day. After analysis, the resulting data is transmitted to the reporting API module 210, which uses a publish-subscribe mechanism such as MQTT, where data from a reporting database 212 determines how the data will be reported to a mobile application 214 or web application 216. Once the data is transmitted to a mobile application 214 or web application 216, such as a RESTful HTTP interface, a user can view the data at the interface.
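Two of the key metrics named above — the frequency and total duration of visits — can be computed from per-beacon In/Out transitions as in the sketch below. The chronological `(time, state)` event format is an illustrative assumption; the analytics module's actual input representation is not specified.

```python
# Sketch of computing visit frequency and total visit duration from
# chronological (time_s, state) events, where state is 'In' or 'Out'.
# The event format is assumed for illustration.

def visit_metrics(events):
    """Returns (number_of_visits, total_duration_s)."""
    visits, total, entered = 0, 0.0, None
    for t, state in events:
        if state == "In" and entered is None:
            entered = t                 # visit begins
        elif state == "Out" and entered is not None:
            visits += 1                 # visit ends; accumulate duration
            total += t - entered
            entered = None
    return visits, total

events = [(0, "In"), (300, "Out"), (3600, "In"), (3900, "Out")]
print(visit_metrics(events))  # (2, 600.0)
```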
Still referring to
The scene classifier trainer 500 then uses data from the label database 502 and the feature database 504 in model training 506 to create and store scene models 508. A scene model may include topic models, such as hierarchical latent Dirichlet allocation (LDA), Hidden Markov models, and neural networks. Model training 506 includes machine learning methods, such as back propagation, feed forward propagation, gradient descent, and deep learning. Therefore, future features from the feature extractor 120 may be more quickly classified into the appropriate scene using a scene model 508 created using the same or similar past features.
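Classification of a feature against stored scene models can be sketched as below. The cited embodiments use topic models (LDA), Hidden Markov models, and neural networks; the nearest-centroid comparison here is only a minimal illustrative substitute, and the feature dimensions and scene names are invented for the example.

```python
# Minimal stand-in for classifying a feature against stored scene models.
# Real embodiments cite LDA, HMMs, and neural networks; nearest-centroid
# matching, the feature layout, and the scene names are all assumed.

def classify(feature, scene_models):
    """feature: numeric vector; scene_models: {name: centroid vector}.
    Returns the scene whose model centroid is closest to the feature."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(scene_models, key=lambda name: dist(feature, scene_models[name]))

models = {
    "bedside_visit": [1.0, 0.9, 0.1],   # presence, proximity, motion
    "room_entry":    [1.0, 0.2, 0.8],
    "empty_room":    [0.0, 0.0, 0.0],
}
print(classify([0.9, 0.8, 0.2], models))  # bedside_visit
```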
With reference to
In one embodiment, all functionalities of the base unit 100 are controlled via Bluetooth through an authorized smartphone or other mobile device. Such control eliminates the need to touch the base unit 100 directly, which is critical for a device that is mounted to hospital and healthcare facility walls, as shown in
Referring again to
By capturing only the outline of an individual, the positioning system drastically reduces the required processing capacity. Further, the analytics database 208 (of
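Extracting Camera Object Coordinates from an outline-only capture can be sketched as below: given a binary mask of the person's silhouette, report the centroid of the marked pixels. Operating on a small Python list-of-lists rather than a real camera frame, and using the centroid as the object coordinate, are illustrative simplifications.

```python
# Sketch of deriving object coordinates from a binary silhouette mask.
# The list-of-lists mask and the centroid rule are assumed for illustration.

def object_coordinates(mask):
    """mask: 2D list of 0/1 silhouette pixels. Returns the (x, y) centroid
    in pixel coordinates, or None if the mask is empty."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    if not pts:
        return None
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(object_coordinates(mask))  # (1.5, 1.5)
```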
The cloud platform provides a pathway to push data to users' mobile devices. With reference to
In
Referring now to
Finally, referring now to
While embodiments of the present invention have been particularly shown and described with reference to certain exemplary embodiments, it will be understood by one skilled in the art that various changes in detail may be effected therein without departing from the spirit and scope of the invention as defined by claims that can be supported by the written description and drawings. Further, where exemplary embodiments are described with reference to a certain number of elements, it will be understood that the exemplary embodiments can be practiced utilizing either fewer or more than the certain number of elements.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
Claims
1. A positioning system, comprising:
- a base unit having a microprocessor connected to a camera and a beacon detector, which scans for advertising beacons and publishes a packet to the microprocessor;
- wherein the camera captures an image in a pixel array and publishes image data to the microprocessor;
- wherein the microprocessor uses at least a Beacon ID from the packet in a threshold algorithm to determine if an object is in a room, and camera object coordinates from the image data in a camera location algorithm to determine the coordinates of the object in the room.
2. The positioning system of claim 1, further comprising one or more sensors connected to the microprocessor.
3. The positioning system of claim 2, wherein the sensors detect at least one of: sound level, ambient light, temperature, actuation of a pull cord, and actuation of a call bell.
4. The positioning system of claim 1, further comprising a display connected to the microprocessor.
5. The positioning system of claim 1, further comprising a directional antenna connected to the base unit and aimed at a patient bed area.
6. The positioning system of claim 1, wherein the base unit is stadium-shaped.
7. A method for detecting body position in a confined space, comprising the steps of:
- providing a base unit having a microprocessor connected to a beacon detector and a camera;
- receiving, at the beacon detector, an advertisement from a beacon with a packet comprising at least a Beacon ID;
- determining, via the microprocessor, if an object is in the confined space with a threshold algorithm with at least the input of the Beacon ID;
- capturing, via the camera, a pixel array of the confined space, which is processed into imaging data;
- receiving, at the microprocessor, the imaging data; and
- determining, via the microprocessor, coordinates of the object in the confined space using Camera Object Coordinates from the imaging data in a camera location algorithm.
8. The method of claim 7, further comprising the step of classifying, via the microprocessor, an activity based on the Beacon ID from the packet of the beacon and the Camera Object Coordinates determined from the image data.
9. The method of claim 8, further comprising the step of publishing, via the microprocessor, a command to a display connected thereto to indicate the activity.
10. The method of claim 7, further comprising the step of determining, via the microprocessor, a Camera Object Variance of the object in the confined space from the imaging data.
11. The method of claim 10, wherein a high Camera Object Variance indicates the object is moving and a low Camera Object Variance indicates the object is stationary.
12. A computer program product for detecting body position in two confined spaces, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, wherein the computer readable storage medium is not a transitory signal per se, the program instructions are readable by a computer to cause the computer to perform a method comprising the steps of:
- providing a first base unit and a second base unit, each having a microprocessor connected to a beacon detector and a camera;
- wherein the first base unit is in a first confined space and the second base unit is in a second confined space;
- at the first base unit: receiving, at the beacon detector, an advertisement from a beacon with a packet having a Beacon ID; determining, via the microprocessor, if an object is in the first confined space with a threshold algorithm with at least the input of the Beacon ID; capturing, via the camera, a pixel array of the first confined space, which is processed into imaging data; receiving, at the microprocessor, the imaging data; and determining, via the microprocessor, coordinates of the object in the first confined space using Camera Object Coordinates from the imaging data in a camera location algorithm;
- at the second base unit: receiving, at the beacon detector, an advertisement from a beacon with a packet having a Beacon ID; determining, via the microprocessor, if an object is in the second confined space with a threshold algorithm with at least the input of the Beacon ID; capturing, via the camera, a pixel array of the second confined space, which is processed into imaging data; receiving, at the microprocessor, the imaging data; and determining, via the microprocessor, coordinates of the object in the second confined space using Camera Object Coordinates from the imaging data in a camera location algorithm.
13. The computer program product of claim 12, further comprising the step of determining a current state for each Beacon ID of the beacon across the first base unit and the second base unit.
14. The computer program product of claim 12, further comprising one or more sensors connected to the microprocessor of both the first base unit and the second base unit.
15. The computer program product of claim 14, wherein the sensors detect at least one of: sound level, ambient light, temperature, actuation of a restroom pull cord, and actuation of a call bell.
16. A method for automating the measurement and feedback of patient care, comprising:
- providing a system with a base unit having a microprocessor connected to: (i) a camera, which transmits coordinate information to the microprocessor, (ii) a sensor, and (iii) a beacon detector, which scans for advertising beacons and publishes a packet to the microprocessor;
- correlating data from the sensor, presence information, and coordinate information to determine a patient care event; and
- publishing the patient care event to a web application interface.
17. The method of claim 16, further comprising the step of assigning, via the web application, the patient care event to a healthcare provider.
18. The method of claim 16, further comprising the step of determining a rate of compliance based on the number of patient care events per unit of time.
19. A method for inferring patient care activity from sensor data, comprising:
- providing a system with a base unit having a microprocessor connected to: (i) a camera, which transmits coordinate information to the microprocessor, (ii) a sensor, and (iii) a beacon detector, which scans for advertising beacons and publishes a packet to the microprocessor, and a label database, which stores electronic record data;
- creating a scene model for the electronic record data via a scene classifier trainer;
- receiving a feature comprised of presence information, data from the sensor, and coordinate information; and
- classifying the feature through comparison of the feature to the scene model.
20. The method of claim 19, further comprising the step of adjusting the scene model based on the frequency of the features associated with the scene model.
Type: Application
Filed: Jun 9, 2017
Publication Date: Dec 13, 2018
Applicant: All Inspire Health, Inc. (Dover, DE)
Inventors: Michael Y. Wang (New York, NY), Vincent James Cocito (Maplewood, NJ), Paul Ensom Coyne (Hoboken, NJ), Jeffrey Morelli (Westfield, NJ)
Application Number: 15/618,357