SYSTEMS AND METHODS FOR TASK PREDICTION AND CONSTRAINT MODELING

A method is disclosed, and a system for performing the method, the method comprising: obtaining a set of operations data for a facility, the set of operations data including a plurality of tasks scheduled to be completed, and a plurality of operational constraints; generating an optimization model based on the set of operations data, wherein the optimization model defines a plurality of variables corresponding to operations of the facility and the plurality of operational constraints; calculating, using the optimization model, a set of solution values, wherein each of the set of solution values corresponds to one or more of the plurality of variables; and transmitting the set of solution values to a scheduling device configured to generate a task schedule for the facility, wherein altering the value of one of the operational constraints alters the optimization model and an altered set of solution values is calculated.

Description
RELATED APPLICATION

This application claims the benefit of priority to U.S. Provisional Patent Application No. 63/377,549, filed Sep. 29, 2022, the entirety of which is hereby incorporated by reference.

TECHNICAL FIELD

The present disclosure relates generally to methods and systems for optimizing operations in a workplace such as a warehouse, a distribution center, airport ground operations, or a retail environment.

BACKGROUND

In a business operations environment such as a warehouse, distribution center, retail or airport ground operations, task scheduling is a critical component to efficient use of resources and time. Whether done manually or through an automated system, scheduling tasks requires an estimate of required time and resources for each task, and an accounting of how a delay in one task will impact any number of other tasks. Task scheduling often involves the conservative practice of planning more time than is deemed necessary to account for the various constraints that may be added in real-time, such as unforeseen equipment unavailability, or incidents or accidents causing shut downs or labor shortages. It would be advantageous to more accurately predict and anticipate the time required for a set of tasks to be completed, to determine in real-time how a constraint alters the time required for the tasks to be completed, and to be able to dynamically re-allocate resources to efficiently operate in view of constraints.

This disclosure is directed to addressing the above-referenced challenges. The background description provided herein is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art, or suggestions of the prior art, by inclusion in this section.

SUMMARY

A computer-implemented method of task scheduling and process control is disclosed, the method comprising: obtaining, by a system comprising at least one processor, a set of operations data for a facility, the set of operations data including a plurality of tasks scheduled to be completed, and a plurality of operational constraints; generating, by the system, an optimization model based on the set of operations data, wherein the optimization model defines a plurality of variables corresponding to operations of the facility and the plurality of operational constraints; calculating, by the system using the optimization model, a set of solution values, wherein each of the set of solution values corresponds to one or more of the plurality of variables; and transmitting the set of solution values to a scheduling device configured to generate a task schedule for the facility.

Also disclosed is a non-transitory computer readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform a method of task scheduling and process control, comprising: obtaining a set of operations data for a facility, the set of operations data including a plurality of tasks scheduled to be completed, and a plurality of operational constraints; generating an optimization model based on the set of operations data, wherein the optimization model defines a plurality of variables corresponding to operations of the facility and the plurality of operational constraints; calculating, using the optimization model, a set of solution values, wherein each of the set of solution values corresponds to one or more of the plurality of variables; and transmitting the set of solution values to a scheduling device configured to generate a task schedule for the facility.

Further disclosed is a system for task scheduling and process control, comprising a processor configured to: obtain a set of operations data for a facility, the set of operations data including a plurality of tasks scheduled to be completed, and a plurality of operational constraints; generate an optimization model based on the set of operations data, wherein the optimization model defines a plurality of variables corresponding to operations of the facility and the plurality of operational constraints; calculate, using the optimization model, a set of solution values, wherein each of the set of solution values corresponds to one or more of the plurality of variables; and transmit the set of solution values to a scheduling device configured to generate a task schedule for the facility.

To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the appended drawings, including the appendix attached to this disclosure, which includes other examples of the solution disclosed herein and which is incorporated by reference in its entirety as if set forth verbatim herein. These aspects are indicative, however, of but a few of the various ways in which the principles of the claimed subject matter may be employed, and the claimed subject matter is intended to include all such aspects and their equivalents. Other advantages and novel features may become apparent from the following detailed description when considered in conjunction with the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the disclosure will now be described, by way of example only, with reference to the accompanying drawings in which:

FIG. 1 is a schematic diagram illustrating an example environment implementing methods and systems of this disclosure.

FIG. 2 is a diagram of an architecture of a connected warehouse system of this disclosure.

FIG. 3 is a flowchart illustrating a method for optimizing operations of a job site.

FIG. 4 is an exemplary user interface of a task overview dashboard.

FIG. 5 is an exemplary task estimate interface.

FIG. 6A is a first exemplary optimized task estimate interface.

FIG. 6B is a second exemplary optimized task estimate interface.

FIG. 7 depicts an example user interface dashboard in a first mode, according to an exemplary embodiment.

FIG. 8 is a flowchart illustrating a method for managing unplanned tasks, according to an exemplary embodiment.

FIG. 9 is a diagram of an architecture of a connected warehouse system of this disclosure.

FIG. 10 is a diagram of an architecture of a connected warehouse system of this disclosure.

FIG. 11 depicts a schematic block diagram of a framework of a platform of a connected warehouse system.

FIG. 12A depicts an exemplary diagram of a data flow of a connected warehouse, according to one or more embodiments.

FIG. 12B depicts an exemplary diagram of a data flow of a connected warehouse, according to one or more embodiments.

FIG. 13 illustrates an exemplary device in which one or more embodiments may be implemented.

DETAILED DESCRIPTION

The following embodiments describe systems and methods for facilitating a connected warehouse as between employees, managers, and other users. In particular, the following embodiments are directed to systems and methods for creating and organizing tasks for operating a warehouse.

In a business operations environment such as a warehouse, distribution center, retail or airport ground operations, task scheduling is a critical component to efficient use of resources and time. Whether done manually or through an automated system, scheduling tasks requires an estimate of required time and resources for each task, and an accounting of how a delay in one task will impact any number of other tasks. Task scheduling often involves the conservative practice of planning more time than is deemed necessary to account for the various constraints that may be added in real-time, such as unforeseen equipment unavailability, or incidents or accidents causing shut downs or labor shortages. It would be advantageous to more accurately predict and anticipate the time required for a set of tasks to be completed, to determine in real-time how a constraint alters the time required for the tasks to be completed, and to be able to dynamically re-allocate resources to efficiently operate in view of constraints.

To this end, a dynamic and decentralized technique for implementing a connected warehouse system is provided. An embodiment or implementation described herein as “dynamic” is intended to reflect or indicate that the embodiment(s) is or can be marked by continuous and productive activity or change, though not necessarily constantly changing. The system and corresponding techniques facilitate communications within one or more warehouses, between users (e.g., worker, teams of workers, manager, etc.), and between warehouses, third parties associated therewith, and data centers. Such communications may be facilitated by edge systems and gateway systems. The edge and gateway systems may be located in warehouses (i.e., on-site) as embedded or fixed systems and/or other user devices such as tablet PCs and mobile phones (e.g., devices controlled by or in communication with an operations manager, etc.). Each edge system may be coupled to a warehouse system from which warehouse operations data may be collected, and in communication with other edge systems and gateway systems. Each gateway system may be in communication with warehouse operation systems and edge systems of the warehouse in which the gateway system is resident (e.g., with the operations manager), and may also be in communication with gateway systems located in other warehouses, all or some of which may provide data to the gateway system. By facilitating communication with gateway systems located in other warehouses, the gateway system may enable exchange of data among edge systems installed in different warehouses. Independent user computing devices, such as tablet PCs and mobile phones, may be directly coupled to and/or in communication with the edge systems and/or gateway systems, to request, filter, view, and/or analyze data.

Hardware for all or some of the edge systems and gateway systems may be installed in warehouses. Therefore, software may be installed on the corresponding warehouse hardware. The software implemented in the edge systems and gateway systems may comprise computer-executable code for performing various data functions, including but not limited to, data request, data query, data retrieval, data transmission, and data analytics. The edge systems and gateway systems each identify source(s) of relevant data, and request that data be provided dynamically (as needed) or statically (all the time) from the identified source(s), such as from other edge systems coupled to warehouse systems in the warehouse or other warehouses, gateway systems in the warehouse or other warehouses, decentralized system(s) such as cloud computing center(s), and centralized system(s) such as dedicated server farms. The decentralized system(s) and centralized system(s) may be owned by the operators of the warehouses, or by a third party such as a government or a commercial entity.

Each edge system in a warehouse may be coupled to a sensor of a corresponding warehouse system in the same warehouse, enabling data captured by the sensor to be provided directly to the edge system. Also, a gateway system in a warehouse may be coupled to one or more sensors of warehouse systems in the same warehouse, enabling data captured by the one or more sensors to be provided directly to the gateway system. In another embodiment, each edge system in a warehouse may be coupled to a warehouse system machine of a corresponding warehouse system in the same warehouse. Also, a gateway system in a warehouse may be coupled to warehouse system machines of warehouse systems in the same warehouse. In some aspects, warehouse system machines may be configured to collect data from the coupled one or more sensors, perform computations and/or analysis of the collected data, store the collected and/or analyzed data in memory, and provide the collected and/or analyzed data to one or more connected edge systems and/or gateway systems. In some embodiments, the warehouse system machine may not be implemented, or may not be coupled to the one or more sensors of the warehouse system. If the warehouse system machine is not implemented or not coupled to the one or more sensors, data captured by the one or more sensors may be provided directly to the one or more connected edge systems and/or gateway system.

Each warehouse system may be in communication with, through an edge system or not, a gateway system. Edge systems in a warehouse may be in direct communication with one another. For example, any data retained by one edge system may be transmitted directly to another edge system within the same warehouse, without a gateway system acting as an intermediary. In another embodiment, an edge system may send to or receive data from another edge system located in the same warehouse through a gateway system. The communication between the edge systems and the communication between the edge systems and the gateway system may be through a wired or wireless connection.

A gateway system of a warehouse may be in communication with gateway systems of other warehouses. Through this communication path, an edge system or a gateway system of a warehouse may transmit data to and obtain data from edge systems or gateway systems of other warehouses. The communication path between gateway systems of different warehouses may be through satellite communications (e.g., SATCOM), cellular networks, Wi-Fi (e.g., IEEE 802.11 compliant), WiMAX (e.g., AeroMACS), optical fiber, and/or an air-to-ground (ATG) network, and/or any other communication links now known or later developed. An edge system in a warehouse may communicate with another edge system in a different warehouse via gateway systems of the respective warehouses. For example, an edge system in a warehouse may transmit data to one or more edge systems in other warehouses via the gateway systems of the respective warehouses communicating over the communication path discussed above.

Each edge system and gateway system may comprise state machines, such as processor(s) coupled to memory. Both the edge systems and the gateway systems may be configured with a common operating system to support portable, system-wide edge software implementations. In other words, each of the edge systems and the gateway systems may be equipped with standard software to facilitate inter-operability among the edge systems and the gateway systems. In the discussion below, such software will be referred to as edge software. The edge software may enable each edge system or gateway system to perform various functions listed below (non-exhaustive) to enable data analysis and data exchange among the various systems illustrated herein (e.g., edge systems, gateway systems, warehouse operations centers, remote systems):

    • Filter and analyze real-time and stored data collected from other edge systems, warehouse systems, gateway systems, and/or operations center(s), and generate events based on the analysis;
    • Identify dynamic (i.e., as needed) and static (i.e., all the time) data transmission targets (e.g., edge systems within the same warehouse, edge systems in other warehouses, operations center(s));
    • Transmit data over an Internet connection to the operations centers;
    • Provide a request/response interface for other edge/gateway systems, warehouse borne computer systems, operations centers, and remote systems connected over wired/wireless networks or Internet to query the stored data and to dynamically select/change data filters;
    • Use request/response interfaces provided by other edge systems, gateway systems, and operations centers connected over wired/wireless networks or Internet to obtain data and to dynamically select/change data filters;
    • Receive events from other edge systems, gateway systems, and operations centers; and
    • Specify and communicate generic purposes (i.e., types of data the edge/gateway system is interested in) to other edge systems, gateway systems, and operations centers.
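The request/response interface and the dynamically selectable data filters described above can be sketched, for illustration only, as follows. All class, field, and record names here are hypothetical and are not part of the disclosed system; this is a minimal Python sketch of one way such an interface could behave.

```python
from dataclasses import dataclass, field

@dataclass
class EdgeNode:
    """Illustrative edge/gateway node exposing a request/response interface
    with a dynamically selectable data filter and declared generic purposes."""
    name: str
    store: list = field(default_factory=list)  # stored telemetry records
    active_filter: str = ""                    # currently selected filter key (e.g., a zone)

    def ingest(self, record: dict) -> None:
        # Collect real-time data from a coupled warehouse system or sensor.
        self.store.append(record)

    def set_filter(self, zone: str) -> None:
        # A requester may dynamically select/change the data filter.
        self.active_filter = zone

    def query(self, purposes: set) -> list:
        # Serve only records whose type matches the requester's declared
        # generic purposes, narrowed by the currently selected filter, if any.
        out = [r for r in self.store if r["type"] in purposes]
        if self.active_filter:
            out = [r for r in out if r.get("zone") == self.active_filter]
        return out

edge = EdgeNode("edge-1")
edge.ingest({"type": "temperature", "zone": "A", "value": 21.5})
edge.ingest({"type": "temperature", "zone": "B", "value": 23.0})
edge.ingest({"type": "vibration", "zone": "A", "value": 0.02})

# Another edge/gateway system declares its purposes and selects a filter.
edge.set_filter("A")
matches = edge.query({"temperature"})
```

In this sketch, the `purposes` argument plays the role of the generic purposes a requesting system communicates in advance, while `set_filter` models the dynamic selection of data filters over the request/response interface.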

Each edge system or gateway system may autonomously select and deliver data to one or more transmission targets, which may be other edge systems in the same warehouse, edge systems in other warehouses, the gateway system in the same warehouse, gateway systems in other warehouses, or operations center(s). Each of the receiving edge or gateway systems (i.e., transmission targets) may be configured to filter the received data using a pre-defined filter, overriding the autonomous determination made by the edge system transmitting the data. In some embodiments, each receiving edge or gateway system may notify the other systems, in advance of the data transmission, of the types of data and/or analysis the receiving system wants to receive (i.e., generic “purposes”). Also, each edge or gateway system may maintain a list including static data transmission targets (transmission targets that always need the data) and dynamic data transmission targets (transmission targets that need the data on an as-needed basis).
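The static/dynamic target lists and the receiver-side filter override described above can be illustrated with the following hypothetical sketch; the class and field names are assumptions for illustration, not identifiers from the disclosure.

```python
class Receiver:
    """A transmission target with its own pre-defined filter, which
    overrides the sender's autonomous selection of data to deliver."""
    def __init__(self, name, accept_types=None):
        self.name = name
        self.accept_types = accept_types  # None means accept everything
        self.inbox = []

    def receive(self, record):
        # Apply the receiver's pre-defined filter on receipt.
        if self.accept_types is None or record["type"] in self.accept_types:
            self.inbox.append(record)

class Sender:
    """Maintains static targets (always sent data) and dynamic targets
    (sent data only on an as-needed basis)."""
    def __init__(self):
        self.static_targets = []
        self.dynamic_targets = []

    def deliver(self, record, needed_now=False):
        targets = list(self.static_targets)
        if needed_now:
            targets += self.dynamic_targets
        for t in targets:
            t.receive(record)

ops_center = Receiver("ops-center")                      # accepts everything
gateway = Receiver("gateway-2", accept_types={"event"})  # events only

edge = Sender()
edge.static_targets.append(ops_center)
edge.dynamic_targets.append(gateway)

edge.deliver({"type": "telemetry", "value": 7}, needed_now=True)
edge.deliver({"type": "event", "name": "belt-jam"}, needed_now=True)
```

Here the operations center, as a static target, receives both records, while the second gateway, although a dynamic target reached in both deliveries, keeps only the event because its own filter overrides the sender's selection.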

A gateway system of a warehouse may also be in communication with one or more operations centers, which may be located remotely from the warehouse (i.e., off-site). In some embodiments, however, the operations center(s) may be located on-site at the warehouse. Each of the warehouse systems of this disclosure may be implemented in a dedicated location, such as a server system, or may be implemented in a decentralized manner, for example, as part of a cloud system. The communication path between the gateway systems and the operations center(s) may be through satellite communications (e.g., SATCOM), cellular networks, Wi-Fi (e.g., IEEE 802.11 compliant), WiMAX (e.g., AeroMACS), optical fiber, and/or an air-to-ground (ATG) network, and/or any other communication links now known or later developed.

Subject matter will now be described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific exemplary embodiments. An embodiment or implementation described herein as “exemplary” is not to be construed as preferred or advantageous, for example, over other embodiments or implementations; rather, it is intended to reflect or indicate that the embodiment(s) is/are “example” embodiment(s). Subject matter may be embodied in a variety of different forms and, therefore, covered or claimed subject matter is intended to be construed as not being limited to any exemplary embodiments set forth herein; exemplary embodiments are provided merely to be illustrative. Likewise, a reasonably broad scope for claimed or covered subject matter is intended. Among other things, for example, subject matter may be embodied as methods, devices, components, or systems. Accordingly, embodiments may, for example, take the form of hardware, software, firmware, or any combination thereof (other than software per se). Furthermore, the methods presented in the drawings and the specification are not to be construed as limiting the order in which the individual steps may be performed. The following detailed description is, therefore, not intended to be taken in a limiting sense.

Throughout the specification and claims, terms may have nuanced meanings suggested or implied in context beyond an explicitly stated meaning. Likewise, the phrase “in one embodiment” as used herein does not necessarily refer to the same embodiment and the phrase “in another embodiment” or “in some embodiments” as used herein does not necessarily refer to a different embodiment. It is intended, for example, that claimed subject matter include combinations of exemplary embodiments in whole or in part.

FIG. 1 illustrates an exemplary warehouse and/or distribution center environment 100 with certain components, including delivery transportation 105 (e.g., a supply chain delivery truck) arriving to load into inventory 108. An operational control tower 112 may monitor and/or otherwise control operations 110 within environment 100. Operations 110 can be performed and/or managed by labor 109. Operations 110 can include loading 101 and assembly machines 107. Once assembled, packaged, and otherwise processed for distribution, transportation 116 (e.g., a freight truck) can be loaded by labor 109 and depart for its subsequent destination. The environment 100 is configured to optimize worker performance by selectively scheduling and assigning tasks and worker equipment, as discussed more particularly below. The term “worker” can be understood as a human, a non-human animal (e.g., a trained animal such as a dog), or any other asset that performs tasks at a job site (e.g., a robotic device).

FIG. 2 is a diagram of an architecture of a connected warehouse system 200 of this disclosure. System 200 can include enterprise performance management (EPM) control towers 210a-n, including components and databases such as, but not limited to, global operations, labor optimization, site operations, asset performance, and worker performance. System 200 can also include networked warehouse systems of record 220a-n, including components and databases such as, but not limited to, sites (e.g., locations, benchmarks, performance service level, etc.), labor (e.g., schedule, shifts, certification, skills, etc.), operations (e.g., plans, equipment, inventory type, throughput, etc.), assets (e.g., sortation, palletizers, robots, etc.), and workers (e.g., trends, profiles, task performance such as sorters, pickers, maintenance workers, etc.). EPM control towers 210a-n and networked warehouse systems of record 220a-n can reside in a cloud-based computing system 242 (e.g., a cloud computing network, one or more remote servers) and be communicatively coupled to a data transformation and integration layer 230.

System 242 may be communicatively coupled to an edge computing system 244. System 244 can be an edge computing system or node with a dedicated unit onsite at the work site (e.g., a factory, distribution center, warehouse, etc.). System 244 can be configured to process data and information from labor database 238, asset control systems 236 (e.g., components related to control of robots, material handling, etc.), and worker tasks database 232. Database 238 can include databases for warehouse management systems (WMS) and warehouse execution systems (WES).

Database 232 can include one or more telemetry components operatively coupled to features of distribution center environment 100 so as to process and transmit control information, generated by subscribing to incoming control information, for consumption by one or more controllers of system 240 over a network. Database 232 can be configured to: validate and modify incoming telemetry or attributes before saving to the database; copy telemetry or attributes from devices to related assets so that telemetry can be aggregated, e.g., data from multiple subsystems can be aggregated in a related asset; create, update, or clear alarms based on defined conditions; trigger actions based on edge life-cycle events, e.g., create alerts if a device is online/offline; load additional data required for processing, e.g., load a threshold value for a device that is defined in a user, device, and/or employee attribute; raise alarms/alerts when a complex event occurs and use attributes of other entities inside an email template; and/or consider user preferences during event processing. In some aspects, messages transmitted from database 232, such as triggers and/or alerts, can be configured for transmitting information to an end user (e.g., a site lead, crew in the control tower, etc.) for optimization purposes. System 200 can also be configured to detect near accidents or other near misses to build a trend model for early detection of anomalies before faults or malfunctions occur, increasing safety. In some aspects, the trend model can perform statistical analysis of worker trends, including assigned tasks and event datasets, to derive insights on worker performance considering the nature of work, skill set, criticality, labor intensity, etc. In some aspects, the trend model can classify data on a variety of key performance parameters to generate reports, dashboards, and insights that can be presented to users. In some aspects, the trend model can determine benchmarks based on statistics for type of task, skill set, geographical location, industry, etc., to enable performance-based assessment, incentives, and target setting for worker operations.
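The threshold-driven alarm handling described above (loading a threshold defined as a device attribute, then creating or clearing an alarm based on the condition) can be sketched as follows. This is an illustrative Python sketch under assumed names; the disclosure does not prescribe this representation.

```python
def process_telemetry(record, attributes, alarms):
    """Illustrative rule-style processing of an incoming telemetry record:
    load a threshold from a per-device attribute, then create an alarm when
    the condition is violated and clear it when the value recovers."""
    device = record["device"]
    # Load additional data required for processing (the device's threshold).
    threshold = attributes.get(device, {}).get("threshold", float("inf"))
    if record["value"] > threshold:
        alarms[device] = {"state": "ACTIVE", "value": record["value"]}
    elif alarms.get(device, {}).get("state") == "ACTIVE":
        alarms[device] = {"state": "CLEARED"}
    return alarms

# Hypothetical device attribute: a motor-temperature limit for a conveyor.
attributes = {"conveyor-3": {"threshold": 80.0}}
alarms = {}
process_telemetry({"device": "conveyor-3", "value": 91.2}, attributes, alarms)
state_after_fault = alarms["conveyor-3"]["state"]
process_telemetry({"device": "conveyor-3", "value": 62.0}, attributes, alarms)
state_after_recovery = alarms["conveyor-3"]["state"]
```

An alert raised this way could then be routed to an end user such as a site lead, consistent with the trigger/alert messaging described above.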

Database 232 can include mobile warehouse solutions focused on picking, sorting, and other such tasks. Database 232 can include maintenance and inspection components configured to provide one or more checklists with standard operating procedures (SOPs), maintenance processes, and the like. Database 232 can include guided work and voice maintenance and inspection components configured for situations in which hands-free work is required for employees to complete a task.

FIG. 3 is a flowchart illustrating a method 300 for optimizing operations of a job site. In step 310, the method can include providing visibility into real-time workforce productivity before an issue occurs. In step 320, the method can include viewing worker productivity by location across functional areas. In step 330, the method can include providing worker recommendations to return to a worker plan. In step 340, the method can include providing tools to reallocate workers, assign tasks, and react to unplanned events or disturbances as part of an event resolution or disruption mitigation plan. It is in this step that constraint modeling and task prediction are practiced as described in this disclosure. In step 350, the method can include measuring the impact of changes to make persistent improvement, via a learning model and an event or disturbance log and trend, toward an optimized job site.

Although FIG. 3 shows example blocks of exemplary method 300, in some implementations, the exemplary method 300 may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted in FIG. 3. Additionally, or alternatively, two or more of the blocks of the exemplary method 300 may be performed in parallel.

FIG. 4 is an exemplary user interface of a task overview dashboard 400. Optimizing task management begins with task prediction and an initial allocation of time and resources predicted to be most efficient absent constraints. The task overview dashboard includes operations data 410 for a facility, e.g., a warehouse, such as total tasks scheduled, high priority tasks, tasks to be performed, tasks completed, the resources required to complete each task, workers allocated to each task, and workers with idle time or availability to be allocated to a task. Furthermore, the task overview dashboard includes a sidebar 420 with a set of reports of incidents and events that may constrain previously scheduled tasks or affect the scheduling of upcoming tasks. Furthermore, telemetry data and other inputs may be used to provide more information about the operations data for the facility as described in FIG. 1 and in FIGS. 9-11 below.

The proposed task prediction and constraint modeling system includes an optimization model that provides advanced insights into task management. It predicts the optimum way to execute a task and also how to manage the task with known constraints. From the model, each task can be predicted for the earliest time by which it can be completed, the most cost-effective way to complete the task (with reduced resources and equipment), and the latest time to which the task can be delayed without impacting the schedule of overall operations. The proposed solution can update the predicted time, resource needs, and delayed start time based on the constraints introduced. For example, if a constraint is introduced that requires the task to be completed 30 minutes earlier than previously anticipated, the solution can predict and indicate the additional resources or equipment required to complete the task earlier than planned. This ability to predict the task accurately with respect to time and resources allows manual as well as automated scheduling systems to efficiently manage day-to-day operations.
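The three predictions described above (earliest completion, resource level needed for an earlier due time, and latest permissible start) can be illustrated with a simplified sketch. The disclosure does not specify the model's internals; the functions and numbers below are assumptions showing the kind of time/resource trade-off computed.

```python
import math

def earliest_finish(total_units, max_units_per_period):
    """Earliest completion: apply the maximum allowed resource every period."""
    return math.ceil(total_units / max_units_per_period)

def units_needed(total_units, target_periods):
    """Steady resource level needed to finish within target_periods, e.g.,
    when a new constraint pulls the due time earlier than planned."""
    return math.ceil(total_units / target_periods)

def latest_start(due_period, duration_periods):
    """Latest start that still meets the due period without impacting the
    overall schedule (i.e., the remaining slack)."""
    return due_period - duration_periods

# Hypothetical task: 14 resource-periods of work, at most 6 units per period.
duration = earliest_finish(14, 6)        # full-resource duration in periods
level_if_faster = units_needed(14, 2)    # level needed to finish one period sooner
slack_start = latest_start(10, duration) # latest start against a due period of 10
```

A full implementation would solve these jointly under many variables and constraints; the sketch only shows how tightening the due time raises the required resource level, as in the 30-minutes-earlier example above.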

FIG. 5 is an exemplary task estimate interface 500. For each task in the task overview dashboard, an estimate is provided by an optimization model as to the time required, resources required, equipment, and material. The model obtains the set of operations data for the facility, which includes the plurality of tasks to be completed, and a plurality of operational constraints, such as a trucking shortage or an incident on a warehouse floor, as described in FIG. 4. The system then generates the optimization model which calculates an original estimate, and a set of estimates that are directed to optimize a given resource or to optimize time.

As shown in FIG. 5, exemplary task 1 is estimated to be accomplished in five time periods T0-T4. Each time period may be set to, e.g., 30 minutes, such that the original estimate indicates that task 1 will be accomplished in 150 minutes. In the first 30-minute interval T0, task 1 is estimated to require six units of a resource, then only two units of the resource for the next 60 minutes (T1-T2), three units for T3, and one unit for the last 30 minutes at T4. For simplicity of illustration, the task requires a single resource, but the system and method can likewise be applied to a multi-resource task. The original estimate also allocates two forklifts to the task at T1, and one means of transport, such as a truck, to the task at T3. Furthermore, the original estimate indicates that personal protective equipment (PPE) will be required for the first 30 minutes of the task and packaging for the last 30 minutes. This original estimate is inserted into the task overview list. Any alteration of parameters or variables of the optimization model, however, may alter the estimate. The optimization model may be altered by a user via a worker computing device with an input/output interface, an example of which is shown in FIG. 7, or by a trained machine learning model built into the optimization model itself. Once the parameters or variables are changed, a new set of solutions is offered that incorporates the constraints. The set of solutions may include a new subset of solutions that comprise a time-efficient estimate 600A and a resource-efficient estimate 600B. These estimates are only exemplary, as the optimization model may output predictions optimizing a number of other factors, such as due date, total time to be allocated, number of workers, materials, energy use, etc.
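The per-period profile of the original estimate can be represented as simple data. The numbers below follow the FIG. 5 narrative (five 30-minute periods, resource units of 6, 2, 2, 3, and 1, forklifts at T1, transport at T3, PPE at T0, and packaging at T4); the dictionary structure itself is a hypothetical representation, not part of the disclosure.

```python
# Hypothetical encoding of the FIG. 5 original estimate for task 1.
original_estimate = {
    "period_minutes": 30,
    "resource_units": [6, 2, 2, 3, 1],  # units required at T0..T4
    "forklifts":      [0, 2, 0, 0, 0],  # two forklifts at T1
    "transport":      [0, 0, 0, 1, 0],  # one truck at T3
    "materials":      {"T0": "PPE", "T4": "packaging"},
}

# Derived quantities a scheduling device could read off this profile.
total_minutes = (len(original_estimate["resource_units"])
                 * original_estimate["period_minutes"])
peak_units = max(original_estimate["resource_units"])
total_units = sum(original_estimate["resource_units"])
```

The derived 150-minute duration matches the estimate stated above, and the peak and total unit counts are the inputs a re-solve would redistribute when constraints change.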

FIG. 6A is a first exemplary optimized task estimate interface showing the time-efficient estimate 600A. After the parameters have been changed by a user or a machine learning model, the optimization model outputs a time-efficient estimate indicating that the task can be completed 30 minutes earlier, but would require four more resources than the original estimate shown in FIG. 5. Furthermore, the means of transport will be required 30 minutes earlier than in the original estimate. FIG. 6B is a second exemplary optimized task estimate interface showing a resource-efficient estimate 600B. This estimate indicates that conserving resource use would require a one-hour delay. These estimates may be passed on to a site leader or supervisor for determination of the task schedule. Furthermore, if the supervisor is not satisfied with any of the estimates, they may continue altering the parameters to generate new sets and subsets of estimates until a satisfactory allocation of time and resources is achieved in view of the required constraints. Furthermore, a supervisor may input hypothetical constraints and parameters to anticipate potential or expected constraints that have not yet come to pass. This is done using an input/output interface as part of a computing device connected to the system.
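Producing a time-efficient versus a resource-efficient alternative from the same total work can be sketched as follows. The functions and the 14-unit/5-period figures are illustrative assumptions (a level-loaded simplification, not the disclosed model); only the one-hour-delay outcome is tied to the FIG. 6B description.

```python
import math

def time_efficient(total_units, original_periods, periods_saved):
    """Re-solve for an earlier finish: returns (periods, units per period)
    needed to absorb the same work in fewer periods."""
    periods = original_periods - periods_saved
    return periods, math.ceil(total_units / periods)

def resource_efficient(total_units, peak_cap):
    """Re-solve under a resource cap: returns (periods, units per period);
    any extra periods appear as a delay in the schedule."""
    return math.ceil(total_units / peak_cap), peak_cap

# Illustrative numbers: 14 units of work over five 30-minute periods.
faster = time_efficient(14, 5, 1)    # finish 30 minutes (one period) earlier
leaner = resource_efficient(14, 2)   # cap the peak at 2 units per period
delay_minutes = (leaner[0] - 5) * 30 # extra periods expressed as a delay
```

Under these assumed numbers, capping the resource level at two units stretches the task to seven periods, i.e., the one-hour delay of the resource-efficient estimate; a supervisor could iterate over such parameters until a satisfactory trade-off emerges.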

FIG. 7 depicts an example user interface 710 of an example computing device 722. As seen, via user interface 710 one or more tasks can be assigned, created, and/or otherwise communicated to one or more users (e.g., crew members). Such notifications related to a newly assigned task, or feedback related to an already-assigned task, can include information controls for users to accept, snooze, and/or otherwise interact with a respective task (e.g., propose or execute modifications to a task, work plan, and/or the like). Specifically, user interface 710 can be used to generate real-time task instructions for employees (e.g., crew members) or any related user based on operations feedback, including human and analytics feedback related to one or more work sites. As can be seen, interface 710 can support automatically and/or manually generated tasks with task-related information, such as a template(s) for task creation, a work site location (e.g., zone 1, zone 2, etc.), a worker pulldown menu (e.g., team 1, team 2, individual 1, individual 2, etc.), and a priority pulldown menu (e.g., move to top, objective categorization of a task such as urgent, non-urgent, etc.). In some aspects, user interface 710 can be used to oversee worker execution of a work-related plan (e.g., a daily plan, a weekly plan, a monthly plan, a quarterly plan, etc.) so as to remain present to advise on and address issues that prevent employees from completing tasks. In some aspects, user interface 710 is used to optimize workplace performance by automatically assigning and/or scheduling the appropriate tasks for the appropriate employee at the appropriate time (e.g., based on one or more relationships determined as between detected criteria such as employee skills, availability, experience, history, and/or the like).

FIG. 8 is a flowchart illustrating a method 800 for managing unplanned tasks that may be added via a task creation template (e.g., tasks of job site(s), area(s) of job site(s), employee(s), group(s) of employees, etc.). In step 810, the method can include viewing, by an employee user (e.g., a ramp agent), a list of tasks for a shift (e.g., an upcoming shift). In step 820, the method can include presenting an assigned first task to the user, the assigned task being unexpected (e.g., a tug operator employee can be inspecting a tug and then receive a first task). In step 830, the method can include the employee completing a first subtask (e.g., arriving at a job site associated with the assigned task) and updating the status of the assigned task based on a status of the first subtask (e.g., the employee has arrived at the job site). In some aspects, the tug operator employee can arrive at an airplane (e.g., the job site) and the status of the first subtask can be that the tug operator employee has arrived at the airplane. The status can be automatically updated and/or communicated based on information detected or tracked from the computing device of the employee (e.g., GPS data automatically transmitted from a location tracker of the computing device of the employee). In some aspects, the status can be manually updated and/or communicated (e.g., the employee can manually enter into a computing device that she has arrived at the job site).

In step 840, the method can include the employee completing a second subtask (e.g., arriving at a second job site associated with the assigned task) and updating the status of the assigned task based on a status of the second subtask (e.g., the employee has arrived at the second job site to sort). In some aspects, the tug operator employee can return with a load from the first job site and the status of the second subtask can be that the tug operator employee has returned from the airplane with the load for sorting or that the load has already been sorted. The status of the second subtask can be automatically updated and/or communicated based on data of the computing device of the employee and/or any items associated with the second subtask (e.g., GPS data automatically transmitted from the computing device of the employee, tracking information of any items associated with the second subtask, etc.). In some aspects, the status can be manually updated and/or communicated (e.g., the employee can manually enter into a computing device that she has returned, that the load has been sorted, etc.). In some aspects, task updates can be semi-automated and/or automated based on input from one or more feedback mechanisms such as voice input, scanning, device usage, network activity, location-based events, visual recognition events, etc.

In some aspects, completion of the first and second subtasks can automatically mark the assigned task as being completed. In this respect, in step 850, the method can include upon completion of the first assigned task, automatically assigning a second assigned task to the employee (e.g., the tug operator employee receives a new task since the aforementioned load has been retrieved from the airplane, sorted, and returned).
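Steps 830 through 850 can be sketched as follows. This is a minimal, hypothetical illustration in Python: the `Task` class, its field names, and the trigger comments are assumptions for the example, not part of the disclosed system. It shows how completing all subtasks can automatically mark the assigned task complete, at which point a next task may be assigned.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of steps 830-850: a task auto-completes once all
# of its subtasks are done.
@dataclass
class Task:
    name: str
    subtasks: dict = field(default_factory=dict)  # subtask name -> done?

    def update_subtask(self, subtask: str, done: bool = True) -> None:
        # In the described system this update could be triggered
        # automatically (GPS, scanning) or entered manually.
        self.subtasks[subtask] = done

    @property
    def completed(self) -> bool:
        return bool(self.subtasks) and all(self.subtasks.values())

task = Task("retrieve load", {"arrive_at_airplane": False, "return_to_sort": False})
task.update_subtask("arrive_at_airplane")  # e.g., triggered by GPS data
task.update_subtask("return_to_sort")      # e.g., triggered by an item scan
print(task.completed)  # True -> a second task can now be auto-assigned
```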

In step 860, the method can include viewing, by a second employee (e.g., an employee other than the tug operator such as a ramp agent), a real-time status of all other employees of a team associated with the first employee (e.g., other tug operators of the first tug operator's team).

In step 870, the method can include reviewing, by a third employee (e.g., an employee who is a manager or OPS lead other than the tug operators), a real-time status of all task operations of the job site and employee task performance metrics.

FIG. 9 is a diagram of architecture associated with a connected warehouse system 900 of this disclosure. System 900 can include workforce analytic modules 915, including but not limited to modules for dynamic work allocation, real-time worker performance metrics, worker satisfaction, etc. Workforce analytic modules 915 can also include one or more worker performance dashboards 923 and improvement recommendations 925. Improvement recommendations 925 can be for training, rewarding, coaching, engagement, etc. opportunities to maximize worker retention, performance, and overall work operations.

In certain aspects, worker performance dashboards 923 and improvement recommendations 925 can be updated (e.g., in real-time) by a system 917 of record for worker activities and performance. System 917 can be in communication with workforce analytic modules 915. System 917 can improve scheduled worker productivity via labor management module 910 and planning systems module 920. Specifically, management module 910 can include one or more discrete components (e.g., components to manage manufacturing operations management (MOM) labor, 3rd party activities, as well as homegrown activities) that in real-time communicate with a comprehensive data model of system 917. The comprehensive data model of system 917 can include a plan performance module bi-directionally coupled to labor management module 910. The comprehensive data model of system 917 can also include modules with digital task performance and task-level granularity. In some aspects, the plan performance module can include a database of worker digital task performance and task-level granularity (e.g., showing discrete subtasks of a task or granular performance metrics of a respective worker task).

In practice, a layer 926 for identifying and reporting adverse conditions can be included in system 917. Layer 926 can include an asset performance manager (APM) as well as systems to manage work orders. In some aspects, layer 926 can include an operation intel manager and trouble-found reporting system that collectively work to enable layer 926 to communicate with aspects of assignment layer 924 downstream thereof. Layer 926 can include a plan system bidirectionally coupled to planning systems module 920, including but not limited to warehouse management systems (WMS), third party systems, and the like. The operation intel manager and trouble-found reporting system of layer 926 can communicate with digital task creation and digital task assignment systems of assignment layer 924. Assignment layer 924 in turn can communicate with aspects of execution layer 922 downstream thereof.

Layer 922 can include or be coupled to one or more mobile devices (e.g., mobile devices of users and/or personnel associated therewith including employees, managers, and personnel of third parties). Layer 922 can also include guided work software (GWS) systems. In some aspects, the digital task creation and digital task assignment systems of assignment layer 924 can be in communication with the mobile devices of layer 922 as well as a digital task execution system of layer 922. In some examples, mobile devices of layer 922 as well as a digital task execution system of layer 922 can communicate with the task level granularity system, the plan performance system, and digital task performance system of the comprehensive data model of system 917 to dynamically update worker performance dashboard 923 and improvement recommendations 925.

FIG. 10 is a diagram of architecture of a connected warehouse system 1000 of this disclosure. System 1000 can be a multi-layered system including an applications layer 1010, a platform services layer 1020, a common services layer 1052a-n, a standards and processes layer 1054a-n, a connectivity services layer 1040, a data sources layer 1048a-n, and an enterprise systems layer 1050a-n.

Applications layer 1010 can include a plurality of components such as applications for portfolio operations, site operations, asset performance management, predictive asset maintenance, asset health management, asset maintenance optimization, downtime reporter, instrument asset management, vertical specific extension, and worker performance.

Platform services layer 1020 can be in communication with applications layer 1010 and include a plurality of system components, including domain services 1022a-n, application services 1024a-n, data services 1026a-n, managed storage 1028a-n, and data ingestion 1030a-n. Domain services 1022a-n can include modules and/or components for asset model service, asset digital service, asset key performance indicator (KPI) service, event management service, asset data service, asset annotation service, downtime management service, asset analytics service, task/activity service, and people worker service. Preferably, domain services 1022a-n include asset analytics service systems, task/activity service systems, and people worker service systems.

Application services 1024a-n can include modules and/or components for portal navigation service, dashboard builder, report writer, content search, analytics workbench, notification service, execution scheduler, event processing, rules engine, business workflow services, analytics model services, and location services. Some or all of components of application services 1024a-n can be in communication with applications of layer 1010.

Data services 1026a-n can include modules and/or components for time series, events, activities and states, configuration model, knowledge graph, data search, data dictionary, application settings, and personal identifying information (PII) services. Managed storage services 1028a-n can include databases for time series, relational, document, blob storage, graph databases, file systems, real-time analytics databases, batch analytics databases, and data caches. Data ingestion services 1030a-n can include modules and/or components for device registration, device management, telemetry, command and control, data pipeline, file upload/download, data prep, messaging, and IoT V3 connector.

Connectivity services layer 1040 can include edge services 1042a-n, edge connectors 1044a-n, and enterprise integration 1046a-n. Edge services 1042a-n can include modules and/or components for connection management, device management, edge analytics, and execution runtime. Edge connectors 1044a-n can include OPC unified architecture (OPC UA), file collectors, and domain connectors. Enterprise integration 1046a-n can include modules and/or components for streaming, events, and/or files. Data sources layer 1048a-n can include modules and/or components for streaming, events, and/or files, as well as time series.

In some aspects, common services 1052a-n can include one or more API gateways as well as components for logging and monitoring, application hosting, identity management, access management, tenant management, entitlements catalogues, licensing, metering, subscription billing, user profiles, and/or secret store.

In some aspects, standards and processes 1054a-n can include one or more UX libraries as well as components for cybersecurity, IP protection, data governance, usage analytics, tenant provisioning, localization, app lifecycle management, deployment models, mobile app development, and/or marketplace.

FIG. 11 depicts a schematic block diagram of a framework of a platform of a connected warehouse system 1100. System 1100 can include an asset management system 1110, operations management system 1112, worker insights and task management system 1114, and configuration builder system 1116. Each of systems 1110, 1112, 1114, and 1116 can be in communication with API 1120, whereby API 1120 can be configured to read/write tasks and events, and otherwise coordinate work with workers of system 1100. API 1120 can include a task monitoring engine configured to track status, schedule, and facilitate task creation. API 1120 can present or otherwise be accessed via a worker mobile application (e.g., a graphical user interface on a computing device) to similarly present and manage operations related to tasks, events, and asset information.

API 1120 can be in communication with model store 1126, whereby model store 1126 can include models such as worker models, asset models, operational models, task models, event models, workflow models, and the like. API 1120 can be in communication with time series databases 1124a-n and transaction databases 1122a-n. Time series databases 1124a-n can include knowledge databases, graph databases, as well as extensible object models (EOMs). Transaction databases 1122a-n can include components and/or modules for work orders, labor, training data, prediction results, events, fault, costs, reasons, status, tasks, events, and reasons.

Each of databases 1124a-n, 1122a-n can be in communication with analytics model 1134, which can be a machine learning model to effectively process, analyze, and classify operations of system 1100. Model 1134 can be a trained machine learning system having been trained using a learned set of parameters to predict one or more learned performance parameters of system 1100. Learned parameters can include but are not limited to predictive asset maintenance of a connected warehouse, asset health management, asset maintenance optimization, worker downtime reporter, instrument asset management, vertical specific extension, and worker performance. One or more corrective actions can be taken in response to predictions rendered by model 1134. Model 1134 can be trained with a regression loss (e.g., mean squared error loss, Huber loss, etc.), and for binary index values it may be trained with a classification loss (e.g., hinge, log loss, etc.). Machine learning systems that may be trained include, but are not limited to, a convolutional neural network (CNN) trained directly with the appropriate loss function, a CNN with layers with the appropriate loss function, a capsule network with the appropriate loss function, a Transformer network with the appropriate loss function, multiple instance learning with a CNN (for a binary resistance index value), multiple instance regression with a CNN (for a continuous resistance index value), etc.
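As one minimal, hypothetical illustration of training with a regression loss, the following Python snippet fits a single-parameter linear model by gradient descent on a mean squared error loss. The data and the one-parameter model are stand-ins and are far simpler than the networks listed above.

```python
# Hypothetical sketch: gradient descent on a mean squared error loss.
# (feature, target) pairs; the true relationship here is y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0   # single learned parameter (slope)
lr = 0.05  # learning rate
for _ in range(200):
    # Gradient of the MSE loss: d/dw mean((w*x - y)^2) = mean(2*(w*x - y)*x)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # converges toward 2.0, the slope minimizing the MSE
```

A classification loss (e.g., log loss) would follow the same pattern with a different loss gradient and a thresholded prediction.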

In certain aspects, databases 1124a-n and 1122a-n can operate together to perform exception event detection 1128. Exception event detection 1128 can utilize data from one or more data sources to detect low limit violations, fault symptoms, KPI target deviations, etc. In certain aspects of exception event detection 1128, a data ingestion pipeline 1136 and enterprise integration framework 1138 can exchange information for energy and emission calculations per asset/units of system 1100. Pipeline 1136 can utilize contextual data and data preprocessing while framework 1138 can include extensible integration service with standard and customer connectors.
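Exception event detection of this kind can be sketched as a simple threshold check. The function below is a hypothetical illustration (the sensor names, limits, and targets are invented for the example), not the disclosed detection pipeline.

```python
# Hypothetical sketch of exception event detection 1128: flag readings
# that violate a low limit or deviate too far from a KPI target.
def detect_exceptions(readings, low_limit, kpi_target, tolerance):
    events = []
    for name, value in readings.items():
        if value < low_limit:
            events.append((name, "low_limit_violation"))
        elif abs(value - kpi_target) > tolerance:
            events.append((name, "kpi_target_deviation"))
    return events

# Invented example readings (units per hour).
readings = {"conveyor_throughput": 42.0, "picker_rate": 97.0, "dock_rate": 80.0}
print(detect_exceptions(readings, low_limit=50.0, kpi_target=90.0, tolerance=10.0))
# -> [('conveyor_throughput', 'low_limit_violation')]
```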

In certain aspects, an IoT gateway 1140 can be communicatively coupled to pipeline 1136. IoT gateway 1140 can be communicatively coupled to IoT devices 1154 such as sensors 1158a-n, including leak detection sensors, vibration sensors, process sensors, and/or the like. IoT gateway 1140 can also be in communication with data historian 1156 including historical data related to the warehouse.

Framework 1138 can be in communication with event manager modules 1142a-n, including workflow module, work order integration module, worker performance module, asset event module, and the like. For events, the workflow module can be configured to bidirectionally communicate with framework 1138 and components of process workflow data 1152a-n, including Process Safety Suite (PSS) maintenance and inspection (M&I) and PSS GWS. For event streaming, work order integration module and worker performance module can both be configured to bidirectionally communicate with framework 1138 and labor management systems (LMS) 1150. In some aspects, for event streaming asset event module can also be configured to bidirectionally communicate with PSS operational intelligence systems 1146 and framework 1138. PSS operational intelligence systems 1146 in turn can be cloud-based and/or on premises and be in bidirectional communication with devices 1148a-n, including voice devices, mobility devices, hand-held devices, printers, scanners, and/or the like. Framework 1138 can also be in communication with start talk module 1144 for corresponding API and event control.

In aspects of system 1100, pipeline 1136 and framework 1138 work together to perform step 1132 to calculate energy and emission calculations for assets and/or associated units. Model 1134 can be used in performing step 1132 as well as other native and/or external models connected therewith, whereby step 1132 can utilize data received from pipeline 1136 and framework 1138.

Upon completing step 1132, key performance monitoring calculations can be performed in step 1130. Step 1130 can be performed based on energy and emission calculations from step 1132 by aggregating and rollup across one or multiple reporting periods. Upon performing step 1130, the aforementioned event exception detection step 1128 can be performed to detect exception events. In some aspects, step 1128 can be performed based on the key performance monitoring calculations of step 1130.
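The aggregation and rollup of step 1130 can be sketched as follows; the per-period energy figures and reporting periods below are hypothetical placeholders for the per-asset calculations of step 1132.

```python
from statistics import mean

# Hypothetical sketch of step 1130: roll per-period energy figures up
# across reporting periods. Values are invented kWh readings per asset.
period_kwh = {"week_1": [120.0, 135.0], "week_2": [110.0, 115.0]}

weekly_totals = {week: sum(vals) for week, vals in period_kwh.items()}
rollup = {
    "total_kwh": sum(weekly_totals.values()),
    "mean_weekly_kwh": mean(weekly_totals.values()),
}
print(rollup)  # {'total_kwh': 480.0, 'mean_weekly_kwh': 240.0}
```

The rollup values would then feed exception event detection in step 1128 (e.g., comparing `mean_weekly_kwh` against a KPI target).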

FIG. 12A is a diagram of data flow 1200 of a connected warehouse system, including one with connected worker and performance management (EPM) service systems. In step 1204, an operator and/or engineer may use a computing device 1206 to manage system performance through a user interface (e.g., a web-based or browser-based application) using system gateway 1210, which can be cloud-based. In step 1202, a user (e.g., worker, manager, and/or the like) may use an app on a computing device 1208 (e.g., a mobile device such as a tablet or smart phone, or any personal computing device) via an API to communicate and exchange data with gateway 1210.

Warehouse system services 1212a-n can be configured in communication with gateway 1210 (e.g., to receive data from gateway 1210 from steps 1202 and 1204). Services 1212a-n can be configurable to communicate and/or update in real-time functions such as identity and access management (IAM), system extensible object model (EOM), notifications, fire and gas instrumented function (FIF), etc. Performance management system 1214a-n can be configured to transmit data to warehouse system services 1212a-n while receiving data from LMS 1216. Based on said data from LMS 1216, real-time adjustments can be determined for a labor management plan associated with the warehouse and/or workers. In some aspects, the labor management plan can be updated by system 1214a-n being in bidirectional communication with gateway 1210. System 1214a-n can include or otherwise be in communication with corresponding web apps, asset performance management (APM) services, connected worker services, LMS integration applications, site operation services, and global operation services. System 1214a-n can be connected to one or more cloud-based databases (e.g., SQL DB 1216). One or more components of system 1214a-n can be part of computing devices and/or sensors associated with workers connected to the system.

LMS 1216 can be configured to control labor costs, track performance, and predict one or more parameters associated with performance (e.g., project fulfillment execution) and transmit and/or otherwise present such information in LMS system integration applications (e.g., using FIF). In turn, system 1214a-n can be configured to consume data from LMS 1216, gateway 1210, devices 1208 and 1206, and services 1212a-n to deliver one or more inferences to end users (e.g., one or more actions that the end user, or a corresponding employee or employees associated with one or more tasks, can take) to result in changing a warehouse operation, such as warehouse operation savings. Warehouse operation savings can be directed towards safety, maintenance, performance, resource conservation, deliverable management, inventory management, etc. An actionable update (e.g., a sync) may then be made to data flow 1200.

FIG. 12B is a diagram of data flow 1200′ of a connected warehouse system. In addition to previous steps 1202 and 1204, data flow 1200′ provides step 1201 in which a system administrator and/or application engineer may manage system performance through a user interface (e.g., a web-based or browser-based application) using system gateway 1210, which can be cloud-based. In data flow 1200′, one or more services of services 1212a-n (e.g., such as the notifications module) can push messages or otherwise push notify (e.g., notification via webhook) from services 1212a-n to device 1208. In some aspects, data flow 1200′ provides that performance management system 1214a-n can receive data from LMS 1216 and one or more third party systems 1217. Based on said data from LMS 1216 and one or more third party systems 1217, real-time adjustments can be determined for a labor management plan associated with the warehouse and/or workers. In some aspects of data flow 1200′, the labor management plan can be updated by system 1214a-n being in bidirectional communication with gateway 1210.

Aspects of FIGS. 1-12B are advantageous for measuring worker assignment/task progress in contextually relevant dimensions, visualizing that progress in real time, and alerting users (e.g., supervisor(s) and/or stakeholder(s)) upon identifying anomalous trend deviations in worker KPI rates.

Various embodiments of the present disclosure (e.g., edge systems, gateway systems, operations centers, remote systems, warehouse systems, connected worker systems, etc.), as described above with reference to FIGS. 1-12B may be implemented using device 1300 in FIG. 13. After reading this description, it will become apparent to a person skilled in the relevant art how to implement embodiments of the present disclosure using other computer systems and/or computer architectures. Although operations may be described as a sequential process, some of the operations may in fact be performed in parallel, concurrently, and/or in a distributed environment, and with program code stored locally or remotely for access by single or multi-processor machines. In addition, in some embodiments the order of operations may be rearranged without departing from the spirit of the disclosed subject matter.

As shown in FIG. 13, device 1300 may include a central processing unit (CPU) 1320. CPU 1320 may be any type of processor device including, for example, any type of special purpose or a general purpose microprocessor device. As will be appreciated by persons skilled in the relevant art, CPU 1320 also may be a single processor in a multi-core/multiprocessor system, such system operating alone, or in a cluster of computing devices operating in a cluster or server farm. CPU 1320 may be connected to a data communication infrastructure 1310, for example, a bus, message queue, network, or multi-core message-passing scheme.

Device 1300 may also include a main memory 1340, for example, random access memory (RAM), and may also include a secondary memory 1330. Secondary memory 1330, e.g., a read-only memory (ROM), may be, for example, a hard disk drive or a removable storage drive. Such a removable storage drive may comprise, for example, a floppy disk drive, a magnetic tape drive, an optical disk drive, a flash memory, or the like. The removable storage drive in this example reads from and/or writes to a removable storage unit in a well-known manner. The removable storage unit may comprise a floppy disk, magnetic tape, optical disk, etc., which is read by and written to by the removable storage drive. As will be appreciated by persons skilled in the relevant art, such a removable storage unit generally includes a computer usable storage medium having stored therein computer software and/or data.

In alternative implementations, secondary memory 1330 may include other similar means for allowing computer programs or other instructions to be loaded into device 1300. Examples of such means may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, and other removable storage units and interfaces, which allow software and data to be transferred from a removable storage unit to device 1300.

Device 1300 may also include a communications interface (“COM”) 1360. Communications interface 1360 allows software and data to be transferred between device 1300 and external devices. Communications interface 1360 may include a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, or the like. Software and data transferred via communications interface 1360 may be in the form of signals, which may be electronic, electromagnetic, optical, or other signals capable of being received by communications interface 1360. These signals may be provided to communications interface 1360 via a communications path of device 1300, which may be implemented using, for example, wire or cable, fiber optics, a phone line, a cellular phone link, an RF link or other communications channels.

The hardware elements, operating systems and programming languages of such equipment are conventional in nature, and it is presumed that those skilled in the art are adequately familiar therewith. Device 1300 also may include input and output ports 1350 to connect with input and output devices such as keyboards, mice, touchscreens, monitors, displays, etc. Of course, the various server functions may be implemented in a distributed fashion on a number of similar platforms, to distribute the processing load. Alternatively, the servers may be implemented by appropriate programming of one computer hardware platform.

The systems and methods of this disclosure can be cloud-based, multi-tenant solutions configured to deliver optimized work instructions tailored for specific vertical workflows, utilizing an easy-to-deploy, scalable, and configurable data model and software suite to deliver performance insights and improve worker productivity.

The disclosure provides one or more user interface systems for smart worker performance scoring and evaluation of a job site (e.g., one or more warehouses), in which information from sensors and/or connected worker computing devices may provide dynamic data about job performance (e.g., productivity of worker(s), task productivity, production productivity, etc.), a processor and database(s) receive and process the dynamic data, and a program aggregates and analyzes the dynamic data for one or more categories of worker performance. The data analysis may determine performance scores for each of the one or more performance categories, and calculate an overall worker performance score. The worker performance score for each category of this disclosure may be displayed on a dashboard and/or related scorecards. In some aspects, one or more functions are used to calculate scores (e.g., assigning a coefficient factor to values of categories such as time on task, time between tasks, number of tasks completed, idle state, etc.). The coefficient factor may be determined from a comparison value based on some predetermined standard and/or worker performance historical data of the one or more categories. Any of the herein disclosed dashboards and related user interfaces may present worker performance scores and related details of the dynamic data for detecting and solving worker performance issues (e.g., recommended corrective actions) without changing the dashboard or the monitor.
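The coefficient-based scoring described above can be sketched as a weighted sum. The categories, coefficient factors, and score values below are hypothetical illustrations; a real deployment would derive them from predetermined standards or historical data as described.

```python
# Hypothetical sketch: each category score is multiplied by a coefficient
# factor and the products are summed into an overall performance score.
coefficients = {
    "time_on_task": 0.4,
    "tasks_completed": 0.4,
    "idle_state": 0.2,  # assumed already inverted so higher is better
}

def overall_score(category_scores, coefficients):
    # Weighted sum of per-category scores (each assumed normalized 0-100).
    return sum(coefficients[c] * category_scores[c] for c in coefficients)

scores = {"time_on_task": 90.0, "tasks_completed": 80.0, "idle_state": 70.0}
print(round(overall_score(scores, coefficients), 2))  # 82.0
```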

The worker performance scores of this disclosure can include numerous scores and sub-scores, including performance scores, environmental scores related to the job site and/or areas of a job site (e.g., utility consumption, carbon footprint, emissions, etc.), health scores, safety scores, maintenance scores, job site asset scores, happiness scores, etc. Such scores are also advantageous for use in using trained machine learning models to predict performance impacts depending on trends of all such scores of this disclosure.

Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims

1. A computer-implemented method of task scheduling and process control, comprising:

obtaining, by a system comprising at least one processor, a set of operations data for a facility, the set of operations data including a plurality of tasks scheduled to be completed, and a plurality of operational constraints;
generating, by the system, an optimization model based on the set of operations data, wherein the optimization model defines a plurality of variables corresponding to operations of the facility and the plurality of operational constraints;
calculating, by the system using the optimization model, a set of solution values, wherein each of the set of solution values corresponds to one or more of the plurality of variables; and
transmitting the set of solution values to a scheduling device configured to generate a task schedule for the facility.

2. The method of claim 1, wherein the system includes a user input/output interface, and

the obtaining, by a system comprising at least one processor, a set of operations data for a facility comprises a user inputting one or more of the plurality of operational constraints via the user input/output interface;
the generating, by the system, an optimization model is based at least in part on the one or more of the plurality of operational constraints input by the user; and
the calculating, by the system using the optimization model, a set of solution values includes a first subset of solution values for a first operational constraint input by the user, and a second subset of solution values for a second operational constraint input by the user.

3. The method of claim 2, wherein the task schedule generated by the scheduling device is displayed on the user input/output interface.

4. The method of claim 2, wherein the operational constraints comprise time constraints, resource constraints, and labor constraints, wherein:

the time constraints include: a due date for a task and a time duration to complete the task;
the resource constraints include equipment availability and material availability; and
the labor constraints include availability of first workers with a first skillset and second workers with a second skillset.

5. The method of claim 4, wherein altering the value of one of the operational constraints alters the optimization model and an altered set of solution values is calculated.

6. The method of claim 1, wherein the system includes a machine learning model, and

the obtaining, by a system comprising at least one processor, a set of operations data for a facility comprises the machine learning model generating one or more of the plurality of operational constraints based on historical models;
the generating, by the system, an optimization model is based at least in part on the one or more of the plurality of operational constraints generated by the machine learning model; and
the calculating, by the system using the optimization model, a set of solution values includes a first subset of solution values for a first operational constraint generated by the machine learning model, and a second subset of solution values for a second operational constraint generated by the machine learning model, wherein the operational constraints comprise time constraints, resource constraints, and labor constraints, and wherein:
the time constraints include: a due date for a task and a time duration to complete the task;
the resource constraints include equipment availability and material availability; and
the labor constraints include availability of first workers with a first skillset and second workers with a second skillset.

7. The method of claim 6, wherein altering the value of one of the operational constraints alters the optimization model and an altered set of solution values is calculated.

8. A non-transitory computer readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform a method of task scheduling and process control, comprising:

obtaining a set of operations data for a facility, the set of operations data including a plurality of tasks scheduled to be completed, and a plurality of operational constraints;
generating an optimization model based on the set of operations data, wherein the optimization model defines a plurality of variables corresponding to operations of the facility and the plurality of operational constraints;
calculating, using the optimization model, a set of solution values, wherein each of the set of solution values corresponds to one or more of the plurality of variables; and
transmitting the set of solution values to a scheduling device configured to generate a task schedule for the facility.

9. The non-transitory computer readable medium of claim 8, further including a user input/output interface, and

the obtaining a set of operations data for a facility comprises a user inputting one or more of the plurality of operational constraints via the user input/output interface;
the generating an optimization model is based at least in part on the one or more of the plurality of operational constraints input by the user; and
the calculating, using the optimization model, a set of solution values includes a first subset of solution values for a first operational constraint input by the user, and a second subset of solution values for a second operational constraint input by the user.

10. The non-transitory computer readable medium of claim 9, wherein the task schedule generated by the scheduling device is displayed on the user input/output interface.

11. The non-transitory computer readable medium of claim 9, wherein the operational constraints comprise time constraints, resource constraints, and labor constraints, wherein:

the time constraints include: a due date for a task and a time duration to complete the task;
the resource constraints include equipment availability and material availability; and
the labor constraints include availability of first workers with a first skillset and second workers with a second skillset.

12. The non-transitory computer readable medium of claim 11, wherein altering the value of one of the operational constraints alters the optimization model and an altered set of solution values is calculated.

13. The non-transitory computer readable medium of claim 8, further including a machine learning model, and

the obtaining a set of operations data for a facility comprises the machine learning model generating one or more of the plurality of operational constraints based on historical models;
the generating an optimization model is based at least in part on the one or more of the plurality of operational constraints generated by the machine learning model; and
the calculating, using the optimization model, a set of solution values includes a first subset of solution values for a first operational constraint generated by the machine learning model, and a second subset of solution values for a second operational constraint generated by the machine learning model, wherein the operational constraints comprise time constraints, resource constraints, and labor constraints, and wherein:
the time constraints include: a due date for a task and a time duration to complete the task;
the resource constraints include equipment availability and material availability; and
the labor constraints include availability of first workers with a first skillset and second workers with a second skillset.

14. The non-transitory computer readable medium of claim 13, wherein altering the value of one of the operational constraints alters the optimization model and an altered set of solution values is calculated.

15. A system for task scheduling and process control, comprising a processor configured to:

obtain a set of operations data for a facility, the set of operations data including a plurality of tasks scheduled to be completed, and a plurality of operational constraints;
generate an optimization model based on the set of operations data, wherein the optimization model defines a plurality of variables corresponding to operations of the facility and the plurality of operational constraints;
calculate, using the optimization model, a set of solution values, wherein each of the set of solution values corresponds to one or more of the plurality of variables; and
transmit the set of solution values to a scheduling device configured to generate a task schedule for the facility.

16. The system of claim 15, further comprising a user input/output interface, and wherein:

the obtaining a set of operations data for a facility comprises a user inputting one or more of the plurality of operational constraints via the user input/output interface;
the generating an optimization model is based at least in part on the one or more of the plurality of operational constraints input by the user; and
the calculating, using the optimization model, a set of solution values includes a first subset of solution values for a first operational constraint input by the user, and a second subset of solution values for a second operational constraint input by the user.

17. The system of claim 16, wherein the task schedule generated by the scheduling device is displayed on the user input/output interface.

18. The system of claim 16, wherein the operational constraints comprise time constraints, resource constraints, and labor constraints, wherein:

the time constraints include: a due date for a task and a time duration to complete the task;
the resource constraints include equipment availability and material availability; and
the labor constraints include availability of first workers with a first skillset and second workers with a second skillset.

19. The system of claim 18, wherein altering the value of one of the operational constraints alters the optimization model and an altered set of solution values is calculated.

20. The system of claim 15, wherein the system includes a machine learning model, and

the obtaining a set of operations data for a facility comprises the machine learning model generating one or more of the plurality of operational constraints based on historical models;
the generating an optimization model is based at least in part on the one or more of the plurality of operational constraints generated by the machine learning model; and
the calculating, using the optimization model, a set of solution values includes a first subset of solution values for a first operational constraint generated by the machine learning model, and a second subset of solution values for a second operational constraint generated by the machine learning model, wherein the operational constraints comprise time constraints, resource constraints, and labor constraints, and wherein:
the time constraints include: a due date for a task and a time duration to complete the task;
the resource constraints include equipment availability and material availability; and
the labor constraints include availability of first workers with a first skillset and second workers with a second skillset.
Patent History
Publication number: 20240112108
Type: Application
Filed: Mar 3, 2023
Publication Date: Apr 4, 2024
Inventors: Kalimulla KHAN (Morris Plains, NJ), Srihari JAYATHIRTHA (Bangalore), Wade LINDSEY (Canton, GA), Venkata Pradeep KOLLA (Cumming, GA)
Application Number: 18/178,465
Classifications
International Classification: G06Q 10/0631 (20060101);