Patents by Inventor Eric J. Hartman
Eric J. Hartman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11951509
Abstract: An applicator head for a vacuum coating system includes a manifold shell having opposing shell plates, each including a conduit attachment coupled to a shell aperture. An applicator manifold is affixed to each shell plate. Each applicator manifold includes two coupled manifold plates, with one including a manifold aperture, and each is affixed to the respective shell plate so that each manifold aperture aligns with the respective shell aperture. An applicator channel is formed between the manifold plates of each applicator manifold, and the applicator channel is fluidically coupled to the manifold aperture of each respective applicator manifold. Each applicator channel forms an applicator port at a leading edge of each respective applicator manifold, and each leading edge is configured to be complementary in shape to an edge of a workpiece to be coated. First and second face plates are disposed over the leading edges of the applicator manifolds.
Type: Grant
Filed: August 26, 2022
Date of Patent: April 9, 2024
Assignee: AWI Licensing LLC
Inventors: Sebastien G. Nalin, Scott L. Huntzinger, John J. Hartman, Jr., Lida Lu, Eric D. Kragness
-
Patent number: 7599897
Abstract: System and method for training a support vector machine (SVM) with process constraints. A model (primal or dual formulation) implemented with an SVM and representing a plant or process with one or more known attributes is provided. One or more process constraints that correspond to the one or more known attributes are specified, and the model trained subject to the one or more process constraints. The model includes one or more inputs and one or more outputs, as well as one or more gains, each a respective partial derivative of an output with respect to a respective input. The process constraints may include any of: one or more gain constraints, each corresponding to a respective gain; one or more Nth order gain constraints; one or more input constraints; and/or one or more output constraints. The trained model may then be used to control or manage the plant or process.
Type: Grant
Filed: May 5, 2006
Date of Patent: October 6, 2009
Assignee: Rockwell Automation Technologies, Inc.
Inventors: Eric J. Hartman, Carl A. Schweiger, Bijan Sayyarrodsari, W. Douglas Johnson
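The gain-constrained training idea can be illustrated with a minimal sketch. Here a simple linear surrogate model stands in for the SVM, and a single gain constraint (dy/dx >= 0, i.e. the model's partial derivative must be non-negative) is enforced as a soft quadratic penalty during gradient descent. The function name, the penalty formulation, and all parameter values are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def train_with_gain_constraint(x, y, lr=0.01, steps=5000, penalty=10.0):
    """Fit y ~ w*x + b by gradient descent while softly enforcing the
    gain constraint dy/dx = w >= 0 (an illustrative stand-in for the
    constrained SVM training described in the abstract)."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        err = w * x + b - y
        grad_w = 2.0 * np.mean(err * x)
        grad_b = 2.0 * np.mean(err)
        # soft gain constraint: quadratic penalty active only when w < 0
        if w < 0.0:
            grad_w += 2.0 * penalty * w
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

With data whose unconstrained best fit has a negative slope, the penalty pulls the learned gain back toward the feasible region, which is the essential effect of training subject to a gain constraint.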
-
Patent number: 6944616
Abstract: A system and method for historical database training of a support vector machine (SVM). The SVM is trained with training sets from a stream of process data. The system detects availability of new training data, and constructs a training set from the corresponding input data. Over time, many training sets are presented to the SVM. When multiple presentations are needed to effectively train the SVM, a buffer of training sets is filled and updated as new training data becomes available. Once the buffer is full, a new training set bumps the oldest training set from the buffer. The training sets are presented one or more times each time a new training set is constructed. A historical database of time-stamped data may be used to construct training sets for the SVM. The SVM may be trained retrospectively by searching the historical database and constructing training sets based on the time-stamped data.
Type: Grant
Filed: November 28, 2001
Date of Patent: September 13, 2005
Assignee: Pavilion Technologies, Inc.
Inventors: Ralph Bruce Ferguson, Eric J. Hartman, William Douglas Johnson, Eric S. Hurley
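The buffer behavior described above (once full, a new training set bumps the oldest) is a fixed-size FIFO. A minimal sketch, with class and method names chosen for illustration rather than taken from the patent:

```python
from collections import deque

class TrainingBuffer:
    """Fixed-size buffer of training sets: when the buffer is full,
    adding a new training set bumps the oldest one (FIFO)."""
    def __init__(self, capacity):
        self.sets = deque(maxlen=capacity)

    def add(self, training_set):
        # deque with maxlen drops the oldest item automatically when full
        self.sets.append(training_set)

    def all_sets(self):
        return list(self.sets)
```

For example, a capacity-3 buffer fed five training sets retains only the three most recent.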
-
Patent number: 6879971
Abstract: A method for determining an output value having a known relationship to an input value with a predicted value includes the step of first training a predictive model with at least one output for a given set of inputs that exist in a finite dataset. Data is then input to the predictive model that is within the set of given inputs. Thereafter, a prediction is made of an output from the predictive model that corresponds to the given input such that a predicted output value will be obtained which will have associated therewith the errors of the predictive model.
Type: Grant
Filed: June 5, 2001
Date of Patent: April 12, 2005
Assignee: Pavilion Technologies, Inc.
Inventors: James D. Keeler, Eric J. Hartman, Devendra B. Godbole, Steve Piche, Laura Arbila, Joshua Ellinger, R. Bruce Ferguson, II, John Krauskop, Jill L. Kempf, Steven A. O'Hara, Audrey Strauss, Jitendra W. Telang
-
Patent number: 6243696
Abstract: A method for building a model of a system includes first extracting data from a historical database (310). Once the data is extracted, a dataset is then created, which dataset involves the steps of preprocessing the data. This dataset is then utilized to build a model. The model is defined as a plurality of transforms which can be utilized to run an on-line model. This on-line model is interfaced with the historical database such that the variable names associated therewith can be downloaded to the historical database. This historical database can then be interfaced with a control system to either directly operate the plant or to provide an operator an interface to various predicted data about the plant. The building operation will create the transform list and then a configuration step is performed in order to configure the model to interface with the historical database. When the dataset was extracted, it is unknown whether the variable names are still valid.
Type: Grant
Filed: March 24, 1998
Date of Patent: June 5, 2001
Assignee: Pavilion Technologies, Inc.
Inventors: James D. Keeler, Eric J. Hartman, Devendra B. Godbole, Steve Piche, Laura Arbila, Joshua Ellinger, R. Bruce Ferguson, II, John Krauskop, Jill L. Kempf, Steven A. O'Hara, Audrey Strauss, Jitendra W. Telang
-
Patent number: 6144952
Abstract: A predictive network is disclosed for operating in a runtime mode and in a training mode. The network includes a preprocessor (34') for preprocessing input data in accordance with parameters stored in a storage device (14') for output as preprocessed data to a delay device (36'). The delay device (36') provides a predetermined amount of delay as defined by predetermined delay settings in a storage device (18). The delayed data is input to a system model (26') which is operable in a training mode or a runtime mode. In the training mode, training data is stored in a data file (10) and retrieved therefrom for preprocessing and delay and then input to the system model (26'). Model parameters are learned and then stored in the storage device (22). During the training mode, the preprocess parameters are defined and stored in a storage device (14) in a particular sequence and delay settings are determined in the storage device (18).
Type: Grant
Filed: June 11, 1999
Date of Patent: November 7, 2000
Inventors: James D. Keeler, Eric J. Hartman, Steven A. O'Hara, Jill L. Kempf, Devendra B. Godbole
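The delay device described above applies a per-variable delay to preprocessed data before it reaches the system model, so that each model input at time t is a past sample of its variable. A minimal sketch (function name and data layout are illustrative assumptions):

```python
def apply_delays(series, delays):
    """Build model input rows by applying per-variable delay settings
    to preprocessed data: row j at time t uses series[j][t - delays[j]].
    `series` is a list of equal-length value lists, one per variable."""
    n = len(series[0])
    max_d = max(delays)  # earliest usable time step
    rows = []
    for t in range(max_d, n):
        rows.append([s[t - d] for s, d in zip(series, delays)])
    return rows
```

For example, with two variables and delay settings [0, 2], each input row pairs the current value of the first variable with the value of the second from two steps earlier.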
-
Patent number: 6002839
Abstract: A predictive network is disclosed for operating in a runtime mode and in a training mode. The network includes a preprocessor (34') for preprocessing input data in accordance with parameters stored in a storage device (14') for output as preprocessed data to a delay device (36'). The delay device (36') provides a predetermined amount of delay as defined by predetermined delay settings in a storage device (18). The delayed data is input to a system model (26') which is operable in a training mode or a runtime mode. In the training mode, training data is stored in a data file (10) and retrieved therefrom for preprocessing and delay and then input to the system model (26'). Model parameters are learned and then stored in the storage device (22). During the training mode, the preprocess parameters are defined and stored in a storage device (14) in a particular sequence and delay settings are determined in the storage device (18).
Type: Grant
Filed: August 21, 1997
Date of Patent: December 14, 1999
Assignee: Pavilion Technologies
Inventors: James D. Keeler, Eric J. Hartman, Steven A. O'Hara, Jill L. Kempf, Devendra B. Godbole
-
Patent number: 5825646
Abstract: A distributed control system (14) receives on the input thereof the control inputs and then outputs control signals to a plant (10) for the operation thereof. The measured variables of the plant and the control inputs are input to a predictive model (34) that operates in conjunction with an inverse model (36) to generate predicted control inputs. The predicted control inputs are processed through a filter (46) to apply hard constraints and sensitivity modifiers, the values of which are received from a control parameter block (22). During operation, the sensitivity of output variables on various input variables is determined. This information can be displayed and then the user allowed to select which of the input variables constitute the most sensitive input variables. These can then be utilized with a control network (470) to modify the predicted values of the input variables. Additionally, a neural network (406) can be trained on only the selected input variables that are determined to be the most sensitive.
Type: Grant
Filed: June 3, 1996
Date of Patent: October 20, 1998
Assignee: Pavilion Technologies, Inc.
Inventors: James David Keeler, Eric J. Hartman, Kadir Liano
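Determining which input variables the output is most sensitive to can be sketched with a simple finite-difference estimate; the patent does not specify this particular method, so the function below is only an illustrative stand-in:

```python
def rank_by_sensitivity(model, x, eps=1e-4):
    """Estimate the sensitivity of the model output to each input
    variable by finite differences, and return the input indices
    ordered from most to least sensitive (so the most sensitive
    inputs can be displayed and selected)."""
    base = model(x)
    sens = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps  # perturb one input at a time
        sens.append((abs(model(xp) - base) / eps, i))
    return [i for _, i in sorted(sens, reverse=True)]
```

For a model like y = 0.1*x0 + 5*x1 + x2, this ranks x1 first, x2 second, and x0 last.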
-
Patent number: 5729661
Abstract: A preprocessing system for preprocessing input data to a neural network includes a training system for training a model (20) on data from a data file (10). The data is first preprocessed in a preprocessor (12) to fill in bad or missing data and merge all the time values on a common time scale. The preprocess operation utilizes preprocessing algorithms and time merging algorithms which are stored in a storage area (14). The output of the preprocessor (12) is then delayed in a delay block (16) in accordance with delay settings in storage area (18). These delayed outputs are then utilized to train the model (20), and the model parameters are then stored in a storage area (22). During run time, a distributed control system (24) outputs the data to a preprocess block (34), which preprocesses the data in accordance with the algorithms in storage area (14). These outputs are then delayed in accordance with a delay block (36) with the delay settings (18).
Type: Grant
Filed: January 25, 1993
Date of Patent: March 17, 1998
Assignee: Pavilion Technologies, Inc.
Inventors: James D. Keeler, Eric J. Hartman, Steven A. O'Hara, Jill L. Kempf, Devendra B. Godbole
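The time-merge step (placing all time values on a common time scale and filling gaps) can be sketched as follows. This is a minimal illustration using forward-fill on the union of timestamps; the patent's actual time-merging algorithms may differ, and the function name and data layout are assumptions:

```python
def time_merge(series_a, series_b):
    """Merge two time-stamped series (lists of (t, value) pairs) onto
    a common time scale (the union of their timestamps), carrying the
    last observed value forward to fill gaps."""
    times = sorted({t for t, _ in series_a} | {t for t, _ in series_b})

    def ffill(series):
        vals, out, i, last = sorted(series), [], 0, None
        for t in times:
            # advance to the latest sample at or before time t
            while i < len(vals) and vals[i][0] <= t:
                last = vals[i][1]
                i += 1
            out.append(last)  # None if no sample has been seen yet
        return out

    return times, ffill(series_a), ffill(series_b)
```

Samples that predate a series' first observation come back as None, which a fuller preprocessor would then treat as bad/missing data to be filled by some other rule.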
-
Patent number: 5613041
Abstract: A neural network system is provided that models the system in a system model (12) with the output thereof providing a predicted output. This predicted output is modified or controlled by an output control (14). Input data is processed in a data preprocess step (10) to reconcile the data for input to the system model (12). Additionally, the error resulting from the reconciliation is input to an uncertainty model to predict the uncertainty in the predicted output. This is input to a decision processor (20) which is utilized to control the output control (14). The output control (14) is controlled to either vary the predicted output or to inhibit the predicted output whenever the output of the uncertainty model (18) exceeds a predetermined decision threshold, input by a decision threshold block (22).
Type: Grant
Filed: September 20, 1995
Date of Patent: March 18, 1997
Assignee: Pavilion Technologies, Inc.
Inventors: James D. Keeler, Eric J. Hartman, Ralph B. Ferguson
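The decision-processor behavior (inhibit the prediction when estimated uncertainty exceeds a threshold, otherwise pass it through) reduces to a simple gate. A minimal sketch with illustrative names:

```python
def gated_output(prediction, uncertainty, threshold):
    """Pass the predicted output through unless the uncertainty
    model's output exceeds the decision threshold, in which case
    the predicted output is inhibited (returned as None here)."""
    if uncertainty > threshold:
        return None  # output inhibited: prediction deemed unreliable
    return prediction
```

Returning None is just one way to represent inhibition; the patent's output control can alternatively vary the predicted output rather than suppress it.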
-
Patent number: 5559690
Abstract: A plant (72) is operable to receive control inputs c(t) and provide an output y(t). The plant (72) has associated therewith state variables s(t) that are not variable. A control network (74) is provided that accurately models the plant (72). The output of the control network (74) provides a predicted output which is combined with a desired output to generate an error. This error is back propagated through an inverse control network (76), which is the inverse of the control network (74) to generate a control error signal that is input to a distributed control system (73) to vary the control inputs to the plant (72) in order to change the output y(t) to meet the desired output. The control network (74) is comprised of a first network NET 1 that is operable to store a representation of the dependency of the control variables on the state variables. The predicted result is subtracted from the actual state variable input and stored as a residual in a residual layer (102).
Type: Grant
Filed: September 16, 1994
Date of Patent: September 24, 1996
Assignee: Pavilion Technologies, Inc.
Inventors: James D. Keeler, Eric J. Hartman, Kadir Liano, Ralph B. Ferguson
-
Patent number: 5479573
Abstract: A predictive network is disclosed for operating in a runtime mode and in a training mode. The network includes a preprocessor (34') for preprocessing input data in accordance with parameters stored in a storage device (14') for output as preprocessed data to a delay device (36'). The delay device (36') provides a predetermined amount of delay as defined by predetermined delay settings in a storage device (18). The delayed data is input to a system model (26') which is operable in a training mode or a runtime mode. In the training mode, training data is stored in a data file (10) and retrieved therefrom for preprocessing and delay and then input to the system model (26'). Model parameters are learned and then stored in the storage device (22). During the training mode, the preprocess parameters are defined and stored in a storage device (14) in a particular sequence and delay settings are determined in the storage device (18).
Type: Grant
Filed: January 25, 1993
Date of Patent: December 26, 1995
Assignee: Pavilion Technologies, Inc.
Inventors: James D. Keeler, Eric J. Hartman, Steven A. O'Hara, Jill L. Kempf, Devandra B. Godbole
-
Patent number: 5353207
Abstract: A plant (72) is operable to receive control inputs c(t) and provide an output y(t). The plant (72) has associated therewith state variables s(t) that are not variable. A control network (74) is provided that accurately models the plant (72). The output of the control network (74) provides a predicted output which is combined with a desired output to generate an error. This error is back propagated through an inverse control network (76), which is the inverse of the control network (74) to generate a control error signal that is input to a distributed control system (73) to vary the control inputs to the plant (72) in order to change the output y(t) to meet the desired output. The control network (74) is comprised of a first network NET 1 that is operable to store a representation of the dependency of the control variables on the state variables. The predicted result is subtracted from the actual state variable input and stored as a residual in a residual layer (102).
Type: Grant
Filed: June 10, 1992
Date of Patent: October 4, 1994
Assignee: Pavilion Technologies, Inc.
Inventors: James D. Keeler, Eric J. Hartman, Kadir Liano, Ralph B. Ferguson
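The control loop in this abstract (back-propagating the prediction error through an inverse of the model to obtain a control error signal that adjusts the control inputs) is, in essence, gradient descent on the inputs. A minimal sketch in which the model's gradient plays the role of the inverse network; the function names and update rule are illustrative, not the patent's exact mechanism:

```python
import numpy as np

def update_controls(model, grad_model, c, desired, lr=0.1, steps=100):
    """Iteratively adjust control inputs c so the model's predicted
    output approaches the desired output: the prediction error is
    propagated back through the model (via its gradient with respect
    to the inputs) to give a control correction at each step."""
    c = np.asarray(c, dtype=float)
    for _ in range(steps):
        err = model(c) - desired       # predicted vs. desired output
        c -= lr * err * grad_model(c)  # back-propagated control update
    return c
```

With a toy linear plant model, the loop drives the predicted output to the desired value.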
-
Patent number: 5253328
Abstract: A neural network content-addressable error-correcting memory system is disclosed including a plurality of hidden and visible processing units interconnected via a linear interconnection matrix. The network is symmetric and no self-connections are present. All connections between processing units are present, except those connecting hidden units to other hidden units. Each visible unit is connected to each other visible unit and to each hidden unit. A mean field theory learning and retrieval algorithm is also provided. Bit patterns or code words are stored in the network via the learning algorithm. The retrieval algorithm retrieves error-corrected bit patterns in response to noisy or error-containing input bit patterns.
Type: Grant
Filed: November 17, 1989
Date of Patent: October 12, 1993
Assignee: Microelectronics & Computer Technology Corp.
Inventor: Eric J. Hartman
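The store-then-error-correct behavior can be illustrated with a much simpler relative of this network: a visible-units-only, Hebbian-trained associative memory with a symmetric weight matrix and no self-connections. This sketch omits the hidden units and the mean field theory algorithm that the patent actually claims; it only shows the content-addressable, error-correcting idea:

```python
import numpy as np

def store(patterns):
    """Hebbian storage of +/-1 bit patterns in a symmetric weight
    matrix with no self-connections (a simplified stand-in for the
    learning algorithm described in the abstract)."""
    p = np.asarray(patterns, dtype=float)
    W = p.T @ p
    np.fill_diagonal(W, 0.0)  # remove self-connections
    return W

def retrieve(W, probe, steps=10):
    """Iteratively correct a noisy probe pattern toward a stored one."""
    s = np.asarray(probe, dtype=float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0  # break ties deterministically
    return s
```

A stored 8-bit pattern is recovered from a probe with one flipped bit, which is the error-correcting retrieval the abstract describes.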
-
Patent number: 5113483
Abstract: A neural network includes an input layer comprising a plurality of input units (24) interconnected to a hidden layer with a plurality of hidden units (26) disposed therein through an interconnection matrix (28). Each of the hidden units (26) has a single output that is connected to output units (32) in an output layer through an interconnection matrix (30). Each of the interconnections between one of the hidden units (26) and one of the output units (32) has a weight associated therewith. Each of the hidden units (26) has an activation in the i'th dimension that extends across all the other dimensions in a non-localized manner, in accordance with an activation function (given as Equation 1 in the patent) with parameters μhi and σhi. The network learns by the back-propagation method, varying the output weights and the activation function parameters μhi and σhi.
Type: Grant
Filed: June 15, 1990
Date of Patent: May 12, 1992
Assignee: Microelectronics and Computer Technology Corporation
Inventors: James D. Keeler, Eric J. Hartman
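One plausible reading of the per-dimension, non-localized activation with parameters μhi and σhi is a hidden unit whose output is a sum of one-dimensional Gaussians, one per input dimension. The exact form is given as Equation 1 in the patent and is not reproduced in this listing, so the sketch below is an assumption, not the patent's equation:

```python
import math

def hidden_unit_activation(x, mu, sigma, w):
    """One hidden unit's activation as a sum over input dimensions i
    of weighted 1-D Gaussians with centers mu[i] and widths sigma[i]
    (a hedged reading of the non-localized activation the abstract
    refers to; the authoritative form is Equation 1 in the patent)."""
    return sum(wi * math.exp(-((xi - mi) ** 2) / (si ** 2))
               for xi, mi, si, wi in zip(x, mu, sigma, w))
```

At x = mu, every Gaussian term is at its peak, so the activation is simply the sum of the weights.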