Patents by Inventor Martin Foltin
Martin Foltin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240111970
Abstract: In some examples, a device includes a first processing core comprising a resistive memory array to perform an analog computation, and a digital processing core comprising a digital memory programmable with different values to perform different computations responsive to respective different conditions. The device further includes a controller to selectively apply input data to the first processing core and the digital processing core.
Type: Application
Filed: December 4, 2023
Publication date: April 4, 2024
Inventors: John Paul Strachan, Dejan S. Milojicic, Martin Foltin, Sai Rahul Chalamalasetti, Amit S. Sharma
-
Publication number: 20240112029
Abstract: A crossbar array includes a number of memory elements. An analog-to-digital converter (ADC) is electronically coupled to the vector output register. A digital-to-analog converter (DAC) is electronically coupled to the vector input register. A processor is electronically coupled to the ADC and to the DAC. The processor may be configured to determine whether division of input vector data by output vector data from the crossbar array is within a threshold value, and if not within the threshold value, determine changed data values as between the output vector data and the input vector data, and write the changed data values to the memory elements of the crossbar array.
Type: Application
Filed: December 5, 2023
Publication date: April 4, 2024
Inventors: Sai Rahul Chalamalasetti, Paolo Faraboschi, Martin Foltin, Catherine Graves, Dejan S. Milojicic, John Paul Strachan, Sergey Serebryakov
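A minimal Python sketch of the verify-and-rewrite loop this abstract describes: compare the intended values against what the array actually holds via an element-wise ratio, and rewrite only the cells outside the threshold. The `NoisyCrossbar` model, method names, and 5% threshold are illustrative assumptions, not taken from the patent.

```python
import numpy as np

class NoisyCrossbar:
    """Toy stand-in for a memristor array whose writes land within
    +/-2% of the requested conductance value."""
    def __init__(self, shape, rng):
        self.g = np.zeros(shape)
        self.rng = rng

    def read(self):
        return self.g.copy()

    def write(self, mask, values):
        noise = self.rng.uniform(0.98, 1.02, size=values.shape)
        self.g[mask] = values * noise

def write_verify(crossbar, target, threshold=0.05, max_iters=10):
    """Program-and-verify loop: divide target by readback and rewrite
    only the cells whose ratio falls outside the threshold."""
    for _ in range(max_iters):
        readback = crossbar.read()
        with np.errstate(divide="ignore"):
            ratio = np.where(readback != 0, target / readback, np.inf)
        changed = np.abs(ratio - 1.0) > threshold
        if not changed.any():
            return True          # every cell verified within threshold
        crossbar.write(changed, target[changed])
    return False

rng = np.random.default_rng(0)
xbar = NoisyCrossbar((4, 4), rng)
target = rng.uniform(0.1, 1.0, size=(4, 4))
converged = write_verify(xbar, target)
```

With 2% write noise against a 5% threshold, the loop typically converges after a single corrective pass.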
-
Patent number: 11947928
Abstract: Systems and methods are provided for a multi-die dot-product engine (DPE) to provision large-scale machine learning inference applications. The multi-die DPE leverages a multi-chip architecture. For example, a multi-chip interface can include a plurality of DPE chips, where each DPE chip performs inference computations for performing deep learning operations. A hardware interface between a memory of a host computer and the plurality of DPE chips communicatively connects the plurality of DPE chips to the memory of the host computer system during an inference operation such that the deep learning operations are spanned across the plurality of DPE chips. Due to the multi-die architecture, multiple silicon devices are allowed to be used for inference, thereby enabling power-efficient inference for large-scale machine learning applications and complex deep neural networks.
Type: Grant
Filed: September 10, 2020
Date of Patent: April 2, 2024
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Craig Warner, Eun Sub Lee, Sai Rahul Chalamalasetti, Martin Foltin
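The spanning idea can be illustrated with a toy partitioned matrix-vector product: each die holds a slice of a layer's weights, computes its partial result, and the host-side interface reassembles the output. The row-wise split and function names are assumptions for illustration; the patent does not prescribe this particular partitioning.

```python
import numpy as np

def split_across_dies(weights, num_dies):
    """Row-partition a layer's weight matrix so each DPE chip holds a slice."""
    return np.array_split(weights, num_dies, axis=0)

def multi_die_matvec(die_slices, x):
    """Each die computes a partial matrix-vector product on its own slice;
    the host-side interface concatenates the partial results."""
    partials = [w_slice @ x for w_slice in die_slices]
    return np.concatenate(partials)

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 4))   # one layer's weights, assumed too large for one die
x = rng.standard_normal(4)        # input activations from host memory
dies = split_across_dies(W, num_dies=4)
y = multi_die_matvec(dies, x)     # matches the single-die result W @ x
```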
-
Patent number: 11861429
Abstract: In some examples, a device includes a first processing core comprising a resistive memory array to perform an analog computation, and a digital processing core comprising a digital memory programmable with different values to perform different computations responsive to respective different conditions. The device further includes a controller to selectively apply input data to the first processing core and the digital processing core.
Type: Grant
Filed: April 30, 2018
Date of Patent: January 2, 2024
Assignee: Hewlett Packard Enterprise Development LP
Inventors: John Paul Strachan, Dejan S. Milojicic, Martin Foltin, Sai Rahul Chalamalasetti, Amit S. Sharma
-
Publication number: 20230418792
Abstract: Systems and methods are provided for automatically constructing data lineage representations for distributed data processing pipelines. These data lineage representations (which are constructed and stored in a central repository shared by the multiple data processing sites) can be used to, among other things, clone the distributed data processing pipeline for quality assurance or debugging purposes. Examples of the presently disclosed technology are able to construct data lineage representations for distributed data processing pipelines by (1) generating a hash content value for universally identifying each data artifact of the distributed data processing pipeline across the multiple processing stages/processing sites of the distributed data processing pipeline; and (2) creating a data processing pipeline abstraction hierarchy for associating each data artifact with input and output events for given executions of given data processing stages (performed by the multiple data processing sites).
Type: Application
Filed: June 28, 2022
Publication date: December 28, 2023
Inventors: Annmary Justine KOOMTHANAM, Suparna Bhattacharya, Aalap Tripathy, Sergey Serebryakov, Martin Foltin, Paolo Faraboschi
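A hedged sketch of the two mechanisms the abstract names: content hashing for site-independent artifact identifiers, and a shared store associating artifacts with the input and output events of stage executions. The class and method names (`LineageStore`, `producer_of`) are hypothetical.

```python
import hashlib

def artifact_id(content: bytes) -> str:
    """Content-derived identifier: the same artifact hashes to the same
    ID at every processing site, with no central coordination needed."""
    return hashlib.sha256(content).hexdigest()

class LineageStore:
    """Toy central repository shared by the processing sites: records, per
    stage execution, the content hashes of its input and output artifacts."""
    def __init__(self):
        self.events = []

    def record(self, stage, inputs, outputs):
        self.events.append({
            "stage": stage,
            "inputs": [artifact_id(a) for a in inputs],
            "outputs": [artifact_id(a) for a in outputs],
        })

    def producer_of(self, content: bytes):
        """Which stage executions produced this artifact?"""
        h = artifact_id(content)
        return [e["stage"] for e in self.events if h in e["outputs"]]

store = LineageStore()
raw, cleaned, model = b"sensor readings", b"cleaned readings", b"model-v1"
store.record("ingest", inputs=[raw], outputs=[cleaned])   # e.g. recorded at site A
store.record("train", inputs=[cleaned], outputs=[model])  # e.g. recorded at site B
```

Walking `producer_of` backwards from any artifact reconstructs the lineage chain, which is what makes pipeline cloning for debugging possible.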
-
Patent number: 11853846
Abstract: A crossbar array includes a number of memory elements. An analog-to-digital converter (ADC) is electronically coupled to the vector output register. A digital-to-analog converter (DAC) is electronically coupled to the vector input register. A processor is electronically coupled to the ADC and to the DAC. The processor may be configured to determine whether division of input vector data by output vector data from the crossbar array is within a threshold value, and if not within the threshold value, determine changed data values as between the output vector data and the input vector data, and write the changed data values to the memory elements of the crossbar array.
Type: Grant
Filed: April 30, 2018
Date of Patent: December 26, 2023
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Sai Rahul Chalamalasetti, Paolo Faraboschi, Martin Foltin, Catherine Graves, Dejan S. Milojicic, John Paul Strachan, Sergey Serebryakov
-
Publication number: 20230133722
Abstract: Systems and methods are provided for creating and sharing knowledge among design houses. In particular, examples of the presently disclosed technology leverage the concepts of meta-optimizing and collaborative learning to reduce the computational burden shouldered by individual design houses using inverse design techniques to find optimal designs, in a manner which protects intellectual-property-sensitive information. Examples may share versions of a central meta-optimizer (i.e., local meta-optimizers) among design houses targeting different (but related) design tasks. A local meta-optimizer can be trained to indirectly optimize a design task by computing hyper-parameters for a design house's private optimization function. The private optimization function may use inverse design techniques to find an optimal design for a design task. This may correspond to finding a global minimum of a cost function using gradient descent techniques or more advanced global optimization techniques.
Type: Application
Filed: October 29, 2021
Publication date: May 4, 2023
Inventors: THOMAS VAN VAERENBERGH, PENG SUN, MARTIN FOLTIN, RAYMOND G. BEAUSOLEIL
-
Patent number: 11532356
Abstract: A DPE memristor crossbar array system includes a plurality of partitioned memristor crossbar arrays. Each of the plurality of partitioned memristor crossbar arrays includes a primary memristor crossbar array and a redundant memristor crossbar array. The redundant memristor crossbar array includes values that are mathematically related to values within the primary memristor crossbar array. In addition, the plurality of partitioned memristor crossbar arrays includes a block of shared analog circuits coupled to the plurality of partitioned memristor crossbar arrays. The block of shared analog circuits is to determine a dot product value of voltage values generated by at least one partitioned memristor crossbar array of the plurality of partitioned memristor crossbar arrays.
Type: Grant
Filed: April 6, 2021
Date of Patent: December 20, 2022
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Amit S. Sharma, John Paul Strachan, Catherine Graves, Suhas Kumar, Craig Warner, Martin Foltin
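One classic way to make redundant values "mathematically related" to a primary array is a checksum row whose dot-product output must equal the sum of the primary outputs, so any single drifted cell breaks the identity. This is an illustrative assumption; the patent does not specify which mathematical relation it uses.

```python
import numpy as np

def with_redundancy(primary):
    """Append a redundant row holding the column sums of the primary array;
    its dot-product output must then equal the sum of the primary outputs."""
    return np.vstack([primary, primary.sum(axis=0, keepdims=True)])

def checked_dot(partitioned, v, tol=1e-6):
    """Dot product plus consistency check, standing in for the shared
    analog circuit block in the abstract."""
    out = partitioned @ v
    primary_out, redundant_out = out[:-1], out[-1]
    ok = abs(primary_out.sum() - redundant_out) < tol
    return primary_out, ok

rng = np.random.default_rng(3)
P = rng.standard_normal((4, 5))      # primary crossbar values
v = np.arange(1.0, 6.0)              # input voltages
arr = with_redundancy(P)
y, ok = checked_dot(arr, v)          # clean array: check passes
faulty = arr.copy()
faulty[1, 2] += 0.5                  # inject a drifted memristor cell
y2, ok2 = checked_dot(faulty, v)     # check now fails
```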
-
Patent number: 11475169
Abstract: Examples described herein relate to a security system consistent with the disclosure. For instance, the security system may comprise a sensor interface bridge connecting a gateway to an input/output (I/O) card, and a Field Programmable Gate Array (FPGA) to scan data to detect an anomaly in the data while the data is in the sensor interface bridge, where a learning neural network accelerator Application-Specific Integrated Circuit (ASIC) is integrated with the FPGA, and to send data without an anomaly to the gateway.
Type: Grant
Filed: March 4, 2019
Date of Patent: October 18, 2022
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Martin Foltin, Aalap Tripathy, Harvey Edward White, Jr., John Paul Strachan
-
Publication number: 20220276627
Abstract: Systems and methods are provided for enabling coexistence of Information Technology (IT) systems and Operational Technology (OT) systems, where advanced computing functionality realized by the IT systems can be applied to legacy applications and incumbent hardware technologies resident in the OT systems. A distributed control node (DCN) implemented between the IT and OT systems may comprise a microcontroller system partitioned into two processor clusters. Microservices associated with the IT systems are provisioned to a high performance processor cluster, and legacy applications running bare metal associated with the OT systems are provisioned to a real-time processor cluster. Partitioning of the microcontroller system allows for interoperability between the microservices and the legacy applications.
Type: Application
Filed: February 26, 2021
Publication date: September 1, 2022
Inventors: Martin FOLTIN, William EDWARD WHITE, Aalap TRIPATHY, Harvey EDWARD WHITE, JR.
-
Patent number: 11385863
Abstract: Disclosed techniques provide for dynamically changing precision of a multi-stage compute process, for example, changing neural network (NN) parameters on a per-layer basis depending on properties of incoming data streams and per-layer performance of an NN, among other considerations. NNs include multiple layers that may each be calculated with a different degree of accuracy and, therefore, a different compute resource overhead (e.g., memory, processor resources, etc.). NNs are usually trained with 32-bit or 16-bit floating-point numbers. Once trained, an NN may be deployed in production. One approach to reduce compute overhead is to reduce the parameter precision of NNs to 16 or 8 bits for deployment. The conversion to an acceptable lower precision is usually determined manually before deployment, and precision levels are fixed while deployed.
Type: Grant
Filed: August 1, 2018
Date of Patent: July 12, 2022
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Sai Rahul Chalamalasetti, Paolo Faraboschi, Martin Foltin, Catherine Graves, Dejan S. Milojicic, Sergey Serebryakov, John Paul Strachan
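A small sketch of choosing per-layer precision against an error budget, as opposed to fixing it manually before deployment. The uniform quantizer and the budget-based search are illustrative assumptions, not the patented method; rerunning the selection as data properties change is what would make the precision dynamic.

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantization of a tensor to a given bit width."""
    qmax = 2 ** (bits - 1) - 1
    peak = np.max(np.abs(x))
    scale = peak / qmax if peak > 0 else 1.0
    return np.round(x / scale) * scale

def pick_layer_precision(weights, error_budget, candidates=(8, 16, 32)):
    """Choose the lowest candidate bit width whose worst-case quantization
    error stays inside a per-layer budget."""
    for bits in candidates:
        err = np.max(np.abs(quantize(weights, bits) - weights))
        if err <= error_budget:
            return bits
    return candidates[-1]

layer_weights = np.linspace(-1.0, 1.0, 101)         # stand-in for one layer
loose = pick_layer_precision(layer_weights, 1e-2)   # tolerant layer: 8 bits suffice
strict = pick_layer_precision(layer_weights, 1e-6)  # sensitive layer: needs 32 bits
```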
-
Patent number: 11322545
Abstract: Devices and methods are provided. In one aspect, a device for driving a memristor array includes a substrate including a well having a bottom layer, a first wall and a second wall. The substrate is formed of a strained layer of a first semiconductor material. A vertical JFET is formed in the well. The vertical JFET includes a vertical gate region formed in a middle portion of the well with a gate region height less than a depth of the well. A channel region is formed of an epitaxial layer of a second semiconductor wrapped around the vertical gate region. Vertical source regions are formed on both sides of a first end of the vertical gate region, and vertical drain regions are formed on both sides of a second end of the vertical gate region.
Type: Grant
Filed: April 27, 2018
Date of Patent: May 3, 2022
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Amit S. Sharma, John Paul Strachan, Martin Foltin
-
Patent number: 11294763
Abstract: A computer system includes multiple memory array components that include respective analog memory arrays which are sequenced to implement a multi-layer process. An error array data structure is obtained for at least a first memory array component, and from which a determination is made as to whether individual nodes (or cells) of the error array data structure are significant. A determination can be made as to any remedial operations that can be performed to mitigate errors of significance.
Type: Grant
Filed: August 28, 2018
Date of Patent: April 5, 2022
Assignee: Hewlett Packard Enterprise Development LP
Inventors: John Paul Strachan, Catherine Graves, Dejan S. Milojicic, Paolo Faraboschi, Martin Foltin, Sergey Serebryakov
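In outline, the significance test and a remedial operation might look like the following sketch, where a cell's error counts as significant only if it would meaningfully perturb the multi-layer output. The sensitivity weighting and the threshold `tau` are illustrative assumptions; the abstract does not define the significance criterion.

```python
import numpy as np

def significant_errors(programmed, intended, sensitivity, tau=0.01):
    """Flag cells whose programming error, weighted by how much the
    downstream layers depend on that cell, exceeds a threshold."""
    impact = np.abs(programmed - intended) * sensitivity
    return impact > tau

def remediate(programmed, intended, mask):
    """One possible remedial operation: re-program only the significant cells."""
    fixed = programmed.copy()
    fixed[mask] = intended[mask]
    return fixed

programmed  = np.array([[1.00, 0.50], [0.20, 0.80]])
intended    = np.array([[1.02, 0.50], [0.20, 0.70]])
sensitivity = np.array([[1.0, 1.0], [1.0, 0.05]])  # downstream weight per cell
mask = significant_errors(programmed, intended, sensitivity)
fixed = remediate(programmed, intended, mask)
```

Note that the larger raw error (0.10 at cell [1, 1]) is left alone because its downstream sensitivity is low, while the smaller error at cell [0, 0] is remediated.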
-
Publication number: 20220092393
Abstract: Systems and methods are provided to improve traditional chip processing. Using crossbar computations, the convolution layer can be flattened into vectors, and the vectors can be grouped into a matrix where each row or column is a flattened filter. Each submatrix of the input corresponding to a position of a convolution window is also flattened into a vector. The convolution is computed as the dot product of each input vector and the filter matrix. Using intra-crossbar computations, the unused space of the crossbars is used to store replicas of the filter matrices, and the unused space in XIN is used to store more elements of the input. In inter-crossbar computations, the unused crossbars are used to store replicas of the filter matrices, and the unused XINs are used to store more elements of the input. Then, the method performs multiple convolution iterations in a single step.
Type: Application
Filed: September 21, 2020
Publication date: March 24, 2022
Inventors: GLAUCIMAR DA SILVA AGUIAR, FRANCISCO PLÍNIO OLIVEIRA SILVEIRA, EUN SUB LEE, RODRIGO JOSE DA ROSA ANTUNES, JOAQUIM GOMES DA COSTA EULALIO DE SOUZA, MARTIN FOLTIN, JEFFERSON RODRIGO ALVES CAVALCANTE, LUCAS LEITE, ARTHUR CARVALHO WALRAVEN DA CUNHA, MONYCKY VASCONCELOS FRAZAO, ALEX FERREIRA RAMIRES TRAJANO
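The flattening the abstract describes is the familiar im2col transformation: each convolution-window position becomes a flattened input vector, each filter a flattened column, and the whole convolution collapses to one matrix product of the kind a crossbar evaluates in a single analog step. A minimal sketch (function names are illustrative; the replica-storage optimizations are not shown):

```python
import numpy as np

def im2col(image, k):
    """Flatten each k x k window of the input into one row vector."""
    h, w = image.shape
    return np.array([image[i:i + k, j:j + k].ravel()
                     for i in range(h - k + 1)
                     for j in range(w - k + 1)])

def conv_as_dot(image, filters):
    """Convolution as a single matrix product: flattened input windows
    times a matrix with one flattened filter per column."""
    k = filters.shape[-1]
    filter_matrix = filters.reshape(filters.shape[0], -1).T
    return im2col(image, k) @ filter_matrix

rng = np.random.default_rng(2)
img = rng.standard_normal((5, 5))
filt = rng.standard_normal((2, 3, 3))   # two 3x3 filters
out = conv_as_dot(img, filt)            # shape (9, 2): 9 window positions x 2 filters
```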
-
Publication number: 20220075597
Abstract: Systems and methods are provided for a multi-die dot-product engine (DPE) to provision large-scale machine learning inference applications. The multi-die DPE leverages a multi-chip architecture. For example, a multi-chip interface can include a plurality of DPE chips, where each DPE chip performs inference computations for performing deep learning operations. A hardware interface between a memory of a host computer and the plurality of DPE chips communicatively connects the plurality of DPE chips to the memory of the host computer system during an inference operation such that the deep learning operations are spanned across the plurality of DPE chips. Due to the multi-die architecture, multiple silicon devices are allowed to be used for inference, thereby enabling power-efficient inference for large-scale machine learning applications and complex deep neural networks.
Type: Application
Filed: September 10, 2020
Publication date: March 10, 2022
Inventors: Craig Warner, Eun Sub Lee, Sai Rahul Chalamalasetti, Martin Foltin
-
Publication number: 20210390070
Abstract: A universal industrial I/O interface bridge is provided. The universal industrial I/O interface bridge may be placed between a host and I/O interface cards to translate and manage electronic communications from these and other sources. Embodiments of the application may include (1) an improved hardware module, (2) an I/O discovery process to dynamically reprogram the universal industrial I/O interface bridge depending on the attached I/O card, (3) an abstraction process to illustrate the universal industrial I/O interface bridge and the physical I/O interfaces, (4) an alert plane within the universal industrial I/O interface bridge to respond to I/O alert pins, and (5) a secure distribution process for a firmware update of the universal industrial I/O interface bridge.
Type: Application
Filed: June 16, 2020
Publication date: December 16, 2021
Inventors: Harvey Edward White, Jr., Aalap Tripathy, Martin Foltin, William Edward White
-
Publication number: 20210240945
Abstract: In some examples, a device includes a first processing core comprising a resistive memory array to perform an analog computation, and a digital processing core comprising a digital memory programmable with different values to perform different computations responsive to respective different conditions. The device further includes a controller to selectively apply input data to the first processing core and the digital processing core.
Type: Application
Filed: April 30, 2018
Publication date: August 5, 2021
Applicant: Hewlett Packard Enterprise Development LP
Inventors: John Paul STRACHAN, Dejan S. MILOJICIC, Martin FOLTIN, Sai Rahul CHALAMALASETTI, Amit S. SHARMA
-
Publication number: 20210241068
Abstract: A convolutional neural network system includes a first part of the convolutional neural network comprising an initial processor configured to process an input data set and store a weight factor set in the first part of the convolutional neural network; and a second part of the convolutional neural network comprising a main computing system configured to process an export data set provided from the first part of the convolutional neural network.
Type: Application
Filed: April 30, 2018
Publication date: August 5, 2021
Applicant: Hewlett Packard Enterprise Development LP
Inventors: Martin FOLTIN, John Paul STRACHAN, Sergey SEREBRYAKOV
-
Publication number: 20210225440
Abstract: A DPE memristor crossbar array system includes a plurality of partitioned memristor crossbar arrays. Each of the plurality of partitioned memristor crossbar arrays includes a primary memristor crossbar array and a redundant memristor crossbar array. The redundant memristor crossbar array includes values that are mathematically related to values within the primary memristor crossbar array. In addition, the plurality of partitioned memristor crossbar arrays includes a block of shared analog circuits coupled to the plurality of partitioned memristor crossbar arrays. The block of shared analog circuits is to determine a dot product value of voltage values generated by at least one partitioned memristor crossbar array of the plurality of partitioned memristor crossbar arrays.
Type: Application
Filed: April 6, 2021
Publication date: July 22, 2021
Inventors: Amit S. Sharma, John Paul Strachan, Catherine Graves, Suhas Kumar, Craig Warner, Martin Foltin
-
Publication number: 20210201136
Abstract: A crossbar array includes a number of memory elements. An analog-to-digital converter (ADC) is electronically coupled to the vector output register. A digital-to-analog converter (DAC) is electronically coupled to the vector input register. A processor is electronically coupled to the ADC and to the DAC. The processor may be configured to determine whether division of input vector data by output vector data from the crossbar array is within a threshold value, and if not within the threshold value, determine changed data values as between the output vector data and the input vector data, and write the changed data values to the memory elements of the crossbar array.
Type: Application
Filed: April 30, 2018
Publication date: July 1, 2021
Inventors: Sai Rahul Chalamalasetti, Paolo Faraboschi, Martin Foltin, Catherine Graves, Dejan S. Milojicic, John Paul Strachan, Sergey Serebryakov