Patents by Inventor Paolo Faraboschi

Paolo Faraboschi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10516806
    Abstract: A color image is processed into a renderable image. The color image comprises a plurality of pixels. Each pixel has colorimetry defined in a first color space. The renderable image comprises a plurality of renderable pixels defined by a device-vector in a second color space. For each pixel: a device-vector defined in the second color space is selected (301) based on the pixel's colorimetry defined in the first color space. The device-vector comprises a plurality of elements. Each element includes an identifier and an accumulated weighting. An element of the selected device-vector is reselected (303) until the accumulated weighting (a) is greater than a threshold value (t) associated with the pixel (305). The levels for each color of the second color space (or mappings) for the currently selected (307) element of the selected device-vector are determined (309) to convert the pixel into a renderable pixel.
    Type: Grant
    Filed: June 28, 2013
    Date of Patent: December 24, 2019
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Juan Manuel Garcia Reyero Viñas, Paolo Faraboschi, Jan Morovic, Peter Morovic
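The element-selection loop this abstract describes can be sketched in Python. The two-element device-vector, its ink identifiers, and the accumulated weights below are purely illustrative assumptions, not values from the patent:

```python
def convert_pixel(device_vector, threshold):
    """Pick the first element of a device-vector whose accumulated
    weighting exceeds the pixel's threshold (hypothetical sketch).

    device_vector: list of (identifier, accumulated_weighting) pairs,
    ordered by increasing accumulated weighting.
    threshold: per-pixel threshold value t.
    """
    for identifier, accumulated in device_vector:
        if accumulated > threshold:   # step (305): weighting exceeds t
            return identifier          # steps (307)/(309): use this element
    # Fall back to the last element if no weighting exceeds the threshold.
    return device_vector[-1][0]

# Illustrative device-vector mixing two device states: 60% of pixels
# map to state "C", the remaining 40% to "M" (weights accumulate to 1.0).
vector = [("C", 0.6), ("M", 1.0)]
```

Varying the per-pixel threshold over the image is what spreads the device states in the intended proportions.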
  • Publication number: 20190324857
    Abstract: In some examples, with respect to adaptive multi-level checkpointing, a transfer parameter associated with transfer of checkpoint data from a node-local storage to a parallel file system may be ascertained for the checkpoint data stored in the node-local storage. The transfer parameter may be compared to a specified transfer parameter threshold. A determination may be made, based on the comparison of the transfer parameter to the specified transfer parameter threshold, as to whether to transfer the checkpoint data from the node-local storage to the parallel file system.
    Type: Application
    Filed: April 23, 2018
    Publication date: October 24, 2019
    Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventors: Cong XU, Itir AKGUN, Paolo FARABOSCHI
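The comparison rule in this abstract can be sketched in Python. Treating the transfer parameter as an estimated transfer time is an assumption here; the patent leaves the parameter's definition open:

```python
def estimated_transfer_seconds(checkpoint_bytes, bandwidth_bytes_per_s):
    # Transfer parameter (assumed): estimated time to drain the checkpoint
    # from node-local storage to the parallel file system.
    return checkpoint_bytes / bandwidth_bytes_per_s

def should_transfer(checkpoint_bytes, bandwidth_bytes_per_s, threshold_s):
    # Transfer only when the estimated time is within the specified budget.
    return estimated_transfer_seconds(
        checkpoint_bytes, bandwidth_bytes_per_s) <= threshold_s
```
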
  • Patent number: 10452472
    Abstract: A dot-product engine (DPE) implemented on an integrated circuit as a crossbar array (CA) includes memory elements comprising a memristor and a transistor in series. A crossbar with N rows and M columns may have N×M memory elements. A vector input provides N voltage inputs to the CA, and a vector output provides M voltage outputs from the CA. An analog-to-digital converter (ADC) and/or a digital-to-analog converter (DAC) may be coupled to each input/output register. Values representing a first matrix may be stored in the CA. Voltages/currents representing a second matrix may be applied to the crossbar. Ohm's Law and Kirchhoff's Law may be used to determine values representing the dot-product as read from the crossbar. A portion of the crossbar may perform error-correcting code (ECC) operations concurrently with calculating the dot-product results. The ECC may be used only to indicate detection of errors, or for both detection and correction of results.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: October 22, 2019
    Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventors: Catherine Graves, John Paul Strachan, Dejan S. Milojicic, Paolo Faraboschi, Martin Foltin, Sergey Serebryakov
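The analog computation the abstract relies on can be mimicked numerically: Ohm's Law gives each cell's current as voltage times conductance, and Kirchhoff's Current Law sums the currents down each column. A minimal sketch, with the stored conductance matrix and input voltages as illustrative values:

```python
def crossbar_dot_product(voltages, conductances):
    """Numerically mimic the analog crossbar read-out.

    voltages:     N input voltages, one per row.
    conductances: N x M matrix of cell conductances (the stored matrix).
    Returns the M column currents, i.e. the matrix-vector dot product.
    """
    rows = len(voltages)
    cols = len(conductances[0])
    # Ohm's Law per cell (V * G), Kirchhoff's Current Law per column (sum).
    return [sum(voltages[i] * conductances[i][j] for i in range(rows))
            for j in range(cols)]
```
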
  • Patent number: 10324644
    Abstract: Examples described herein include receiving an operation pipeline for a computing system and building a graph that comprises a model for a number of potential memory side accelerator thread assignments to carry out the operation pipeline. The computing system may comprise at least two memories and a number of memory side accelerators. Each model may comprise a number of steps and at least one step out of the number of steps in each model may comprise a function performed at one memory side accelerator out of the number of memory side accelerators. Examples described herein also include determining a cost of at least one model.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: June 18, 2019
    Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventors: Kaisheng Ma, Qiong Cai, Cong Xu, Paolo Faraboschi
  • Patent number: 10324722
    Abstract: Example implementations relate to global capabilities transferrable across node boundaries. For example, in an implementation, a switch that routes traffic between a node and global memory may receive an instruction from the node. The switch may recognize that data referenced by the instruction is a global capability, and the switch may process that global capability accordingly.
    Type: Grant
    Filed: June 24, 2016
    Date of Patent: June 18, 2019
    Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventors: Dejan S Milojicic, Paolo Faraboschi, Chris I Dalton
  • Patent number: 10303622
    Abstract: Techniques for writing data to a subset of memory devices are described. In one aspect, a block of data to be written to a line in a rank of memory may be received. The rank of memory may comprise a set of memory devices. The block of data may be compressed. The compressed block of data may be written to a subset of the memory devices that comprise the line. The unwritten portions of the line may not be used to store valid data.
    Type: Grant
    Filed: March 6, 2015
    Date of Patent: May 28, 2019
    Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventors: Rajeev Balasubramonian, Naveen Muralimanohar, Gregg B. Lesartre, Paolo Faraboschi, Jishen Zhao
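A minimal Python sketch of the write path this abstract describes, using zlib as a stand-in compressor; the device capacity and rank geometry are assumptions for illustration only:

```python
import zlib

DEVICE_BYTES = 8       # assumed share of a line held by one memory device
LINE_BYTES = 64        # assumed line size

def write_line(block):
    """Compress a line-sized block and spread it over only as many
    devices as the compressed size requires; the remaining devices'
    portions of the line are left unwritten (sketch)."""
    compressed = zlib.compress(block)
    n_devices = -(-len(compressed) // DEVICE_BYTES)  # ceiling division
    # One entry per device actually written.
    return [compressed[i * DEVICE_BYTES:(i + 1) * DEVICE_BYTES]
            for i in range(n_devices)]
```

A highly compressible line touches far fewer devices than an uncompressed write would, which is the energy/bandwidth win the patent targets.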
  • Patent number: 10282302
    Abstract: Examples disclosed herein relate to programmable memory-side cache management. Some examples disclosed herein may include a programmable memory-side cache and a programmable memory-side cache controller. The programmable memory-side cache may locally store data of a system memory. The programmable memory-side cache controller may include programmable processing cores, each of the programmable processing cores configurable by cache configuration codes to manage the programmable memory-side cache for different applications.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: May 7, 2019
    Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventors: Qiong Cai, Paolo Faraboschi
  • Publication number: 20190129864
    Abstract: According to examples, a system may include a central processing unit (CPU) and a capability enforcement controller in communication with the CPU. The capability enforcement controller may be separate from the CPU and may implement capability processing functions that control capabilities. Capabilities may be defined as unforgeable tokens of authority that protect access by the CPU to a physical address at which data is stored in a memory.
    Type: Application
    Filed: October 31, 2017
    Publication date: May 2, 2019
    Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventors: Paolo FARABOSCHI, Dejan S. MILOJICIC, Kirk M. BRESNIKER
  • Patent number: 10254988
    Abstract: Techniques for memory device writes based on mapping are provided. In one aspect, a block of data to be written to a line in a rank of memory may be received. The rank of memory may comprise multiple memory devices. The block of data may be written to a number of memory devices determined by the size of the block of data. A memory device mapping for the line may be retrieved. The mapping may determine the order in which the block of data is written to the memory devices within the rank. The block of data may be written to the memory devices based on the mapping.
    Type: Grant
    Filed: March 12, 2015
    Date of Patent: April 9, 2019
    Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventors: Rajeev Balasubramonian, Gregg B. Lesartre, Robert Schreiber, Jishen Zhao, Naveen Muralimanohar, Paolo Faraboschi
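The mapping-based write order can be sketched in Python. The rotation policy below is one plausible mapping; the patent does not fix a specific scheme, so treat it as an assumption:

```python
def rotated_mapping(line_addr, num_devices):
    # Hypothetical per-line mapping: rotate the starting device by the
    # line address so short writes do not always land on device 0.
    return [(line_addr + i) % num_devices for i in range(num_devices)]

def write_with_mapping(block_chunks, mapping):
    """Place each chunk of the block on the device chosen by the line's
    mapping; returns {device_index: chunk} for the devices written."""
    return {mapping[pos]: chunk for pos, chunk in enumerate(block_chunks)}
```
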
  • Patent number: 10241911
    Abstract: Examples described herein relate to caching in a system with multiple nodes sharing a globally addressable memory. The globally addressable memory includes multiple windows that each include multiple chunks. Each node of a set of the nodes includes a cache that is associated with one of the windows. One of the nodes includes write access to one of the chunks of the window. The other nodes include read access to the chunk. The node with write access further includes a copy of the chunk in its cache and modifies multiple lines of the chunk copy. After a first line of the chunk copy is modified, a notification is sent to the other nodes that the chunk should be marked dirty. After multiple lines are modified, an invalidation message for each of the modified lines is sent to the set of the nodes.
    Type: Grant
    Filed: August 24, 2016
    Date of Patent: March 26, 2019
    Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventors: Gabriel Parmer, Paolo Faraboschi, Dejan S Milojicic
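The writer-side protocol in this abstract can be sketched as a small state machine; the message log stands in for the fabric messages, and all names are illustrative:

```python
class ChunkWriter:
    """Sketch of the write side: the first modified line triggers a single
    mark-dirty notification to the readers; each modified line later gets
    its own invalidation message."""
    def __init__(self):
        self.modified = set()
        self.messages = []        # stand-in for messages sent to readers
    def modify_line(self, line):
        if not self.modified:
            # First write to the chunk copy: tell readers it is dirty.
            self.messages.append(("mark_dirty", None))
        self.modified.add(line)
    def publish(self):
        # After multiple lines are modified, invalidate each one.
        for line in sorted(self.modified):
            self.messages.append(("invalidate", line))
```

Batching the per-line invalidations while advertising dirtiness early is what lets readers avoid stale data without a message per write.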
  • Publication number: 20190065408
    Abstract: Example implementations relate to a capability enforcement processor. In an example, a capability enforcement processor may be interposed between a memory that stores data accessible via capabilities and a system processor that executes processes. The capability enforcement processor intercepts a memory request from the system processor and enforces the memory request based on capability enforcement processor capabilities maintained in per-process capability spaces of the capability enforcement processor.
    Type: Application
    Filed: August 31, 2017
    Publication date: February 28, 2019
    Inventors: Dejan S Milojicic, Chris I Dalton, Paolo Faraboschi, Kirk M. Bresniker
  • Publication number: 20190056872
    Abstract: Techniques for reallocating a memory pending queue based on stalls are provided. In one aspect, it may be determined at a memory stop of a memory fabric that at least one class of memory access is stalled. It may also be determined at the memory stop of the memory fabric that there is at least one class of memory access that is not stalled. At least a portion of a memory pending queue may be reallocated from the class of memory access that is not stalled to the class of memory access that is stalled.
    Type: Application
    Filed: October 22, 2018
    Publication date: February 21, 2019
    Inventors: Qiong Cai, Paolo Faraboschi, Cong Xu, Ping Chi, Sai Rahul Chalamalasetti, Andrew C. Walton
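The reallocation step in this abstract can be sketched in Python; the half-quota policy and the class names are assumptions, since the patent only specifies moving queue capacity from a non-stalled class to a stalled one:

```python
def reallocate_queue(quota, stalled, not_stalled, fraction=0.5):
    """Move a fraction of the pending-queue entries from a memory-access
    class that is not stalled to one that is (sketch; `fraction` is an
    assumed policy knob)."""
    moved = int(quota[not_stalled] * fraction)
    quota[not_stalled] -= moved
    quota[stalled] += moved
    return quota
```
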
  • Publication number: 20190034239
    Abstract: In one example, a central processing unit (CPU) with dynamic thread mapping includes a set of multiple cores, each with a set of multiple threads. For each of the multiple threads, a set of registers monitors in-flight memory requests: the number of loads from and stores to at least a first memory interface and a second memory interface by the respective thread. The second memory interface has a greater latency than the first memory interface. The CPU further has logic to map and migrate each thread to respective CPU cores such that the number of cores accessing only one of the at least first and second memory interfaces is maximized.
    Type: Application
    Filed: April 27, 2016
    Publication date: January 31, 2019
    Inventors: Qiong Cai, Charles Johnson, Paolo Faraboschi
  • Publication number: 20190012222
    Abstract: In some examples, a controller includes a counter to track errors associated with a group of memory access operations, and processing logic to detect an error associated with the group of memory access operations, determine whether the detected error causes an error state change of the group of memory access operations, and cause advancing of the counter responsive to determining that the detected error causes the error state change of the group of memory access operations.
    Type: Application
    Filed: July 10, 2017
    Publication date: January 10, 2019
    Inventors: Derek Alan Sherlock, Shawn Walker, Paolo Faraboschi
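The counting rule this abstract describes, advancing only on an error-state change rather than on every repeated error, can be sketched as:

```python
class ErrorCounter:
    """Track errors for a group of memory access operations, advancing
    the counter only when a detected error changes the group's error
    state (e.g. healthy -> erroneous), not on every repeated error."""
    def __init__(self):
        self.count = 0
        self.in_error = False
    def observe(self, error_detected):
        if error_detected and not self.in_error:
            self.count += 1            # state change: advance the counter
        self.in_error = error_detected
```

This keeps a persistent fault from inflating the count on every access while still registering each new error episode.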
  • Publication number: 20180349051
    Abstract: A computing device includes a coherence controller and memory comprising a coherent memory region and a non-coherent memory region. The coherence controller may determine a coherent region of the memory, determine a non-coherent region of the memory, and, responsive to receiving a memory allocation request for a block of memory, allocate the requested block in the non-coherent memory region or the coherent memory region based on whether the memory allocation request indicates the requested block is to be coherent or non-coherent.
    Type: Application
    Filed: February 5, 2016
    Publication date: December 6, 2018
    Inventors: Alexandros Alexandros, Paolo Faraboschi
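The allocation dispatch in this abstract can be sketched as a bump allocator with two regions; the region sizes and the simple bump policy are illustrative assumptions:

```python
class CoherenceController:
    """Sketch: route an allocation request to the coherent or the
    non-coherent region based on a flag carried by the request."""
    def __init__(self, coherent_size, noncoherent_size):
        self.next_free = {"coherent": 0, "noncoherent": 0}
        self.limit = {"coherent": coherent_size,
                      "noncoherent": noncoherent_size}
    def malloc(self, size, coherent):
        region = "coherent" if coherent else "noncoherent"
        addr = self.next_free[region]
        if addr + size > self.limit[region]:
            return None                 # region exhausted
        self.next_free[region] += size  # bump allocation within the region
        return (region, addr)
```
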
  • Patent number: 10146699
    Abstract: Apertures of a first size in a first physical address space of at least one processor are mapped to respective blocks of the first size in a second address space of a storage medium. Apertures of a second size in the first physical address space are mapped to respective blocks of the second size in the second address space, the second size being different from the first size.
    Type: Grant
    Filed: April 30, 2015
    Date of Patent: December 4, 2018
    Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventors: Mark David Lillibridge, Paolo Faraboschi
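The two-size aperture mapping can be sketched in Python; the 4 KiB and 2 MiB sizes and the linear-scan translation are assumptions for illustration:

```python
SMALL, LARGE = 4096, 2 * 1024 * 1024   # assumed aperture sizes

class ApertureMap:
    """Map apertures of two sizes in the processor's physical address
    space to same-sized blocks on the storage medium (sketch)."""
    def __init__(self):
        self.map = {}                   # aperture base -> (block base, size)
    def map_aperture(self, aperture_base, block_base, size):
        self.map[aperture_base] = (block_base, size)
    def translate(self, phys_addr):
        # Find the aperture containing the address and apply its offset.
        for base, (block, size) in self.map.items():
            if base <= phys_addr < base + size:
                return block + (phys_addr - base)
        raise KeyError("address not in any aperture")
```
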
  • Publication number: 20180336034
    Abstract: In one example in accordance with the present disclosure, a compute engine block may comprise a data port connecting a processing core to a data cache, wherein the data port receives requests for accessing a memory, and a data communication pathway to enable servicing of data requests to the memory. The processing core may be configured to identify a value in a predetermined address range of a first data request and adjust the bit size of a load instruction used by the processing core when a first value is identified.
    Type: Application
    Filed: May 17, 2017
    Publication date: November 22, 2018
    Inventors: Craig Warner, Qiong Cai, Paolo Faraboschi, Gregg B Lesartre
  • Patent number: 10127282
    Abstract: A bit vector for a Bloom filter is determined by performing one or more hash function operations on a set of ternary content addressable memory (TCAM) words. A TCAM array is partitioned into a first portion to store the bit vector for the Bloom filter and a second portion to store the set of TCAM words. The TCAM array can be searched using a search word by performing the one or more hash function operations on the search word to generate a hashed search word and determining whether bits at specified positions of the hashed search word match bits at corresponding positions of the bit vector stored in the first portion of the TCAM array before searching the second portion of the TCAM array with the search word.
    Type: Grant
    Filed: April 30, 2014
    Date of Patent: November 13, 2018
    Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventors: Sheng Li, Kevin T. Lim, Dejan S. Milojicic, Paolo Faraboschi
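The pre-filtering search described in this abstract can be sketched in Python. The hash functions, bit-vector width, and list-based "second portion" are all stand-in assumptions; the patent leaves the hash choice open:

```python
def bloom_positions(word, num_bits, k=2):
    # Two simple deterministic hashes (assumption) combined in the usual
    # double-hashing style to pick k bit positions.
    h1 = sum(ord(c) for c in word)
    h2 = sum((i + 1) * ord(c) for i, c in enumerate(word))
    return {(h1 + j * h2) % num_bits for j in range(k)}

class TcamWithBloom:
    """First portion: a Bloom-filter bit vector; second portion: the
    stored words. A search consults the bit vector before scanning the
    stored words, skipping the full search on a Bloom miss."""
    def __init__(self, num_bits=64):
        self.bits = [0] * num_bits      # first portion of the TCAM array
        self.words = []                 # second portion (stored words)
        self.full_searches = 0
    def insert(self, word):
        for p in bloom_positions(word, len(self.bits)):
            self.bits[p] = 1
        self.words.append(word)
    def search(self, word):
        if any(self.bits[p] == 0
               for p in bloom_positions(word, len(self.bits))):
            return False                # Bloom miss: skip the full search
        self.full_searches += 1
        return word in self.words
```

Because most non-matching searches end at the bit vector, the energy-hungry full-array match is reserved for probable hits.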
  • Patent number: 10108351
    Abstract: Techniques for reallocating a memory pending queue based on stalls are provided. In one aspect, it may be determined at a memory stop of a memory fabric that at least one class of memory access is stalled. It may also be determined at the memory stop of the memory fabric that there is at least one class of memory access that is not stalled. At least a portion of a memory pending queue may be reallocated from the class of memory access that is not stalled to the class of memory access that is stalled.
    Type: Grant
    Filed: June 23, 2016
    Date of Patent: October 23, 2018
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Qiong Cai, Paolo Faraboschi, Cong Xu, Ping Chi, Sai Rahul Chalamalasetti, Andrew C. Walton
  • Publication number: 20180285011
    Abstract: Examples described herein include receiving an operation pipeline for a computing system and building a graph that comprises a model for a number of potential memory side accelerator thread assignments to carry out the operation pipeline. The computing system may comprise at least two memories and a number of memory side accelerators. Each model may comprise a number of steps and at least one step out of the number of steps in each model may comprise a function performed at one memory side accelerator out of the number of memory side accelerators. Examples described herein also include determining a cost of at least one model.
    Type: Application
    Filed: March 31, 2017
    Publication date: October 4, 2018
    Inventors: Kaisheng Ma, Qiong Cai, Cong Xu, Paolo Faraboschi