Patents by Inventor Simon N. Peffers

Simon N. Peffers has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11836721
    Abstract: In some examples, an apparatus uses a blockchain to agree on a time in an information exchange network. A first node includes a processor communicatively coupled to a storage device including instructions. When executed by the processor, the instructions cause the processor to verify a time estimate from each of one or more other nodes, to determine a time match of a time estimate of the first node with the time estimates from the one or more other nodes, and, if the time match is determined, to commit to the blockchain a transaction that includes a time stamp.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: December 5, 2023
    Assignee: Intel Corporation
    Inventors: Ned M. Smith, Rajesh Poornachandran, Michael Nolan, Simon N. Peffers
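    A minimal sketch of the time-agreement flow described in the abstract above, in Python. The tolerance-based "time match", the in-memory stand-in for the blockchain, and all names (TimeAgreementNode, TOLERANCE_S) are assumptions made for illustration, not details taken from the patent.

    ```python
    import hashlib
    import json
    import time


    class TimeAgreementNode:
        TOLERANCE_S = 0.5  # assumed maximum skew for two estimates to "match"

        def __init__(self):
            self.chain = []  # stand-in for the shared blockchain

        def verify_estimate(self, estimate):
            # Placeholder verification: reject obviously bogus estimates.
            return isinstance(estimate, (int, float)) and estimate > 0

        def estimates_match(self, mine, others):
            # A "time match" here means every peer estimate is within tolerance.
            return all(abs(mine - e) <= self.TOLERANCE_S for e in others)

        def try_commit(self, peer_estimates):
            verified = [e for e in peer_estimates if self.verify_estimate(e)]
            mine = time.time()  # the first node's own time estimate
            if len(verified) == len(peer_estimates) and self.estimates_match(mine, verified):
                tx = {"time_stamp": mine, "peer_estimates": verified}
                prev = self.chain[-1]["hash"] if self.chain else "0" * 64
                block = {"tx": tx, "prev": prev}
                block["hash"] = hashlib.sha256(
                    json.dumps(block, sort_keys=True).encode()).hexdigest()
                self.chain.append(block)
                return block
            return None  # no agreement, nothing committed


    node = TimeAgreementNode()
    print(node.try_commit([time.time(), time.time() + 0.1]))
    ```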
  • Patent number: 11728665
    Abstract: In some examples, a control unit is configured to adjust charge termination voltage of a rechargeable energy storage device. The control unit is adapted to charge the rechargeable energy storage device to a charge termination voltage where the rechargeable energy storage device has capacity to support peak load but comes close to a system shutdown voltage after supporting peak load. The control unit is also adapted to increase the charge termination voltage if a voltage of the rechargeable energy storage device is near a system shutdown voltage after supporting peak load.
    Type: Grant
    Filed: April 15, 2021
    Date of Patent: August 15, 2023
    Assignee: Intel Corporation
    Inventors: Naoki Matsumura, Simon N. Peffers, Steven Lloyd, Michael T. Crocker, Aaron Gorius
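    A minimal sketch of the adaptive charge-termination behaviour described above. The threshold, margin, step size, and ceiling values are assumptions chosen for illustration, not figures from the patent.

    ```python
    SYSTEM_SHUTDOWN_V = 3.0   # assumed system shutdown threshold
    GUARD_MARGIN_V = 0.1      # assumed definition of "near" the shutdown voltage
    STEP_V = 0.05             # assumed increment for the termination voltage
    MAX_TERMINATION_V = 4.4   # assumed hard ceiling for the cell chemistry


    def adjust_termination_voltage(v_termination, v_after_peak_load):
        """Raise the charge termination voltage if, after supporting a peak
        load, the storage device's voltage came close to system shutdown."""
        if v_after_peak_load <= SYSTEM_SHUTDOWN_V + GUARD_MARGIN_V:
            v_termination = min(v_termination + STEP_V, MAX_TERMINATION_V)
        return v_termination


    # Example: the pack sagged to 3.05 V after a peak load, so the controller
    # nudges the termination voltage up for the next charge cycle.
    print(adjust_termination_voltage(4.20, 3.05))
    ```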
  • Patent number: 11494320
    Abstract: Apparatus, systems and methods for implementing delayed decompression schemes. As a burst of packets comprising compressed packets and uncompressed packets is received over an interconnect link, the packets are buffered in a receive buffer without decompression. Subsequently, the packets are forwarded from the receive buffer to a consumer such as a processor core, with the compressed packets being decompressed prior to reaching the processor core. Under a first delayed decompression approach, packets are decompressed when they are read from the receive buffer in conjunction with forwarding the uncompressed packet (or uncompressed data contained therein) to the consumer. Under a second delayed decompression scheme, the packets are read from the receive buffer and forwarded to a decompressor using a first datapath width matching the width of the packets, decompressed, and then forwarded to the consumer using a second datapath width matching the width of the uncompressed data.
    Type: Grant
    Filed: September 24, 2018
    Date of Patent: November 8, 2022
    Assignee: Intel Corporation
    Inventors: Simon N. Peffers, Kirk S. Yap, Sean Gulley, Vinodh Gopal, Wajdi Feghali
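    A minimal software sketch of the first delayed-decompression approach described above: packets are buffered as received, compressed or not, and are only decompressed when read out of the buffer on their way to the consumer. The Packet type, zlib compression, and queue-based buffer are assumptions for illustration; the patent describes link-level hardware.

    ```python
    import zlib
    from collections import deque
    from dataclasses import dataclass


    @dataclass
    class Packet:
        payload: bytes
        compressed: bool


    receive_buffer = deque()


    def on_packet_received(pkt: Packet):
        # Buffer without decompressing, keeping the receive path short.
        receive_buffer.append(pkt)


    def forward_to_consumer():
        # Decompress lazily, only when a packet is read from the buffer.
        while receive_buffer:
            pkt = receive_buffer.popleft()
            yield zlib.decompress(pkt.payload) if pkt.compressed else pkt.payload


    on_packet_received(Packet(zlib.compress(b"hello world"), compressed=True))
    on_packet_received(Packet(b"raw bytes", compressed=False))
    print(list(forward_to_consumer()))
    ```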
  • Patent number: 11108406
    Abstract: In one embodiment, an apparatus includes: a compression circuit to compress data blocks of one or more traffic classes; and a control circuit coupled to the compression circuit, where the control circuit is to enable the compression circuit to concurrently compress data blocks of a first traffic class and not to compress data blocks of a second traffic class. Other embodiments are described and claimed.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: August 31, 2021
    Assignee: Intel Corporation
    Inventors: Simon N. Peffers, Vinodh Gopal, Kirk Yap
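    A minimal sketch of per-traffic-class compression control as described above: compression is enabled for one class and bypassed for another. The class names and the use of zlib are illustrative assumptions only.

    ```python
    import zlib

    # Stand-in for the control circuit's per-class enable state.
    compression_enabled = {"bulk": True, "latency_sensitive": False}


    def send_block(traffic_class: str, block: bytes) -> bytes:
        # Compress only if compression is enabled for this traffic class.
        if compression_enabled.get(traffic_class, False):
            return zlib.compress(block)
        return block


    print(len(send_block("bulk", b"a" * 256)))               # compressed, small
    print(len(send_block("latency_sensitive", b"a" * 256)))  # passed through, 256 bytes
    ```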
  • Patent number: 11100483
    Abstract: Systems and methods for exchanging digital content in an online layered hierarchical market and exchange network are disclosed. A Buyer utilizes one or more Curry functions that are relevant to content to be acquired, thereby developing a Margin Future estimate for the received content. Each e-market layer in the hierarchy adds value to the content for use with one or more other e-market layers. Value is added by executing a Margin Function including a Curry function on the content, as defined in the Margin Future. An embodiment includes data, information, knowledge, and wisdom (DIKW) e-market layers. The Margin Future estimate may be recorded with an escrow agent acting as an intermediary with Investors. Once funded, the Buyer may acquire the content from the Seller and apply the value added. Payment may be made using electronic wallets when the value-added content enters the e-market. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 29, 2017
    Date of Patent: August 24, 2021
    Assignee: Intel Corporation
    Inventors: Ned M. Smith, Rajesh Poornachandran, Michael Nolan, Simon N. Peffers
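    A small sketch of what applying a "Curry function" to content could look like in code, purely to illustrate the functional-programming sense of currying the abstract above refers to. The pricing model and every name here are hypothetical; the patent does not define these functions.

    ```python
    from functools import partial


    def margin_function(base_value, layer_multiplier, content):
        """Estimate the value added to content at one e-market layer."""
        return base_value + layer_multiplier * len(content)


    # Currying (approximated here with functools.partial) fixes the layer's
    # parameters now and supplies the content later.
    data_layer_estimate = partial(margin_function, 10.0, 0.5)
    knowledge_layer_estimate = partial(margin_function, 25.0, 1.5)

    content = b"sensor readings"
    print(data_layer_estimate(content))       # Margin Future estimate at a lower layer
    print(knowledge_layer_estimate(content))  # larger value added at a higher layer
    ```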
  • Publication number: 20210234377
    Abstract: In some examples, a control unit is configured to adjust charge termination voltage of a rechargeable energy storage device. The control unit is adapted to charge the rechargeable energy storage device to a charge termination voltage where the rechargeable energy storage device has capacity to support peak load but comes close to a system shutdown voltage after supporting peak load. The control unit is also adapted to increase the charge termination voltage if a voltage of the rechargeable energy storage device is near a system shutdown voltage after supporting peak load.
    Type: Application
    Filed: April 15, 2021
    Publication date: July 29, 2021
    Applicant: Intel Corporation
    Inventors: Naoki Matsumura, Simon N. Peffers, Steven Lloyd, Michael T. Crocker, Aaron Gorius
  • Patent number: 10985587
    Abstract: In some examples, a control unit is configured to adjust charge termination voltage of a rechargeable energy storage device. The control unit is adapted to charge the rechargeable energy storage device to a charge termination voltage where the rechargeable energy storage device has capacity to support peak load but comes close to a system shutdown voltage after supporting peak load. The control unit is also adapted to increase the charge termination voltage if a voltage of the rechargeable energy storage device is near a system shutdown voltage after supporting peak load.
    Type: Grant
    Filed: August 7, 2018
    Date of Patent: April 20, 2021
    Assignee: Intel Corporation
    Inventors: Naoki Matsumura, Simon N. Peffers, Steven Lloyd, Michael T. Crocker, Aaron Gorius
  • Patent number: 10922079
    Abstract: Data element filter logic (“hardware accelerator”) in a processor that offloads computation for an in-memory database select/extract operation from a Central Processing Unit (CPU) core in the processor is provided. The data element filter logic provides balanced performance across an entire range of widths (number of bits) of data elements in a column-oriented Database Management System.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: February 16, 2021
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, Kirk S. Yap, James Guilford, Simon N. Peffers
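    A minimal sketch of the kind of column select/extract filtering the abstract above offloads to hardware: given a packed column of fixed-width elements and a predicate, emit only the matching elements. The supported widths, little-endian packing, and predicate are illustrative assumptions.

    ```python
    def select_extract(column: bytes, element_width_bits: int, predicate):
        """Yield elements (as ints) from a packed column that satisfy the predicate."""
        assert element_width_bits in (1, 2, 4, 8, 16, 32), "assumed supported widths"
        value = int.from_bytes(column, "little")
        mask = (1 << element_width_bits) - 1
        for offset in range(0, len(column) * 8, element_width_bits):
            element = (value >> offset) & mask
            if predicate(element):
                yield element


    packed = bytes([0x21, 0x43])  # four 4-bit elements: 1, 2, 3, 4
    print(list(select_extract(packed, 4, lambda x: x >= 3)))  # -> [3, 4]
    ```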
  • Patent number: 10761877
    Abstract: Methods and apparatuses relating to accelerating blockchain transactions are described. In one embodiment, a processor includes a hardware accelerator to execute an operation of a blockchain transaction, and the hardware accelerator includes a dispatcher circuit to route the operation to a transaction processing circuit when the operation is a transaction operation and route the operation to a block processing circuit when the operation is a block operation. In another embodiment, a processor includes a hardware accelerator to execute an operation of a blockchain transaction; and a network interface controller including a dispatcher circuit to route the operation to a transaction processing circuit of the hardware accelerator when the operation is a transaction operation and route the operation to a block processing circuit of the hardware accelerator when the operation is a block operation.
    Type: Grant
    Filed: January 30, 2018
    Date of Patent: September 1, 2020
    Assignee: Intel Corporation
    Inventors: Simon N. Peffers, Sean M. Gulley
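    A minimal sketch of the dispatcher routing described above: operations tagged as transaction work go to a transaction-processing path and block work goes to a block-processing path. The operation format and the two handler stubs are assumptions for illustration, not the hardware design.

    ```python
    def process_transaction(op):
        return f"transaction circuit handled {op['name']}"


    def process_block(op):
        return f"block circuit handled {op['name']}"


    def dispatch(op):
        # Route based on whether the operation is a transaction or block operation.
        if op["kind"] == "transaction":
            return process_transaction(op)
        if op["kind"] == "block":
            return process_block(op)
        raise ValueError(f"unknown operation kind: {op['kind']}")


    print(dispatch({"kind": "transaction", "name": "verify signature"}))
    print(dispatch({"kind": "block", "name": "hash block header"}))
    ```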
  • Patent number: 10628068
    Abstract: Technologies for database acceleration include a computing device having a database accelerator. The database accelerator performs a decompress operation on one or more compressed elements of a compressed database to generate one or more decompressed elements. After decompression of the compressed elements, the database accelerator prepares the one or more decompressed elements to generate one or more prepared elements to be processed by an accelerated filter. The database accelerator then performs the accelerated filter on the one or more prepared elements to generate one or more output elements. Other embodiments are described and claimed.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: April 21, 2020
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, James D. Guilford, Kirk S. Yap, Simon N. Peffers, Daniel F. Cutter
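    A minimal sketch of the three-stage flow described above: decompress compressed database elements, prepare them for filtering, then apply the filter. The use of zlib, the comma-separated preparation step, and the predicate are all illustrative assumptions.

    ```python
    import zlib


    def decompress_stage(compressed_blob: bytes) -> bytes:
        return zlib.decompress(compressed_blob)


    def prepare_stage(raw: bytes):
        # "Preparation" here just parses the decompressed bytes into integers.
        return [int(v) for v in raw.split(b",")]


    def filter_stage(elements, predicate):
        return [e for e in elements if predicate(e)]


    blob = zlib.compress(b"5,17,42,3,99")
    prepared = prepare_stage(decompress_stage(blob))
    print(filter_stage(prepared, lambda x: x > 10))  # -> [17, 42, 99]
    ```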
  • Publication number: 20200052494
    Abstract: In some examples, a control unit is configured to adjust charge termination voltage of a rechargeable energy storage device. The control unit is adapted to charge the rechargeable energy storage device to a charge termination voltage where the rechargeable energy storage device has capacity to support peak load but comes close to a system shutdown voltage after supporting peak load. The control unit is also adapted to increase the charge termination voltage if a voltage of the rechargeable energy storage device is near a system shutdown voltage after supporting peak load.
    Type: Application
    Filed: August 7, 2018
    Publication date: February 13, 2020
    Applicant: Intel Corporation
    Inventors: Naoki Matsumura, Simon N. Peffers, Steven Lloyd, Michael T. Crocker, Aaron Gorius
  • Patent number: 10462110
    Abstract: In one embodiment, an apparatus includes: a device having a physically unclonable function (PUF) circuit including a plurality of PUF cells to generate a PUF sample responsive to at least one control signal; a controller coupled to the device, the controller to send the at least one control signal to the PUF circuit and to receive a plurality of PUF samples from the PUF circuit; a buffer having a plurality of entries each to store at least one of the plurality of PUF samples; and a filter to filter the plurality of PUF samples to output a filtered value, wherein the controller is to generate a unique identifier for the device based at least in part on the filtered value. Other embodiments are described and claimed.
    Type: Grant
    Filed: February 16, 2017
    Date of Patent: October 29, 2019
    Assignee: Intel Corporation
    Inventors: Simon N. Peffers, Sean M. Gulley, Vinodh Gopal, Sanu K. Mathew
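    A minimal sketch of the sample-and-filter idea described above: collect several noisy PUF samples, majority-vote each bit to obtain a stable filtered value, and derive a device identifier from it. The majority-vote filter, the 8-cell width, and SHA-256 derivation are illustrative assumptions.

    ```python
    import hashlib

    WIDTH = 8  # assumed number of PUF cells sampled


    def majority_filter(samples):
        """Return a value whose bits are the per-bit majority across samples."""
        filtered = 0
        for bit in range(WIDTH):
            ones = sum((s >> bit) & 1 for s in samples)
            if ones * 2 > len(samples):
                filtered |= 1 << bit
        return filtered


    def unique_identifier(samples):
        filtered = majority_filter(samples)
        return hashlib.sha256(filtered.to_bytes(WIDTH // 8, "little")).hexdigest()


    noisy_samples = [0b10110010, 0b10110011, 0b10100010]  # same cells read three times
    print(unique_identifier(noisy_samples))
    ```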
  • Publication number: 20190305797
    Abstract: In one embodiment, an apparatus includes: a compression circuit to compress data blocks of one or more traffic classes; and a control circuit coupled to the compression circuit, where the control circuit is to enable the compression circuit to concurrently compress data blocks of a first traffic class and not to compress data blocks of a second traffic class. Other embodiments are described and claimed.
    Type: Application
    Filed: June 19, 2019
    Publication date: October 3, 2019
    Inventors: Simon N. Peffers, Vinodh Gopal, Kirk Yap
  • Publication number: 20190243780
    Abstract: Methods and apparatus for scalable application-customized memory compression. Data is selectively stored in system memory in compressed formats or in uncompressed format using a plurality of compression schemes. A compression ID is used to identify the compression scheme (or no compression) to be used and is included with read and write requests submitted to a memory controller. For memory writes, the memory controller dynamically compresses data written to memory cache lines using compression algorithms (or no compression) identified by the compression ID. For memory reads, the memory controller dynamically decompresses data stored in memory cache lines in compressed formats using decompression algorithms identified by the compression ID. Page tables and TLB entries are augmented to include a compression ID field. The format of memory cache lines includes a compression metabit indicating whether the data in the cache line is compressed.
    Type: Application
    Filed: April 10, 2019
    Publication date: August 8, 2019
    Inventors: Vinodh Gopal, Simon N. Peffers
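    A minimal software sketch of the compression-ID scheme described above: a write selects a (de)compression algorithm by compression ID, and the stored line carries a metabit indicating whether it is compressed. The ID-to-codec table, the zlib/lzma choices, and the dict-based "memory" are illustrative assumptions.

    ```python
    import lzma
    import zlib

    # compression ID -> (compress, decompress); ID 0 means "no compression"
    CODECS = {1: (zlib.compress, zlib.decompress), 2: (lzma.compress, lzma.decompress)}
    memory = {}  # address -> (metabit, compression_id, stored_data)


    def write_line(addr: int, data: bytes, compression_id: int):
        if compression_id in CODECS:
            compress, _ = CODECS[compression_id]
            memory[addr] = (True, compression_id, compress(data))
        else:
            memory[addr] = (False, 0, data)


    def read_line(addr: int) -> bytes:
        metabit, compression_id, stored = memory[addr]
        if metabit:
            _, decompress = CODECS[compression_id]
            return decompress(stored)
        return stored


    write_line(0x1000, b"zero" * 16, compression_id=1)        # compressed with scheme 1
    write_line(0x2000, b"uncompressible", compression_id=0)   # stored as-is
    print(read_line(0x1000), read_line(0x2000))
    ```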
  • Patent number: 10365708
    Abstract: Methods and apparatuses related to guardband recovery using in situ characterization are disclosed. In one example, a system includes a target circuit, a voltage regulator to provide a variable voltage to, a phase-locked loop (PLL) to provide a variable clock to, and a temperature sensor to sense a temperature of the target circuit, and a control circuit, wherein the control circuit is to set up a characterization environment by setting a temperature, voltage, clock frequency, and workload of the target circuit, execute a plurality of tests on the target circuit, when the target circuit passes the plurality of tests, adjust the variable voltage to increase a likelihood of the target circuit failing the plurality of tests and repeat the plurality of tests, and when the target circuit fails the plurality of tests, adjust the variable voltage to decrease a likelihood of the target circuit failing the plurality of tests.
    Type: Grant
    Filed: December 14, 2016
    Date of Patent: July 30, 2019
    Assignee: Intel Corporation
    Inventors: Simon N. Peffers, Sean M. Gulley, Thomas L. Dmukauskas, Aaron Gorius, Vinodh Gopal
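    A minimal sketch of the test-and-adjust loop described above: while the target circuit passes its tests, the supply voltage is stepped down (toward failure, recovering guardband); on a failure it is stepped back up. The step size, voltage limits, and run_tests() stub are illustrative assumptions.

    ```python
    STEP_MV = 10
    V_MIN_MV, V_MAX_MV = 600, 1000


    def characterize(run_tests, v_start_mv=900, iterations=20):
        """Return the lowest voltage (mV) at which the target circuit still passed."""
        v_mv = v_start_mv
        lowest_passing = None
        for _ in range(iterations):
            if run_tests(v_mv):
                lowest_passing = v_mv
                v_mv = max(v_mv - STEP_MV, V_MIN_MV)   # push toward failure
            else:
                v_mv = min(v_mv + STEP_MV, V_MAX_MV)   # back away from failure
        return lowest_passing


    # Hypothetical circuit that fails below 750 mV at this temperature and workload.
    print(characterize(lambda v_mv: v_mv >= 750))  # -> 750
    ```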
  • Publication number: 20190188132
    Abstract: Various systems and methods for hardware acceleration circuitry are described. In an embodiment, circuitry is to perform 1-bit comparisons of elements of variable M-bit width aligned to N-bit width, where N is a power of 2, in a data path of P-bit width. Second and subsequent scan stages use the comparison results from the previous stage to perform 1-bit comparisons of adjacent results, so that each subsequent stage results in a full comparison of element widths double that of the previous stage. The total number of stages required to scan, or filter, M-bit elements in N-bit-wide lanes is equal to 1 + log2(N), and the total number of stages required for implementation in the circuitry is 1 + log2(P), where P is the maximum width of the data path comprising 1 to P elements.
    Type: Application
    Filed: December 18, 2017
    Publication date: June 20, 2019
    Inventors: Kirk Yap, James D. Guilford, Simon N. Peffers, Vinodh Gopal
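    A minimal sketch of the staged 1-bit comparison idea described above: the first stage compares individual bits of two N-bit lanes, and each later stage ANDs adjacent results, doubling the compared width, so equality of N-bit elements takes 1 + log2(N) stages. Python lists stand in for the hardware lanes; this illustrates the stage count, not the patented circuit.

    ```python
    import math


    def staged_equality(a_bits, b_bits):
        """Compare two equal-length bit lists using log-depth pairwise combining."""
        n = len(a_bits)
        results = [int(x == y) for x, y in zip(a_bits, b_bits)]  # stage 1: 1-bit compares
        stages = 1
        while len(results) > 1:
            # Each stage combines adjacent results, doubling the covered width.
            results = [results[i] & results[i + 1] for i in range(0, len(results), 2)]
            stages += 1
        assert stages == 1 + int(math.log2(n))
        return bool(results[0]), stages


    a = [1, 0, 1, 1, 0, 0, 1, 0]
    print(staged_equality(a, a))        # -> (True, 4) for N = 8
    print(staged_equality(a, a[::-1]))  # -> (False, 4)
    ```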
  • Publication number: 20190102837
    Abstract: Various systems and methods for exchanging digital information in an online competitive data market and exchange network are disclosed. A buyer utilizes one or more curry functions that are relevant to data to be acquired, thereby developing a Future estimate for the data. The Future estimate may be recorded as a Margin Future with an escrow agent acting as an intermediary with investors. Investors may fund the Margin Future based on assessed risk and return on investment as defined in the Margin Future. Once funded, the buyer may acquire the data from the seller and add value to the data by applying the curry functions, resulting in digital information to be traded on the online exchange. Once the Future has been realized by sales to information consumers, the market may distribute the proceeds/profits among the seller, buyer, investor, and escrow agent, according to conditions defined in the Margin Future.
    Type: Application
    Filed: September 29, 2017
    Publication date: April 4, 2019
    Inventors: Ned M. Smith, Rajesh Poornachandran, Michael Nolan, Simon N. Peffers
  • Publication number: 20190043050
    Abstract: In some examples, an apparatus uses a blockchain to agree on a time in an information exchange network. A first node includes a processor communicatively coupled to a storage device including instructions. When executed by the processor, the instructions cause the processor to verify a time estimate from each of one or more other nodes, to determine a time match of a time estimate of the first node with the time estimates from the one or more other nodes, and, if the time match is determined, to commit to the blockchain a transaction that includes a time stamp.
    Type: Application
    Filed: June 29, 2018
    Publication date: February 7, 2019
    Applicant: Intel Corporation
    Inventors: Ned M. Smith, Rajesh Poornachandran, Michael Nolan, Simon N. Peffers
  • Publication number: 20190042496
    Abstract: Apparatus, systems and methods for implementing delayed decompression schemes. As a burst of packets comprising compressed packets and uncompressed packets is received over an interconnect link, the packets are buffered in a receive buffer without decompression. Subsequently, the packets are forwarded from the receive buffer to a consumer such as a processor core, with the compressed packets being decompressed prior to reaching the processor core. Under a first delayed decompression approach, packets are decompressed when they are read from the receive buffer in conjunction with forwarding the uncompressed packet (or uncompressed data contained therein) to the consumer. Under a second delayed decompression scheme, the packets are read from the receive buffer and forwarded to a decompressor using a first datapath width matching the width of the packets, decompressed, and then forwarded to the consumer using a second datapath width matching the width of the uncompressed data.
    Type: Application
    Filed: September 24, 2018
    Publication date: February 7, 2019
    Inventors: Simon N. Peffers, Kirk S. Yap, Sean Gulley, Vinodh Gopal, Wajdi Feghali
  • Publication number: 20190034892
    Abstract: Systems and methods for exchanging digital content in an online layered hierarchical market and exchange network are disclosed. A Buyer utilizes one or more Curry functions that are relevant to content to be acquired, thereby developing a Margin Future estimate for the received content. Each e-market layer in the hierarchy adds value to the content for use with one or more other e-market layers. Value is added by executing a Margin Function including a Curry function on the content, as defined in the Margin Future. An embodiment includes data, information, knowledge, and wisdom (DIKW) e-market layers. The Margin Future estimate may be recorded with an escrow agent acting as an intermediary with Investors. Once funded, the Buyer may acquire the content from the Seller and apply the value added. Payment may be made using electronic wallets when the value-added content enters the e-market. Other embodiments are described and claimed.
    Type: Application
    Filed: December 29, 2017
    Publication date: January 31, 2019
    Inventors: Ned M. Smith, Rajesh Poornachandran, Michael Nolan, Simon N. Peffers