Patents by Inventor Vinodh Gopal

Vinodh Gopal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190391869
    Abstract: A processing device comprising compression circuitry to: determine a compression configuration to compress source data; generate a checksum of the source data in an uncompressed state; compress the source data into at least one block based on the compression configuration, wherein the at least one block comprises: a plurality of sub-blocks, wherein each of the plurality of sub-blocks has a predetermined size; a block header corresponding to the plurality of sub-blocks; and decompression circuitry coupled to the compression circuitry, wherein the decompression circuitry is to: while not outputting a decompressed data stream of the source data: generate index information corresponding to the plurality of sub-blocks; in response to generating the index information, generate a checksum of the compressed source data associated with the plurality of sub-blocks; and determine whether the checksum of the source data in the uncompressed state matches the checksum of the compressed source data.
    Type: Application
    Filed: June 20, 2018
    Publication date: December 26, 2019
    Inventors: Vinodh Gopal, James Guilford, Daniel Cutter, Kirk Yap
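To make the flow in publication 20190391869 concrete, here is a minimal software sketch: the source is compressed as fixed-size sub-blocks, a checksum of the uncompressed data is recorded, and a verification pass decompresses the sub-blocks without emitting their output while building index information and checking the checksum. The sub-block size, framing, and use of zlib/CRC-32 are illustrative assumptions, not details from the patent.

```python
import zlib

SUB_BLOCK_SIZE = 4096  # assumed "predetermined size" of each sub-block

def compress_and_verify(source: bytes):
    ref_checksum = zlib.crc32(source)            # checksum of the uncompressed data
    sub_blocks = [zlib.compress(source[i:i + SUB_BLOCK_SIZE])
                  for i in range(0, len(source), SUB_BLOCK_SIZE)]

    # Verification pass: decompress each sub-block, build index information,
    # and accumulate a checksum, but do not emit the decompressed stream.
    index, offset, check = [], 0, 0
    for blk in sub_blocks:
        index.append(offset)                     # where this sub-block starts
        check = zlib.crc32(zlib.decompress(blk), check)
        offset += len(blk)

    if check != ref_checksum:
        raise ValueError("verification failed: checksums do not match")
    return sub_blocks, index

blocks, index = compress_and_verify(b"example payload " * 1000)
print(len(blocks), index[:3])
```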
  • Patent number: 10509651
    Abstract: A processor of an aspect includes a plurality of registers, and a decode unit to decode an instruction. The instruction is to indicate at least one storage location that is to store a first integer, a second integer, and a modulus. An execution unit is coupled with the decode unit, and coupled with the plurality of registers. The execution unit, in response to the instruction, is to store a Montgomery multiplication product corresponding to the first integer, the second integer, and the modulus, in a destination storage location. Other processors, methods, systems, and instructions are disclosed.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: December 17, 2019
    Assignee: Intel Corporation
    Inventor: Vinodh Gopal
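A plain-Python reference model of what the instruction in patent 10509651 produces may help: Montgomery multiplication of two integers modulo an odd modulus N returns a·b·R⁻¹ mod N for R = 2^k. This models only the arithmetic, not the hardware datapath; variable names are illustrative.

```python
def montgomery_multiply(a: int, b: int, n: int) -> int:
    assert n % 2 == 1, "modulus must be odd so R = 2^k is invertible mod n"
    k = n.bit_length()
    r = 1 << k                                  # R = 2^k > n
    n_prime = (-pow(n, -1, r)) % r              # n' such that n * n' == -1 (mod R)

    t = a * b
    m = (t * n_prime) % r                       # choose m so t + m*n is divisible by R
    u = (t + m * n) >> k                        # exact division by R
    return u - n if u >= n else u               # result equals a*b*R^-1 mod n

# Check against the direct formula (requires a, b < n).
a, b, n = 123456789, 987654321, 1000000007
k = n.bit_length()
expected = (a * b * pow(1 << k, -1, n)) % n
assert montgomery_multiply(a, b, n) == expected
```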
  • Patent number: 10509580
    Abstract: Methods and apparatuses relating to memory compression and decompression are described, including a memory controller and methods for memory compression utilizing a hardware compression engine and a dictionary to indicate a zero value, full match, partial match, or no match. When indices for multiple sections are the same, an entry in the dictionary may be updated with the value of the section that is most recent, in the same order as in the block of data.
    Type: Grant
    Filed: April 1, 2016
    Date of Patent: December 17, 2019
    Assignee: Intel Corporation
    Inventors: Kirk S. Yap, Vinodh Gopal, James D. Guilford, Sean M. Gulley
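A simplified software model of the dictionary classification in patent 10509580 is shown below: each 32-bit word of a block is compared against a small direct-mapped dictionary and tagged as a zero value, full match, partial match (upper bits equal), or no match, and the dictionary entry is updated with the most recent word. The dictionary size, index hash, and tag encoding are assumptions made for illustration.

```python
import struct

DICT_SIZE = 16

def classify_block(block: bytes):
    dictionary = [0] * DICT_SIZE
    tags = []
    for (word,) in struct.iter_unpack("<I", block):
        idx = (word >> 10) % DICT_SIZE          # simple index hash on the upper bits
        entry = dictionary[idx]
        if word == 0:
            tags.append(("zero",))
        elif word == entry:
            tags.append(("full", idx))
        elif (word >> 10) == (entry >> 10):
            tags.append(("partial", idx, word & 0x3FF))
        else:
            tags.append(("miss", word))
        dictionary[idx] = word                  # most recent value wins
    return tags

print(classify_block(struct.pack("<4I", 0, 0xDEADBEEF, 0xDEADBE00, 0x12345678)))
```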
  • Patent number: 10503510
    Abstract: A processor includes a decode unit to receive an instruction to indicate a first source packed data operand and a second source packed data operand. The source operands each to include elements. The data elements to include information selected from messages and logical combinations of messages that is sufficient to evaluate: P1(Wj−16 XOR Wj−9 XOR (Wj−3 <<< 15)) XOR (Wj−13 <<< 7) XOR Wj−6, where P1 is a permutation function, P1(X) = X XOR (X <<< 15) XOR (X <<< 23). Wj−16, Wj−9, Wj−3, Wj−13, and Wj−6 are messages associated with a compression function of an SM3 hash function. XOR is an exclusive OR operation. <<< is a rotate operation. An execution unit coupled with the decode unit is operable, in response to the instruction, to store a result packed data in a destination storage location. The result packed data to include a Wj message to be input to a round j of the compression function.
    Type: Grant
    Filed: December 27, 2013
    Date of Patent: December 10, 2019
    Assignee: Intel Corporation
    Inventors: Gilbert M. Wolrich, Vinodh Gopal, Kirk S. Yap, Wajdi K. Feghali, Sean Gulley
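The expression quoted in the abstract of patent 10503510 is the SM3 message-expansion step. A scalar Python model of the same recurrence (the SIMD instruction evaluates it across packed elements) looks like this; 32-bit rotation is modelled with masking.

```python
MASK32 = 0xFFFFFFFF

def rotl32(x: int, r: int) -> int:
    return ((x << r) | (x >> (32 - r))) & MASK32

def p1(x: int) -> int:
    return x ^ rotl32(x, 15) ^ rotl32(x, 23)

def expand_message(w):
    """Extend the first 16 message words W0..W15 to the full W0..W67 schedule."""
    w = list(w)
    for j in range(16, 68):
        w.append(p1(w[j - 16] ^ w[j - 9] ^ rotl32(w[j - 3], 15))
                 ^ rotl32(w[j - 13], 7) ^ w[j - 6])
    return w

print(hex(expand_message(list(range(16)))[16]))
```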
  • Patent number: 10496703
    Abstract: Techniques and apparatus for discrete compression and decompression processes are described. In one embodiment, for example, an apparatus may include at least one memory and logic, at least a portion of the logic comprised in hardware coupled to the at least one memory, the logic to determine a compression configuration to compress source data, generate discrete compressed data comprising at least one high-level block comprising a header and at least one discrete block based on the compression configuration, and generate index information for accessing the at least one discrete block. Other embodiments are described and claimed.
    Type: Grant
    Filed: June 27, 2017
    Date of Patent: December 3, 2019
    Assignee: INTEL CORPORATION
    Inventors: Vinodh Gopal, James D. Guilford
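The point of the index in patent 10496703 is random access: because each discrete block is compressed independently, an index of (offset, length) pairs lets a single block be located and decompressed without touching the rest of the stream. A minimal sketch, with zlib standing in for the hardware engine and an illustrative block size:

```python
import zlib

BLOCK_SIZE = 1 << 16

def build_discrete_stream(source: bytes):
    blocks = [zlib.compress(source[i:i + BLOCK_SIZE])
              for i in range(0, len(source), BLOCK_SIZE)]
    index, offset = [], 0
    for blk in blocks:
        index.append((offset, len(blk)))        # (offset, compressed length)
        offset += len(blk)
    return b"".join(blocks), index

def read_block(stream: bytes, index, n: int) -> bytes:
    offset, length = index[n]
    return zlib.decompress(stream[offset:offset + length])

source = b"0123456789abcdef" * 20000
stream, index = build_discrete_stream(source)
assert read_block(stream, index, 2) == source[2 * BLOCK_SIZE:3 * BLOCK_SIZE]
```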
  • Patent number: 10496373
    Abstract: In one embodiment, a processor comprises a multiplier circuit to operate in an integer multiplication mode responsive to a first value of a configuration parameter; and operate in a carry-less multiplication mode responsive to a second value of the configuration parameter.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: December 3, 2019
    Assignee: Intel Corporation
    Inventors: Vikram B. Suresh, Sanu K. Mathew, Sudhir K. Satpathy, Vinodh Gopal
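A behavioural sketch of the dual-mode multiplier in patent 10496373: the same entry point performs an ordinary integer multiply or a carry-less (GF(2)[x]) multiply depending on a configuration flag. The flag name and bit widths are illustrative.

```python
def multiply(a: int, b: int, carryless: bool) -> int:
    if not carryless:
        return a * b                            # integer multiplication mode
    # Carry-less mode: XOR-accumulate shifted copies of a (no carry propagation).
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

print(hex(multiply(0b1011, 0b0111, carryless=False)))  # -> 0x4d (ordinary product 77)
print(hex(multiply(0b1011, 0b0111, carryless=True)))   # -> 0x31 (polynomial product)
```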
  • Publication number: 20190363733
    Abstract: Methods and apparatus to parallelize data decompression are disclosed. An example method includes selecting initial starting positions in a compressed data bitstream; adjusting a first one of the initial starting positions to determine a first adjusted starting position by decoding the bitstream starting at a training position in the bitstream, the decoding including traversing the bitstream from the training position as though first data located at the training position is a valid token; outputting first decoded data generated by decoding a first segment of the bitstream starting from the first adjusted starting position; and merging the first decoded data with second decoded data generated by decoding a second segment of the bitstream, the decoding of the second segment starting from a second position in the bitstream and being performed in parallel with the decoding of the first segment, and the second segment preceding the first segment in the bitstream.
    Type: Application
    Filed: May 3, 2019
    Publication date: November 28, 2019
    Inventors: Vinodh Gopal, James D. Guilford, Sudhir K. Satpathy, Sanu K. Mathew
  • Publication number: 20190332378
    Abstract: A processor includes an instruction decoder to receive a first instruction to process a secure hash algorithm 2 (SHA-2) hash algorithm, the first instruction having a first operand associated with a first storage location to store a SHA-2 state and a second operand associated with a second storage location to store a plurality of messages and round constants. The processor further includes an execution unit coupled to the instruction decoder to perform one or more iterations of the SHA-2 hash algorithm on the SHA-2 state specified by the first operand and the plurality of messages and round constants specified by the second operand, in response to the first instruction.
    Type: Application
    Filed: June 24, 2019
    Publication date: October 31, 2019
    Inventors: Kirk S. YAP, Gilbert M. WOLRICH, James D. GUILFORD, Vinodh GOPAL, Erdinc OZTURK, Sean M. GULLEY, Wajdi K. FEGHALI, Martin G. DIXON
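For reference, the per-round computation that the SHA-2 instruction in publication 20190332378 iterates looks like this in plain Python for SHA-256. The operand is modelled as a message word with its round constant already added, following the abstract's "messages and round constants"; this is a scalar sketch, not the instruction's exact operand layout.

```python
MASK32 = 0xFFFFFFFF

def rotr32(x: int, r: int) -> int:
    return ((x >> r) | (x << (32 - r))) & MASK32

def sha256_round(state, w_plus_k):
    """One SHA-256 round: state is (a, b, c, d, e, f, g, h), w_plus_k = W[t] + K[t]."""
    a, b, c, d, e, f, g, h = state
    s1 = rotr32(e, 6) ^ rotr32(e, 11) ^ rotr32(e, 25)
    ch = (e & f) ^ ((~e & MASK32) & g)
    t1 = (h + s1 + ch + w_plus_k) & MASK32
    s0 = rotr32(a, 2) ^ rotr32(a, 13) ^ rotr32(a, 22)
    maj = (a & b) ^ (a & c) ^ (b & c)
    t2 = (s0 + maj) & MASK32
    return ((t1 + t2) & MASK32, a, b, c, (d + t1) & MASK32, e, f, g)

iv = (0x6A09E667, 0xBB67AE85, 0x3C6EF372, 0xA54FF53A,
      0x510E527F, 0x9B05688C, 0x1F83D9AB, 0x5BE0CD19)
print([hex(x) for x in sha256_round(iv, 0x428A2F98)])  # W[0] = 0 plus K[0]
```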
  • Patent number: 10462110
    Abstract: In one embodiment, an apparatus includes: a device having a physically unclonable function (PUF) circuit including a plurality of PUF cells to generate a PUF sample responsive to at least one control signal; a controller coupled to the device, the controller to send the at least one control signal to the PUF circuit and to receive a plurality of PUF samples from the PUF circuit; a buffer having a plurality of entries each to store at least one of the plurality of PUF samples; and a filter to filter the plurality of PUF samples to output a filtered value, wherein the controller is to generate a unique identifier for the device based at least in part on the filtered value. Other embodiments are described and claimed.
    Type: Grant
    Filed: February 16, 2017
    Date of Patent: October 29, 2019
    Assignee: Intel Corporation
    Inventors: Simon N. Peffers, Sean M. Gulley, Vinodh Gopal, Sanu K. Mathew
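The abstract of patent 10462110 does not fix the filter, so the sketch below assumes one common choice: per-bit majority voting across repeated PUF samples to produce a stable filtered value from which the identifier can be derived.

```python
def majority_filter(samples: list[int], width: int) -> int:
    filtered = 0
    for bit in range(width):
        ones = sum((s >> bit) & 1 for s in samples)
        if ones * 2 > len(samples):             # a majority of samples read 1 here
            filtered |= 1 << bit
    return filtered

# Five noisy reads of the same 8-bit PUF response.
noisy_samples = [0b10110010, 0b10110110, 0b10100010, 0b10110010, 0b10110011]
print(bin(majority_filter(noisy_samples, 8)))   # -> 0b10110010
```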
  • Patent number: 10445261
    Abstract: An apparatus is described. The apparatus includes a main memory controller having a point-to-point link interface to couple to a point-to-point link. The point-to-point link is to transport system memory traffic between said main memory controller and a main memory. The main memory controller includes at least one of: compression logic circuitry to compress write information prior to its transmission over the link; and decompression logic circuitry to decompress read information after it is received from the link.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: October 15, 2019
    Assignee: Intel Corporation
    Inventors: Kirk S. Yap, Daniel F. Cutter, Vinodh Gopal
  • Publication number: 20190310848
    Abstract: An apparatus and method are described for performing efficient Boolean operations in a pipelined processor which, in one embodiment, does not natively support three operand instructions. For example, in one embodiment, a processor comprises: a set of registers for storing packed operands; Boolean operation logic to execute a single instruction which uses three or more source operands packed in the set of registers, the Boolean operation logic to read at least three source operands and an immediate value to perform a Boolean operation on the three source operands, wherein the Boolean operation comprises: combining a bit read from each of the three operands to form an index to the immediate value, the index identifying a bit position within the immediate value; reading the bit from the identified bit position of the immediate value; and storing the bit from the identified bit position of the immediate value in a destination register.
    Type: Application
    Filed: June 25, 2019
    Publication date: October 10, 2019
    Inventors: Vinodh GOPAL, Wajdi FEGHALI, Gilbert WOLRICH, Kirk YAP
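A bit-level model of the three-operand Boolean operation in publication 20190310848: for each bit position, the corresponding bits of the three sources form a 3-bit index into the 8-bit immediate, and the selected immediate bit becomes the result bit, so the immediate acts as an arbitrary three-input truth table. Width and operand names are illustrative.

```python
def ternary_logic(src1: int, src2: int, src3: int, imm8: int, width: int = 64) -> int:
    result = 0
    for bit in range(width):
        index = (((src1 >> bit) & 1) << 2) \
              | (((src2 >> bit) & 1) << 1) \
              | ((src3 >> bit) & 1)             # three source bits -> index 0..7
        result |= ((imm8 >> index) & 1) << bit  # look the result bit up in the immediate
    return result

# imm8 = 0x96 encodes a three-way XOR (set where an odd number of inputs are 1).
a, b, c = 0b1100, 0b1010, 0b1001
print(bin(ternary_logic(a, b, c, 0x96, width=4)))  # -> 0b1111
```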
  • Patent number: 10437739
    Abstract: Methods, apparatus and associated techniques and mechanisms for reducing latency in accelerators. The techniques and mechanisms are implemented in platform architectures supporting shared virtual memory (SVM) and includes use of SVM-enabled accelerators, along with translation look-aside buffers (TLBs). A request descriptor defining a job to be performed by an accelerator and referencing virtual addresses (VAs) and sizes of one or more buffers is enqueued via execution of a thread on a processor core. Under one approach, the descriptor includes hints comprising physical addresses or virtual address to physical address (VA-PA) translations that are obtained from one or more TLBs associated with the core using the buffer VAs. Under another approach employing TLB snooping, the buffer VAs are used as lookups and matching TLB entries ((VA-PA) translations) are used as hints. The hints are used to speculatively pre-fetch buffer data and speculatively start processing the pre-fetched buffer data on the accelerator.
    Type: Grant
    Filed: September 26, 2017
    Date of Patent: October 8, 2019
    Assignee: Intel Corporation
    Inventor: Vinodh Gopal
  • Publication number: 20190305797
    Abstract: In one embodiment, an apparatus includes: a compression circuit to compress data blocks of one or more traffic classes; and a control circuit coupled to the compression circuit, where the control circuit is to enable the compression circuit to concurrently compress data blocks of a first traffic class and not to compress data blocks of a second traffic class. Other embodiments are described and claimed.
    Type: Application
    Filed: June 19, 2019
    Publication date: October 3, 2019
    Inventors: Simon N. Peffers, Vinodh Gopal, Kirk Yap
  • Patent number: 10432393
    Abstract: A flexible AES instruction for a general purpose processor is provided that performs AES encryption or decryption using n rounds, where n includes the standard AES set of rounds {10, 12, 14}. A parameter is provided to allow the type of AES round to be selected, that is, whether it is a “last round”. In addition to standard AES, the flexible AES instruction allows an AES-like cipher with 20 rounds to be specified or a “one round” pass.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: October 1, 2019
    Assignee: INTEL CORPORATION
    Inventors: Shay Gueron, Wajdi K. Feghali, Vinodh Gopal
  • Patent number: 10416900
    Abstract: Technologies for addressing data in a memory include an apparatus that includes a memory and a controller. The memory is to store sub-blocks of data in a data table and a pointer table of locations of the sub-blocks in the data table. The controller is to manage the storage and lookup of data in the memory. Further, the controller is to store a sub-block pointer in the pointer table to a location of a sub-block in the data table and store a second pointer that references an entry where the sub-block pointer is stored in the pointer table.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: September 17, 2019
    Assignee: Intel Corporation
    Inventors: Jawad B. Khan, Vinodh Gopal, Sanjeev N. Trika
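A small data-structure sketch of the indirection in patent 10416900: sub-blocks are stored in a data table, a pointer table records where each sub-block sits in the data table, and a second pointer records which pointer-table entry refers to each data-table slot (useful, presumably, for updating that entry if a sub-block is relocated). Class and field names are illustrative, not from the patent.

```python
class SubBlockStore:
    def __init__(self):
        self.data_table = []        # sub-block payloads
        self.pointer_table = []     # for each sub-block: its location in data_table
        self.back_pointers = []     # for each data_table slot: its pointer_table entry

    def store(self, sub_block: bytes) -> int:
        data_loc = len(self.data_table)
        self.data_table.append(sub_block)
        ptr_entry = len(self.pointer_table)
        self.pointer_table.append(data_loc)     # sub-block pointer
        self.back_pointers.append(ptr_entry)    # second pointer back to the entry
        return ptr_entry

    def lookup(self, ptr_entry: int) -> bytes:
        return self.data_table[self.pointer_table[ptr_entry]]

store = SubBlockStore()
handle = store.store(b"sub-block 0")
assert store.lookup(handle) == b"sub-block 0"
```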
  • Patent number: 10404836
    Abstract: In an embodiment, a processor comprises a plurality of processing cores and a compression accelerator to compress an input stream comprising a first data block and a second data block. The compression accelerator comprises a first compression engine to compress the first data block; and a second compression engine to update state data for the second compression engine using a sub-portion of the first data block; and after an update of the state data for the second compression engine using the sub-portion of the first data block, compress a second data block using the updated state data for the second compression engine. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 26, 2016
    Date of Patent: September 3, 2019
    Assignee: Intel Corporation
    Inventors: James D. Guilford, Vinodh Gopal, Daniel F. Cutter
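zlib's preset-dictionary support gives a software analogue of the engine-priming idea in patent 10404836: the second "engine" has its state seeded with a sub-portion (here, the tail) of the first data block before it compresses the second block, so cross-block matches are still found even though the blocks are compressed separately. The 32 KiB window and use of zlib are assumptions of this sketch.

```python
import zlib

def compress_two_blocks(block1: bytes, block2: bytes):
    out1 = zlib.compress(block1)

    # Seed the second engine's state with the last 32 KiB of the first block.
    history = block1[-32768:]
    engine2 = zlib.compressobj(zdict=history)
    out2 = engine2.compress(block2) + engine2.flush()
    return out1, out2, history

data = b"some repetitive payload " * 4000
b1, b2 = data[:48000], data[48000:]
out1, out2, history = compress_two_blocks(b1, b2)

# Decompression of the second block needs the same preset dictionary.
d = zlib.decompressobj(zdict=history)
assert d.decompress(out2) + d.flush() == b2
```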
  • Publication number: 20190268017
    Abstract: Methods, apparatus, systems, and software for implementing self-checking compression. A byte stream is encoded to generate tokens and selected tokens are encoded with hidden parity information in a compressed byte stream that may be stored for later streaming or streamed to a receiver. As the compressed byte stream is received, it is decompressed, with the hidden parity information being decoded and used to detect for errors in the decompressed data, enabling errors to be detected on-the-fly rather than waiting to perform a checksum over an entire received file. In one embodiment the byte stream is encoded using a Lempel-Ziv 77 (LZ77)-based encoding process to generate a sequence of tokens including literals and references, with all or selected references encoded with hidden parity information in a compressed byte stream having a standard format such as DEFLATE or Zstandard.
    Type: Application
    Filed: May 8, 2019
    Publication date: August 29, 2019
    Inventor: Vinodh Gopal
  • Patent number: 10379854
    Abstract: Processor instructions for determining two minimum and two maximum values, and associated apparatus and methods. The instructions include various 2MIN instructions for determining the two smallest values among three or four input values and 2MAX instructions for determining the two largest values among three or four input values. The 2MIN instructions employ two operands: in some variations the first operand stores concatenated min1 and min2 values in a first register, and the second operand stores a src2 comparison value or two concatenated src2 values in a second register. Comparators implement hardware logic for determining whether the src2 value(s) is/are less than each of min1 and min2. Based on the hardware logic, the two smallest values among min1, min2, and src2 (or both src2 values) are stored as concatenated values in the first register. The 2MAX instructions are implemented in a similar manner, except the comparisons determine whether the src2 value(s) is/are greater than each of max1 and max2.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: August 13, 2019
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, James D. Guilford
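A behavioural model of the three-input 2MIN operation in patent 10379854: given the current two smallest values (min1, min2) and a new src2 comparison value, it returns the two smallest of the three. The 2MAX variants are analogous with the comparisons reversed; the ordering of the returned pair is a choice of this sketch.

```python
def two_min(min1: int, min2: int, src2: int) -> tuple[int, int]:
    if min1 > min2:                     # normalise so min1 <= min2
        min1, min2 = min2, min1
    if src2 < min1:
        return src2, min1
    if src2 < min2:
        return min1, src2
    return min1, min2

# Streaming the values 7, 3, 9, 1 keeps the two smallest seen so far.
m1, m2 = 7, 3
for value in (9, 1):
    m1, m2 = two_min(m1, m2, value)
print(m1, m2)   # -> 1 3
```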
  • Patent number: 10379853
    Abstract: A processor is described having an instruction execution pipeline having a functional unit to execute an instruction that compares vector elements against an input value. Each of the vector elements and the input value have a first respective section identifying a location within data and a second respective section having a byte sequence of the data. The functional unit has comparison circuitry to compare respective byte sequences of the input vector elements against the input value's byte sequence to identify a number of matching bytes for each comparison. The functional unit also has difference circuitry to determine respective distances between the input vector's elements' byte sequences and the input value's byte sequence within the data.
    Type: Grant
    Filed: November 8, 2016
    Date of Patent: August 13, 2019
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, James D. Guilford, Gilbert M. Wolrich
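A scalar model of what the instruction in patent 10379853 computes for each vector element: each candidate carries a location and a byte sequence, and the result is the number of leading bytes that match the input value's sequence together with the distance between the two locations in the data. Tuple layout and field order are illustrative.

```python
def match_candidates(candidates, current_pos, current_bytes):
    results = []
    for cand_pos, cand_bytes in candidates:
        matched = 0
        for x, y in zip(cand_bytes, current_bytes):
            if x != y:
                break
            matched += 1
        results.append((matched, current_pos - cand_pos))  # (match length, distance)
    return results

candidates = [(10, b"abcdxxxx"), (42, b"abcdefgh"), (90, b"zzzzzzzz")]
print(match_candidates(candidates, 120, b"abcdefxy"))
# -> [(4, 110), (6, 78), (0, 30)]
```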
  • Publication number: 20190245679
    Abstract: Modifications to Advanced Encryption Standard (AES) hardware acceleration circuitry are described to allow hardware acceleration of the key operations of any non-AES block cipher, such as SM4 and Camellia. In some embodiments the GF(2^8) inverse computation circuit in the AES S-box is used to compute X^-1 (where X is the input plaintext or ciphertext byte), and hardware support is added to compute parallel GF(2^8) matrix multiplications. The embodiments described herein have minimal hardware overhead while achieving greater speed than software implementations.
    Type: Application
    Filed: February 2, 2018
    Publication date: August 8, 2019
    Inventors: Vikram B. Suresh, Sanu K. Mathew, Sudhir K. Satpathy, Vinodh Gopal
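A reference model of the shared GF(2^8) inverse in publication 20190245679: the inverse of a nonzero byte X is X^254 in the field. The sketch below uses the AES reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11B); the surrounding affine/matrix steps that differ between ciphers are omitted.

```python
AES_POLY = 0x11B

def gf_mul(a: int, b: int) -> int:
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:                   # reduce modulo the field polynomial
            a ^= AES_POLY
        b >>= 1
    return result

def gf_inverse(x: int) -> int:
    if x == 0:
        return 0                        # 0 maps to 0, as in the AES S-box
    result = 1
    for _ in range(254):                # x^254 equals x^-1 in GF(2^8)
        result = gf_mul(result, x)
    return result

x = 0x53
inv = gf_inverse(x)
assert gf_mul(x, inv) == 1
print(hex(inv))                         # -> 0xca
```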