Patents by Inventor James D. Guilford

James D. Guilford has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180004744
    Abstract: In an example, there is disclosed an apparatus, comprising: a data store comprising a hash table having for at least some rows a hash entry indexed by a hash value, and comprising a hash chain of one or more pointers to a history buffer, and a spill counter; and one or more logic elements, including at least one hardware logic element, comprising a data compressor to: inspect a string0 comprising n bytes at position p in a data file; get the spill counter from a hash entry corresponding to string0; inspect a string1 comprising n bytes at p+k, wherein k is a positive integer; get the spill counter from a hash entry corresponding to string1; determine that the spill counter for string1 is less than the spill counter for string0; and search a chain1 (the hash chain of a hash entry corresponding to string1) for a matching string of size at least n+k with an offset of -k.
    Type: Application
    Filed: July 1, 2016
    Publication date: January 4, 2018
    Applicant: Intel Corporation
    Inventors: James D. Guilford, Vinodh Gopal, Daniel Cutter
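
    A minimal C sketch of the spill-counter heuristic in the abstract above: the chain whose bucket has spilled less is searched, and any match found there is re-based by -k. The table layout, chain depth, and hash function are assumptions for illustration, not taken from the filing.

      /* Spill-counter chain selection (illustrative layout, not the claimed one). */
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      #define TABLE_BITS 12
      #define TABLE_SIZE (1u << TABLE_BITS)

      struct hash_entry {
          uint32_t spill;      /* how often this bucket overflowed its chain */
          uint32_t chain[8];   /* positions in the history buffer */
          uint32_t chain_len;
      };

      static uint32_t hash4(const uint8_t *p)          /* hash of a 4-byte string */
      {
          uint32_t v;
          memcpy(&v, p, 4);
          return (v * 2654435761u) >> (32 - TABLE_BITS);
      }

      /* If the bucket for the string at p+k has spilled less than the bucket for
       * the string at p, search that chain instead; matches found there must be
       * re-based by -k to describe the string starting at p. */
      static const struct hash_entry *
      pick_chain(const struct hash_entry *tab, const uint8_t *buf,
                 size_t p, size_t k, long *offset_adjust)
      {
          const struct hash_entry *e0 = &tab[hash4(buf + p)];
          const struct hash_entry *e1 = &tab[hash4(buf + p + k)];

          if (e1->spill < e0->spill) {     /* chain1 is "cleaner": prefer it */
              *offset_adjust = -(long)k;
              return e1;
          }
          *offset_adjust = 0;
          return e0;
      }

      int main(void)
      {
          static struct hash_entry table[TABLE_SIZE];
          const uint8_t buf[] = "abcdabcdabcdabcd";
          long adj;
          const struct hash_entry *c = pick_chain(table, buf, 0, 2, &adj);
          printf("chosen chain has %u entries, offset adjust %ld\n",
                 (unsigned)c->chain_len, adj);
          return 0;
      }
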
  • Patent number: 9859918
    Abstract: Technologies for performing speculative decompression include a managed node to decode a variable size code at a present position in compressed data with a deterministic decoder and concurrently perform speculative decodes over a range of subsequent positions in the compressed data, determine the position of the next code, determine whether the position of the next code is within the range, and output, in response to a determination that the position of the next code is within the range, a symbol associated with the deterministically decoded code and another symbol associated with a speculatively decoded code at the position of the next code.
    Type: Grant
    Filed: March 30, 2017
    Date of Patent: January 2, 2018
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, James D. Guilford, Kirk S. Yap
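
    A toy C sketch of the windowed speculation in the abstract above. The code format (the low two bits of the first byte give a code length of one to four bytes) is invented purely for the sketch; the filing concerns variable-size codes decoded in hardware.

      /* Decode one code deterministically while also decoding at every position
       * in a small look-ahead window; if the real "next" position lands in the
       * window, its speculative result is emitted in the same step. */
      #include <stdint.h>
      #include <stdio.h>

      #define WINDOW 4

      struct decoded { uint8_t symbol; uint8_t len; };

      static struct decoded decode_at(const uint8_t *in, size_t pos)
      {
          struct decoded d;
          d.len    = (uint8_t)((in[pos] & 3) + 1);   /* made-up code format */
          d.symbol = (uint8_t)(in[pos] >> 2);
          return d;
      }

      static void decompress(const uint8_t *in, size_t n)
      {
          size_t pos = 0;
          while (pos < n) {
              struct decoded det = decode_at(in, pos);        /* deterministic */
              struct decoded spec[WINDOW];
              for (size_t i = 0; i < WINDOW && pos + 1 + i < n; i++)
                  spec[i] = decode_at(in, pos + 1 + i);       /* speculative */

              size_t next = pos + det.len;
              printf("symbol %u\n", det.symbol);
              if (next < n && next - pos - 1 < WINDOW) {      /* hit the window */
                  printf("symbol %u (speculative hit)\n",
                         spec[next - pos - 1].symbol);
                  next += spec[next - pos - 1].len;
              }
              pos = next;
          }
      }

      int main(void)
      {
          const uint8_t stream[] = { 0x05, 0x0a, 0x13, 0x20, 0x09 };
          decompress(stream, sizeof stream);
          return 0;
      }
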
  • Patent number: 9853660
    Abstract: Techniques and apparatus for parallel data compression are described. An apparatus to provide parallel data compression may include at least one memory and logic for a compression component, at least a portion of the logic comprised in hardware coupled to the at least one memory, the logic to provide at least one data input sequence to a plurality of compression components, determine compression information for the plurality of compression components, and perform a compression process on the at least one data input sequence via the plurality of compression components to generate at least one data output sequence, the plurality of compression components to perform the compression process in parallel based on the compression information.
    Type: Grant
    Filed: March 23, 2017
    Date of Patent: December 26, 2017
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, Kirk S. Yap, Daniel F. Cutter, James D. Guilford, Wajdi K. Feghali
  • Publication number: 20170351519
    Abstract: Method and apparatus for performing a shift and XOR operation. In one embodiment, an apparatus includes execution resources to execute a first instruction. In response to the first instruction, said execution resources perform a shift and XOR on at least one value.
    Type: Application
    Filed: August 25, 2017
    Publication date: December 7, 2017
    Applicant: Intel Corporation
    Inventors: Vinodh Gopal, James D. Guilford, Erdinc Ozturk, Wajdi K. Feghali, Gilbert M. Wolrich, Martin G. Dixon
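
    The operation in the abstract above fuses a shift with an XOR. A scalar C stand-in follows (the real subject is a processor instruction, likely operating on wide registers); the xorshift-style usage in main is only one place such a fused step shows up.

      #include <stdint.h>
      #include <stdio.h>

      /* One fused shift-and-XOR step on a scalar value. */
      static inline uint64_t shift_xor(uint64_t value, unsigned shift, uint64_t other)
      {
          return (value << shift) ^ other;
      }

      int main(void)
      {
          /* e.g. a step of an xorshift-style mixing sequence */
          uint64_t x = 0x123456789abcdef0ull;
          x = shift_xor(x, 13, x);   /* x ^= x << 13 */
          x ^= x >> 7;
          x = shift_xor(x, 17, x);   /* x ^= x << 17 */
          printf("%016llx\n", (unsigned long long)x);
          return 0;
      }
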
  • Patent number: 9825647
    Abstract: In one embodiment, an apparatus comprises a decompression engine to perform a non-speculative decode operation on a first portion of a first compressed payload comprising a first plurality of codes; and perform a speculative decode operation on a second portion of the first compressed payload, wherein the non-speculative decode operation and the speculative decode operation share at least one decode path and the speculative decode operation is to utilize bandwidth of the at least one decode path that is not used by the non-speculative decode operation.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: November 21, 2017
    Assignee: Intel Corporation
    Inventors: Sudhir K. Satpathy, James D. Guilford, Vikram B. Suresh, Sanu K. Mathew, Vinodh Gopal
  • Patent number: 9825648
    Abstract: In one embodiment, an apparatus comprises a first compression engine to receive a first compressed data block from a second compression engine that is to generate the first compressed data block by compressing a first plurality of repeated instances of data that each have a length greater than or equal to a first length. The first compression engine is further to compress a second plurality of repeated instances of data of the first compressed data block that each have a length greater than or equal to a second length, the second length being shorter than the first length, wherein each compressed repeated instance of the first and second pluralities of repeated instances comprises a location and length of a data instance that is repeated. The apparatus further comprises a memory buffer to store the compressed first and second plurality of repeated instances of data.
    Type: Grant
    Filed: September 27, 2016
    Date of Patent: November 21, 2017
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, James D. Guilford, Daniel F. Cutter
  • Patent number: 9787332
    Abstract: A compression engine may be designed for more efficient error checking of a compressed stream, to include adaptation of a heterogeneous design that includes interleaved hardware and software stages of compression and decompression. An output of a string matcher may be reversed to generate a bit stream, which is then compared with an input stream to the compression engine as a first error check. A final compressed output of the compression engine may be partially decompressed to reverse entropy code encoding of an entropy code encoder. The partially decompressed output may be compared with an output of an entropy code generator to perform a second error check. Finding an error at the first error check greatly reduces the latency of generating a fault or exception, as does performing computing-intensive aspects of the compression and decompression with software instead of specialized hardware.
    Type: Grant
    Filed: September 15, 2015
    Date of Patent: October 10, 2017
    Assignee: Intel Corporation
    Inventors: James D. Guilford, Vinodh Gopal, Laurent Coquerel
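
    A C sketch of the first error check in the abstract above: the string-matcher output is expanded again and compared byte for byte with the original input before further encoding. The token layout and buffer sizes are assumptions for illustration.

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      struct token {               /* hypothetical matcher output */
          int is_copy;             /* 0 = literal byte, 1 = back-reference */
          uint8_t literal;
          uint16_t offset;         /* distance back into already-expanded data */
          uint16_t length;
      };

      /* Expand the tokens and compare with the input; returns 0 on success. */
      static int verify_match_output(const struct token *t, size_t ntok,
                                     const uint8_t *input, size_t in_len)
      {
          uint8_t out[1024];
          size_t o = 0;

          for (size_t i = 0; i < ntok; i++) {
              if (!t[i].is_copy) {
                  if (o + 1 > sizeof out)
                      return -1;
                  out[o++] = t[i].literal;
              } else {
                  if (o + t[i].length > sizeof out || t[i].offset > o)
                      return -1;                       /* malformed reference */
                  for (uint16_t j = 0; j < t[i].length; j++, o++)
                      out[o] = out[o - t[i].offset];
              }
          }
          return (o == in_len && memcmp(out, input, in_len) == 0) ? 0 : -1;
      }

      int main(void)
      {
          const uint8_t input[] = "abcabcabc";
          const struct token toks[] = {
              { 0, 'a', 0, 0 }, { 0, 'b', 0, 0 }, { 0, 'c', 0, 0 },
              { 1,  0 , 3, 6 },                  /* copy 6 bytes from 3 back */
          };
          printf("error check: %s\n",
                 verify_match_output(toks, 4, input, 9) ? "FAIL" : "ok");
          return 0;
      }
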
  • Publication number: 20170285960
    Abstract: Methods and apparatuses relating to memory compression and decompression are described. In one embodiment, a hardware compression engine is to determine when each section of a plurality of sections of a block of data is a zero value, a full match or a partial match to an entry in a dictionary, or a no match to any entry in the dictionary, encode a tag for each section to indicate the one of the zero value, the full match, the partial match, and the no match, encode a literal when the section is the no match, an index to the entry in the dictionary when the section is the full match, and an index to the entry in the dictionary and non-matching bits when the section is the partial match, and update an entry in the dictionary with a value of a section when the section is the no match.
    Type: Application
    Filed: April 1, 2016
    Publication date: October 5, 2017
    Inventors: Kirk S. Yap, Vinodh Gopal, James D. Guilford, Sean M. Gulley
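
    A C sketch of the per-section classification in the abstract above, in the spirit of frequent-pattern compression. The 2-bit tag values, dictionary size, lookup scheme, and the rule that a partial match shares the upper 22 bits are assumptions, not the claimed encoding.

      #include <stdint.h>
      #include <stdio.h>

      #define DICT_SIZE 16
      enum tag { TAG_ZERO = 0, TAG_FULL = 1, TAG_PARTIAL = 2, TAG_MISS = 3 };

      static uint32_t dict[DICT_SIZE];

      static enum tag classify(uint32_t word, unsigned *index)
      {
          unsigned slot = (word >> 10) % DICT_SIZE;   /* simple direct-mapped lookup */
          *index = slot;

          if (word == 0)
              return TAG_ZERO;                        /* emit only the tag */
          if (dict[slot] == word)
              return TAG_FULL;                        /* emit the index */
          if ((dict[slot] >> 10) == (word >> 10))
              return TAG_PARTIAL;                     /* emit index + low 10 bits */
          dict[slot] = word;                          /* miss: install and emit literal */
          return TAG_MISS;
      }

      int main(void)
      {
          const uint32_t data[] = { 0, 0xdeadbe00, 0xdeadbe00, 0xdeadbeef, 0x12345678 };
          for (size_t i = 0; i < 5; i++) {
              unsigned idx;
              printf("word %zu -> tag %d (dict slot %u)\n",
                     i, classify(data[i], &idx), idx);
          }
          return 0;
      }
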
  • Patent number: 9772845
    Abstract: A processor includes a plurality of registers, an instruction decoder to receive an instruction to process a KECCAK state cube of data representing a KECCAK state of a KECCAK hash algorithm, to partition the KECCAK state cube into a plurality of subcubes, and to store the subcubes in the plurality of registers, respectively, and an execution unit coupled to the instruction decoder to perform the KECCAK hash algorithm on the plurality of subcubes respectively stored in the plurality of registers in a vector manner.
    Type: Grant
    Filed: December 13, 2011
    Date of Patent: September 26, 2017
    Assignee: Intel Corporation
    Inventors: Kirk S. Yap, Gilbert M. Wolrich, James D. Guilford, Vinodh Gopal, Erdinc Ozturk, Sean M. Gulley, Wajdi K. Feghali, Martin G. Dixon
  • Publication number: 20170272096
    Abstract: Detailed herein are embodiments of systems, methods, and apparatuses for decompression using hardware and software. For example, in embodiment a hardware apparatus comprises an input buffer to store incoming data from a compressed stream, a selector to select at least one byte stored in the input buffer, a decoder to decode the selected at least one byte and determine if the decoded at least one byte is a literal or a symbol, an overlap condition, a size of a record from the decoded stream, a length value of the data to be retrieved from the decoded stream, and an offset value for the decoded data, and a token format converter to convert the decoded data and data from source and destination offset base registers into a fixed-length token.
    Type: Application
    Filed: April 4, 2017
    Publication date: September 21, 2017
    Inventors: Vinodh Gopal, James D. Guilford, Kirk S. Yap, Sean M. Gulley, Gilbert M. Wolrich
  • Patent number: 9768802
    Abstract: Example data compression methods disclosed herein include determining a first hash chain index corresponding to a first position in an input data buffer based on a first group of bytes accessed from the input data buffer beginning at a first look-ahead offset from the first position. If a first hash chain (indexed by the first hash chain index), does not satisfy a quality condition, a second hash chain index corresponding to the first position in the input data buffer based on a second group of bytes accessed from the input data buffer beginning at a second look-ahead offset from the first position is determined. The input data buffer is searched at respective adjusted buffer positions to find a second string of data bytes matching a first string of data bytes and information related to the second string of data bytes is provided to an encoder to output compressed data.
    Type: Grant
    Filed: January 13, 2017
    Date of Patent: September 19, 2017
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, James D. Guilford, Gilbert M. Wolrich, Daniel F. Cutter
  • Patent number: 9753666
    Abstract: Compression and decompression technology within a solid-state device (SSD) is disclosed that provides a good compression ratio while taking up less on-chip area. An input interface receives an input stream to be compressed. An output interface provides a compressed stream. A history buffer is of a fixed size that is a fraction of a size of a data buffer. Processing logic encodes into the compressed stream element types, literals and pointers, the latter which reference copies of data found elsewhere within the history buffer during compression. The history buffer may be multiple banks in width, where the data is loaded from the input stream sequentially across rows of the banks. The decompression side may be similarly designed, optionally with a different number of banks. The pointers may be a fixed two bytes including four bits for length and eleven bits for offset of back reference to a copy (or other combination).
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: September 5, 2017
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, Kirk S. Yap, James D. Guilford, Jawad B. Khan
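
    A C sketch of the fixed two-byte pointer mentioned in the abstract above: one element-type bit, four bits of length, and eleven bits of back-reference offset. The exact bit positions and the length bias are assumptions.

      #include <stdint.h>
      #include <stdio.h>

      #define PTR_FLAG   0x8000u              /* top bit: 1 = pointer, 0 = literals */
      #define LEN_BITS   4
      #define OFF_BITS   11
      #define MIN_MATCH  3                    /* assumed bias so 4 bits cover 3..18 */

      static uint16_t pack_pointer(unsigned length, unsigned offset)
      {
          return (uint16_t)(PTR_FLAG
              | (((length - MIN_MATCH) & ((1u << LEN_BITS) - 1)) << OFF_BITS)
              | (offset & ((1u << OFF_BITS) - 1)));
      }

      static void unpack_pointer(uint16_t p, unsigned *length, unsigned *offset)
      {
          *offset = p & ((1u << OFF_BITS) - 1);
          *length = ((p >> OFF_BITS) & ((1u << LEN_BITS) - 1)) + MIN_MATCH;
      }

      int main(void)
      {
          unsigned len, off;
          uint16_t p = pack_pointer(7, 1500);  /* copy 7 bytes from 1500 back */
          unpack_pointer(p, &len, &off);
          printf("packed 0x%04x -> length %u, offset %u\n", p, len, off);
          return 0;
      }
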
  • Publication number: 20170250706
    Abstract: Technologies for data decompression include a computing device that reads a symbol tag byte from an input stream. The computing device determines whether the symbol can be decoded using a fast-path routine, and if not, executes a slow-path routine to decompress the symbol. The slow-path routine may include data-dependent branch instructions that may be unpredictable using branch prediction hardware. For the fast-path routine, the computing device determines a next symbol increment value, a literal increment value, a data length, and an offset based on the tag byte, without executing an unpredictable branch instruction. The computing device sets a source pointer to either literal data or reference data as a function of the tag byte, without executing an unpredictable branch instruction. The computing device may set the source pointer using a conditional move instruction. The computing device copies the data and processes remaining symbols. Other embodiments are described and claimed.
    Type: Application
    Filed: December 9, 2016
    Publication date: August 31, 2017
    Inventors: Vinodh Gopal, Sean M. Gulley, James D. Guilford
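
    A C sketch of the branch-free fast path in the abstract above. The tag-byte layout is invented for the sketch; the point it illustrates is that lengths, increments, and the source pointer are derived from the tag with arithmetic and conditional selects (which compilers typically lower to conditional moves) rather than data-dependent branches.

      #include <stdint.h>
      #include <stddef.h>
      #include <stdio.h>

      static size_t decode_symbol(const uint8_t *in, uint8_t *out, size_t op)
      {
          uint8_t tag     = in[0];
          size_t  is_copy = tag & 1;                 /* 0 = literal run, 1 = copy */
          size_t  lit_len = (size_t)(tag >> 1) + 1;
          size_t  cp_len  = (size_t)((tag >> 1) & 0x07) + 4;
          size_t  offset  = (size_t)(tag >> 4) + 1;

          size_t len = is_copy ? cp_len : lit_len;          /* select, no branch */
          const uint8_t *src = is_copy ? out + op - offset : in + 1;

          for (size_t i = 0; i < len; i++)           /* forward copy also handles  */
              out[op + i] = src[i];                  /* overlapping back-references */

          return 1 + (1 - is_copy) * lit_len;        /* input bytes consumed */
      }

      int main(void)
      {
          uint8_t out[64] = { 0 };
          /* tag 0x04: literal run of 3 ("hi!"); tag 0x25: copy 6 bytes from
           * offset 3, extending "hi!" into "hi!hi!hi!". */
          const uint8_t in[] = { 0x04, 'h', 'i', '!', 0x25 };
          size_t ip = 0, op = 0;

          ip += decode_symbol(in + ip, out, op); op += 3;
          ip += decode_symbol(in + ip, out, op); op += 6;
          printf("%.*s\n", (int)op, out);
          return 0;
      }
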
  • Patent number: 9747105
    Abstract: Method and apparatus for performing a shift and XOR operation. In one embodiment, an apparatus includes execution resources to execute a first instruction. In response to the first instruction, said execution resources perform a shift and XOR on at least one value.
    Type: Grant
    Filed: December 17, 2009
    Date of Patent: August 29, 2017
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, James D. Guilford, Erdinc Ozturk, Wajdi K. Feghali, Gilbert M. Wolrich, Martin G. Dixon
  • Patent number: 9740484
    Abstract: An apparatus and method are described for processing bit streams using bit-oriented instructions. For example, a method according to one embodiment includes the operations of: executing an instruction to get bits for an operation, the instruction identifying a start bit address and a number of bits to be retrieved; retrieving the bits identified by the start bit address and number of bits from a bit-oriented register or cache; and performing a sequence of specified bit operations on the retrieved bits to generate results.
    Type: Grant
    Filed: December 22, 2011
    Date of Patent: August 22, 2017
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, James D. Guilford, Gilbert M. Wolrich, Erdinc Ozturk, Wajdi K. Feghali, Kirk S. Yap, Sean M. Gulley, Martin G. Dixon, Robert S. Chappell
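
    A C sketch of the bit-oriented get-bits operation in the abstract above: given a start bit address and a bit count, return those bits from a buffer. The filing describes this as an instruction over a bit-oriented register or cache; the LSB-first bit order here is an assumption.

      #include <stdint.h>
      #include <stdio.h>

      static uint64_t get_bits(const uint8_t *buf, uint64_t start_bit, unsigned nbits)
      {
          uint64_t v = 0;
          for (unsigned i = 0; i < nbits; i++) {       /* LSB-first bit order */
              uint64_t bit = start_bit + i;
              v |= (uint64_t)((buf[bit >> 3] >> (bit & 7)) & 1) << i;
          }
          return v;
      }

      int main(void)
      {
          const uint8_t buf[] = { 0xB4, 0x3F };        /* 0x3FB4 when read LSB-first */
          printf("5 bits starting at bit 2 = 0x%llx\n",
                 (unsigned long long)get_bits(buf, 2, 5));
          return 0;
      }
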
  • Publication number: 20170235576
    Abstract: An apparatus and method for performing parallel decoding of prefix codes such as Huffman codes. For example, one embodiment of an apparatus comprises: a first decompression module to perform a non-speculative decompression of a first portion of a prefix code payload comprising a first plurality of symbols; and a second decompression module to perform speculative decompression of a second portion of the prefix code payload comprising a second plurality of symbols concurrently with the non-speculative decompression performed by the first decompression module.
    Type: Application
    Filed: December 6, 2016
    Publication date: August 17, 2017
    Inventors: Sudhir K. Satpathy, Sanu K. Mathew, Vinodh Gopal, James D. Guilford
  • Publication number: 20170235571
    Abstract: According to one embodiment, a processor includes an instruction decoder to receive an instruction to process a multiply-accumulate operation, the instruction having a first operand, a second operand, a third operand, and a fourth operand. The first operand is to specify a first storage location to store an accumulated value; the second operand is to specify a second storage location to store a first value and a second value; and the third operand is to specify a third storage location to store a third value. The processor further includes an execution unit coupled to the instruction decoder to perform the multiply-accumulate operation to multiply the first value with the second value to generate a multiply result and to accumulate the multiply result and at least a portion of a third value to an accumulated value based on the fourth operand.
    Type: Application
    Filed: January 3, 2017
    Publication date: August 17, 2017
    Inventors: Vinodh Gopal, Erdinc Ozturk, James D. Guilford, Gilbert M. Wolrich
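
    A scalar C sketch of the multiply-accumulate operation in the abstract above: the second operand packs the two values to be multiplied, and the fourth operand selects which portion of the third value is folded into the accumulator. The 32-bit operand halves and the selector semantics are assumptions.

      #include <stdint.h>
      #include <stdio.h>

      static uint64_t mul_acc(uint64_t acc, uint64_t packed, uint64_t third, int sel)
      {
          uint64_t a = packed & 0xFFFFFFFFu;          /* first value  (low half)  */
          uint64_t b = packed >> 32;                  /* second value (high half) */
          uint64_t part = sel ? (third >> 32) : (third & 0xFFFFFFFFu);
          return acc + a * b + part;
      }

      int main(void)
      {
          uint64_t packed = ((uint64_t)7 << 32) | 6;  /* values 6 and 7 */
          uint64_t acc = mul_acc(100, packed, 0x0000000500000009ull, 0);
          printf("acc = %llu\n", (unsigned long long)acc);  /* 100 + 42 + 9 = 151 */
          return 0;
      }
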
  • Patent number: 9733858
    Abstract: A processing system is provided that includes a memory for storing an input bit stream and a processing logic, operatively coupled to the memory, to generate a first score based on: a first set of matching data related to a match between a first bit subsequence and a candidate bit subsequence within the input bit stream, and a first distance of the candidate bit subsequence from the first set of matching data. A second score is generated based on a second set of matching data related to a match between a second bit subsequence and the candidate bit subsequence, and a second distance of the candidate bit subsequence from the second set of matching data. A code to replace the first or second bit subsequence in an output bit stream is identified. Selection of the one of the bit subsequences to replace is based on a comparison of the scores.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: August 15, 2017
    Assignee: Intel Corporation
    Inventors: James D. Guilford, Vinodh Gopal, Gilbert M. Wolrich, Daniel F. Cutter
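
    A C sketch of the scoring idea in the abstract above: each candidate is scored from the bytes it covers minus a cost that grows with its distance, and the better-scoring subsequence is the one replaced by a code. The weights below are assumptions, not the claimed scoring function.

      #include <stdio.h>

      struct candidate {
          unsigned length;    /* bytes matched against the candidate subsequence */
          unsigned distance;  /* how far back the candidate lies in the stream   */
      };

      static int bits_for_distance(unsigned d)        /* rough encoding cost */
      {
          int bits = 1;
          while (d >>= 1)
              bits++;
          return bits;
      }

      static int score(struct candidate c)
      {
          return (int)(8 * c.length) - bits_for_distance(c.distance);
      }

      int main(void)
      {
          struct candidate first  = { .length = 5, .distance = 60000 };
          struct candidate second = { .length = 4, .distance = 64 };

          /* A shorter but much closer match can outscore a longer, distant one. */
          printf("replacing the %s subsequence\n",
                 score(first) >= score(second) ? "first" : "second");
          return 0;
      }
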
  • Publication number: 20170187388
    Abstract: Methods and apparatuses relating to data decompression are described. In one embodiment, a hardware processor includes a core to execute a thread and offload a decompression thread for an encoded, compressed data stream comprising a literal code, a length code, and a distance code, and a hardware decompression accelerator to execute the decompression thread to selectively provide the encoded, compressed data stream to a first circuit to serially decode the literal code to a literal symbol, serially decode the length code to a length symbol, and serially decode the distance code to a distance symbol, and selectively provide the encoded, compressed data stream to a second circuit to look up the literal symbol for the literal code from a table, look up the length symbol for the length code from the table, and look up the distance symbol for the distance code from the table.
    Type: Application
    Filed: December 26, 2015
    Publication date: June 29, 2017
    Inventors: Sudhir K. Satpathy, James D. Guilford, Sanu K. Mathew, Vinodh Gopal, Vikram B. Suresh
  • Patent number: 9690488
    Abstract: In an embodiment, a processor includes hardware processing cores, a cache memory, and a compression accelerator comprising a hash table memory. The compression accelerator is to: determine a hash value for input data to be compressed; read a first plurality of N location values stored in a hash table entry indexed by the hash value; perform a first set of string searches in parallel from a history buffer using the first plurality of N location values stored in the hash table entry; read a second plurality of N location values stored in a first overflow table entry indexed by a first overflow pointer included in the hash table entry; and perform a second set of string searches in parallel from the history buffer using the second plurality of N location values stored in the first overflow table entry. Other embodiments are described and claimed.
    Type: Grant
    Filed: October 19, 2015
    Date of Patent: June 27, 2017
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, James D. Guilford, Gilbert M. Wolrich, Daniel F. Cutter
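
    A C sketch of the hash-table layout in the abstract above: each entry holds N candidate locations plus an overflow pointer, so a single read yields a batch of history-buffer positions that can be searched in parallel, and a spilled bucket yields a second batch from the overflow table. N, the table sizes, and the hash are assumptions.

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      #define N             4
      #define TABLE_ENTRIES 4096
      #define NO_OVERFLOW   0xFFFFu

      struct ht_entry {
          uint32_t location[N];   /* positions in the history buffer */
          uint16_t count;         /* how many of the N slots are valid */
          uint16_t overflow;      /* index of an overflow entry, or NO_OVERFLOW */
      };

      static struct ht_entry hash_table[TABLE_ENTRIES];
      static struct ht_entry overflow_table[1024];

      static uint32_t hash4(const uint8_t *p)
      {
          uint32_t v;
          memcpy(&v, p, 4);
          return (v * 2654435761u) >> 20;     /* 12-bit index */
      }

      /* Collect up to 2*N candidate locations: N from the hash-table entry and,
       * if it has spilled, N more from the first overflow entry. */
      static unsigned gather_candidates(const uint8_t *input, size_t pos,
                                        uint32_t out[2 * N])
      {
          const struct ht_entry *e = &hash_table[hash4(input + pos)];
          unsigned n = 0;

          for (unsigned i = 0; i < e->count && i < N; i++)
              out[n++] = e->location[i];              /* first parallel batch */

          if (e->overflow != NO_OVERFLOW) {
              const struct ht_entry *o = &overflow_table[e->overflow];
              for (unsigned i = 0; i < o->count && i < N; i++)
                  out[n++] = o->location[i];          /* second parallel batch */
          }
          return n;
      }

      int main(void)
      {
          const uint8_t input[] = "abcdabcdabcd";
          uint32_t cand[2 * N];
          struct ht_entry *e = &hash_table[hash4(input)];
          e->count = 2;
          e->location[0] = 0;
          e->location[1] = 4;
          e->overflow = NO_OVERFLOW;
          printf("candidates for position 8: %u\n",
                 gather_candidates(input, 8, cand));
          return 0;
      }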