Patents by Inventor Daniel F. Cutter

Daniel F. Cutter is named as an inventor on the following patent filings. The listing includes pending patent applications as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20170147255
    Abstract: A processing system is provided that includes a memory for storing an input bit stream and a processing logic, operatively coupled to the memory, to generate a first score based on: a first set of matching data related to a match between a first bit subsequence and a candidate bit subsequence within the input bit stream, and a first distance of the candidate bit subsequence from the first set of matching data. A second score is generated based on a second set of matching data related to a match between a second bit subsequence and the candidate bit subsequence, and a second distance of the candidate bit subsequence from the second set of matching data. A code to replace the first or second bit subsequence in an output bit stream is identified. Selection of the one of the bit subsequences to replace is based on a comparison of the scores.
    Type: Application
    Filed: February 8, 2017
    Publication date: May 25, 2017
    Inventors: James D. Guilford, Vinodh Gopal, Gilbert M. Wolrich, Daniel F. Cutter
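    Illustrative sketch (not part of publication 20170147255): a minimal Python version of the score-based match selection summarized above, assuming a made-up scoring rule that rewards match length and penalizes back-reference distance; the abstract does not disclose the actual weighting.
```python
from math import log2

def score(match_length: int, distance: int) -> float:
    """Return a score for a candidate match (larger is better).

    The real weighting is not public; this illustrative version assumes
    longer matches are worth more and distant matches cost more bits.
    """
    return match_length * 8 - log2(distance + 1)

def choose_match(first, second):
    """Pick the (length, distance) candidate with the higher score."""
    return first if score(*first) >= score(*second) else second

# A nearby 5-byte match beats a far-away 6-byte match here because the
# distance penalty outweighs the extra byte of length.
print(choose_match((5, 64), (6, 65536)))   # -> (5, 64)
```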
  • Publication number: 20170126248
    Abstract: Example data compression methods disclosed herein include determining a first hash chain index corresponding to a first position in an input data buffer based on a first group of bytes accessed from the input data buffer beginning at a first look-ahead offset from the first position. If a first hash chain (indexed by the first hash chain index) does not satisfy a quality condition, a second hash chain index corresponding to the first position in the input data buffer is determined based on a second group of bytes accessed from the input data buffer beginning at a second look-ahead offset from the first position. The input data buffer is searched at respective adjusted buffer positions to find a second string of data bytes matching a first string of data bytes, and information related to the second string of data bytes is provided to an encoder to output compressed data.
    Type: Application
    Filed: January 13, 2017
    Publication date: May 4, 2017
    Inventors: Vinodh Gopal, James D. Guilford, Gilbert M. Wolrich, Daniel F. Cutter
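    Illustrative sketch (not part of publication 20170126248): a minimal Python rendering of the look-ahead hash-chain fallback described above; the hash function, the chain-length "quality condition", and all names are assumptions.
```python
def hash_index(buf: bytes, pos: int, lookahead: int, group: int = 4) -> int:
    """Hash the group of bytes starting at pos + lookahead (group size assumed)."""
    window = buf[pos + lookahead : pos + lookahead + group]
    return hash(window) & 0xFFFF          # assumed 16-bit table index

def candidate_positions(buf, hash_chains, pos, offsets=(1, 2), min_chain_len=4):
    """Return earlier buffer positions worth searching for a match at pos.

    Try the chain selected with the first look-ahead offset; if it is too
    short to be useful (the assumed "quality condition"), fall back to the
    chain selected with the second look-ahead offset. Positions stored in a
    chain are adjusted back by that chain's look-ahead offset.
    """
    for offset in offsets:
        chain = hash_chains.get(hash_index(buf, pos, offset), [])
        if len(chain) >= min_chain_len or offset == offsets[-1]:
            return [p - offset for p in chain]
    return []
```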
  • Publication number: 20170109056
    Abstract: In an embodiment, a processor includes hardware processing cores, a cache memory, and a compression accelerator comprising a hash table memory. The compression accelerator is to: determine a hash value for input data to be compressed; read a first plurality of N location values stored in a hash table entry indexed by the hash value; perform a first set of string searches in parallel from a history buffer using the first plurality of N location values stored in the hash table entry; read a second plurality of N location values stored in a first overflow table entry indexed by a first overflow pointer included in the hash table entry; and perform a second set of string searches in parallel from the history buffer using the second plurality of N location values stored in the first overflow table entry. Other embodiments are described and claimed.
    Type: Application
    Filed: October 19, 2015
    Publication date: April 20, 2017
    Inventors: Vinodh Gopal, James D. Guilford, Gilbert M. Wolrich, Daniel F. Cutter
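    Illustrative sketch (not part of publication 20170109056): a toy Python model of a hash-table entry that holds N history-buffer locations plus an overflow pointer; the field names and N are assumed, and the parallel hardware searches are walked sequentially here.
```python
from dataclasses import dataclass, field
from typing import Optional

N = 4  # assumed number of history-buffer locations per table entry

@dataclass
class HashEntry:
    locations: list = field(default_factory=list)   # up to N earlier positions
    overflow: Optional["HashEntry"] = None          # pointer to a spill-over entry

def search_batches(entry, max_overflow_hops=1):
    """Yield batches of up to N history locations: the hash-table entry first,
    then entries reached through overflow pointers. Hardware would launch each
    batch of string searches in parallel; this model just walks them in order."""
    hops = 0
    while entry is not None and hops <= max_overflow_hops:
        yield entry.locations[:N]
        entry = entry.overflow
        hops += 1

# Two batches of positions: one from the entry, one from its overflow entry.
head = HashEntry(locations=[10, 42, 77, 93], overflow=HashEntry(locations=[101, 150]))
print(list(search_batches(head)))   # -> [[10, 42, 77, 93], [101, 150]]
```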
  • Publication number: 20170111059
    Abstract: A processing device includes an accelerator circuit to identify a byte in a byte stream, determine whether a first byte string starting from a first byte position of the byte matches a second byte string starting from a second byte position, responsive to determining that the first byte string matches the second byte string, generate a token comprising a first symbol encoding a length of the first byte string and a second symbol encoding a byte distance between the first byte position and the second byte position, and responsive to determining that the first byte string does not match another byte string, generate the token comprising the first symbol comprising the byte and a second symbol encoding a determined value.
    Type: Application
    Filed: December 29, 2016
    Publication date: April 20, 2017
    Inventors: James D. Guilford, Vinodh Gopal, Gilbert M. Wolrich, Daniel F. Cutter
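    Illustrative sketch (not part of publication 20170111059): a minimal Python token builder in the spirit of the abstract above, emitting either a literal token or a (length, distance) token; the marker value and tuple layout are assumptions.
```python
LITERAL_MARKER = 0   # assumed "determined value" flagging a literal token

def make_token(data: bytes, pos: int, match):
    """Build a token for the byte at pos.

    match is either None (no earlier string matched) or a (length, distance)
    pair: the length of the matching string and how far back it starts.
    """
    if match is None:
        return (data[pos], LITERAL_MARKER)        # literal byte + marker symbol
    length, distance = match
    return (length, distance)                     # back-reference token

print(make_token(b"abcabc", 3, (3, 3)))   # -> (3, 3): "abc" repeats 3 bytes back
print(make_token(b"abcabc", 0, None))     # -> (97, 0): literal 'a'
```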
  • Publication number: 20170093423
    Abstract: Detailed herein are embodiments of systems, methods, and apparatuses for compression using hardware and software. Embodiments include compressor hardware to operate on two streams with one of the streams being an offset of the other stream. Additionally, in some embodiments, the output of the compressor hardware is submitted to software for further processing.
    Type: Application
    Filed: October 18, 2016
    Publication date: March 30, 2017
    Inventors: Vinodh Gopal, James D. Guilford, Gilbert M. Wolrich, Daniel F. Cutter
  • Publication number: 20170083453
    Abstract: A processing system is provided that includes a memory for storing an input bit stream and a processing logic, operatively coupled to the memory, to generate a first score based on: a first set of matching data related to a match between a first bit subsequence and a candidate bit subsequence within the input bit stream, and a first distance of the candidate bit subsequence from the first set of matching data. A second score is generated based on a second set of matching data related to a match between a second bit subsequence and the candidate bit subsequence, and a second distance of the candidate bit subsequence from the second set of matching data. A code to replace the first or second bit subsequence in an output bit stream is identified. Selection of the one of the bit subsequences to replace is based on a comparison of the scores.
    Type: Application
    Filed: August 5, 2016
    Publication date: March 23, 2017
    Inventors: James D. Guilford, Vinodh Gopal, Gilbert M. Wolrich, Daniel F. Cutter
  • Patent number: 9594695
    Abstract: A processing system is provided that includes a memory for storing an input bit stream and a processing logic, operatively coupled to the memory, to generate a first score based on: a first set of matching data related to a match between a first bit subsequence and a candidate bit subsequence within the input bit stream, and a first distance of the candidate bit subsequence from the first set of matching data. A second score is generated based on a second set of matching data related to a match between a second bit subsequence and the candidate bit subsequence, and a second distance of the candidate bit subsequence from the second set of matching data. A code to replace the first or second bit subsequence in an output bit stream is identified. Selection of the one of the bit subsequences to replace is based on a comparison of the scores.
    Type: Grant
    Filed: August 5, 2016
    Date of Patent: March 14, 2017
    Assignee: Intel Corporation
    Inventors: James D. Guilford, Vinodh Gopal, Gilbert M. Wolrich, Daniel F. Cutter
  • Patent number: 9584155
    Abstract: Example data compression methods disclosed herein include determining a hash chain index corresponding to a first position in an input data buffer based on a group of bytes beginning at a look-ahead offset from the first position. Such disclosed example methods also include, when a hash chain, which is indexed by the hash chain index, satisfies a quality condition, searching the input data buffer at respective adjusted buffer positions corresponding to a set of buffer positions stored in the hash chain being offset by the look-ahead offset to find a second data string matching a first data string beginning at the first position in the input data buffer. Such disclosed example methods further include, when the second data string satisfies a length condition, providing a relative position and a length of the second data string to an encoder to output compressed data corresponding to the input data buffer.
    Type: Grant
    Filed: September 24, 2015
    Date of Patent: February 28, 2017
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, James D. Guilford, Gilbert M. Wolrich, Daniel F. Cutter
  • Patent number: 9563579
    Abstract: In an embodiment, a shared memory fabric is configured to receive memory requests from multiple agents, where at least some of the requests have an associated order identifier and a deadline value to indicate a maximum latency prior to completion of the memory request. Responsive to the requests, the fabric is to arbitrate between the requests based at least in part on the deadline values. Other embodiments are described and claimed.
    Type: Grant
    Filed: February 28, 2013
    Date of Patent: February 7, 2017
    Assignee: Intel Corporation
    Inventors: Daniel F. Cutter, Blaise Fanning, Ramadass Nagarajan, Ravishankar Iyer, Quang T. Le, Ravi Kolagotla, Ioannis T. Schoinas, Jose S. Niell
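    Illustrative sketch (not part of patent 9563579): a toy Python arbiter that services the pending memory request with the earliest deadline first; the request fields and policy are simplified assumptions, and order identifiers are reduced to arrival order.
```python
import heapq
import itertools

class DeadlineArbiter:
    """Grant the pending memory request with the most urgent deadline first."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # tie-breaker preserves arrival order

    def submit(self, agent_id, address, deadline):
        """deadline: maximum latency (e.g. in fabric clock cycles) allowed
        before the request must complete."""
        heapq.heappush(self._heap, (deadline, next(self._seq), agent_id, address))

    def grant(self):
        """Pop and return the most urgent request, or None if idle."""
        if not self._heap:
            return None
        deadline, _, agent_id, address = heapq.heappop(self._heap)
        return agent_id, address, deadline

arbiter = DeadlineArbiter()
arbiter.submit("display", 0x1000, deadline=20)   # isochronous, tight deadline
arbiter.submit("cpu", 0x2000, deadline=200)      # latency-tolerant
print(arbiter.grant())   # ('display', 4096, 20) is serviced first
```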
  • Patent number: 9537504
    Abstract: A processing device includes a storage device to store data and a processor, operably coupled to the storage device, the processor to receive a token stream comprising a plurality of tokens generated based on a byte stream comprising a plurality of bytes, wherein each token in the token stream comprises at least one symbol associated with a respective byte in the byte stream, and wherein the at least one symbol represents one of the respective byte, a length of a first byte string starting from the respective byte, or a byte distance between the first byte string and a matching second byte string, generate a graph comprising a plurality of nodes and edges based on the token stream, wherein each token in the token stream is associated with a respective node connected by at least one edge to another node, and wherein the at least one edge is associated with a cost function to encode the at least one symbol stored in the each token, identify, based on the graph, a path between a first node associated with a begi
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: January 3, 2017
    Assignee: Intel Corporation
    Inventors: James D. Guilford, Vinodh Gopal, Gilbert M. Wolrich, Daniel F. Cutter
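    Illustrative sketch (not part of patent 9537504): a small Python shortest-path search over a token graph in the spirit of the (truncated) abstract above; the node/edge layout and the bit costs are invented for illustration.
```python
import heapq

def cheapest_parse(num_positions, edges):
    """edges: {node: [(next_node, bit_cost, token), ...]}.
    Return (total_cost, tokens) for the cheapest path from node 0 to the
    node num_positions, i.e. the cheapest sequence of tokens to encode."""
    best = {0: 0}
    heap = [(0, 0, [])]
    while heap:
        cost, node, tokens = heapq.heappop(heap)
        if node == num_positions:
            return cost, tokens
        for nxt, bit_cost, token in edges.get(node, []):
            new_cost = cost + bit_cost
            if nxt not in best or new_cost < best[nxt]:
                best[nxt] = new_cost
                heapq.heappush(heap, (new_cost, nxt, tokens + [token]))
    return None

# Three bytes: either three literals (9 bits each) or one literal plus a
# 2-byte back-reference costing 14 bits.
edges = {
    0: [(1, 9, "lit a")],
    1: [(2, 9, "lit b"), (3, 14, "ref len2")],
    2: [(3, 9, "lit c")],
}
print(cheapest_parse(3, edges))   # -> (23, ['lit a', 'ref len2'])
```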
  • Patent number: 9535860
    Abstract: In an embodiment, a shared memory fabric is configured to receive memory requests from multiple agents, where at least some of the requests have an associated deadline value to indicate a maximum latency prior to completion of the memory request. Responsive to the requests, the fabric is to arbitrate between the requests based at least in part on the deadline values. Other embodiments are described and claimed.
    Type: Grant
    Filed: January 17, 2013
    Date of Patent: January 3, 2017
    Assignee: Intel Corporation
    Inventors: Daniel F. Cutter, Blaise Fanning, Ramadass Nagarajan, Jose S. Niell, Debra Bernstein, Deepak Limaye, Ioannis T. Schoinas, Ravishankar Iyer
  • Publication number: 20160378701
    Abstract: An apparatus having a fabric interconnect that supports multiple topologies and method for using the same are disclosed. In one embodiment, the apparatus comprises mode memory to store information indicative of one of the plurality of modes; and a first fabric operable in a plurality of modes, where the fabric comprises logic coupled to the mode memory to control processing of read and write requests to memory received by the first fabric according to the mode identified by that information.
    Type: Application
    Filed: June 26, 2015
    Publication date: December 29, 2016
    Inventors: Jose S. Niell, Daniel F. Cutter, Stephen J. Robinson, Mukesh K. Patel
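    Illustrative sketch (not part of publication 20160378701): a toy Python fabric whose stored mode selects how read/write requests are handled; the mode names and behaviors are invented, since the abstract only states that request processing follows the stored mode indicator.
```python
class ConfigurableFabric:
    MODES = ("direct_to_memory", "route_to_secondary_fabric")   # assumed modes

    def __init__(self, mode="direct_to_memory"):
        assert mode in self.MODES
        self.mode_memory = mode          # stored mode indicator

    def handle_request(self, kind, address):
        """Dispatch a read/write request according to the stored mode."""
        if self.mode_memory == "direct_to_memory":
            return f"{kind} {hex(address)} sent to the memory controller"
        return f"{kind} {hex(address)} forwarded to the secondary fabric"

fabric = ConfigurableFabric(mode="route_to_secondary_fabric")
print(fabric.handle_request("read", 0x4000))
```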
  • Patent number: 9473168
    Abstract: Detailed herein are embodiments of systems, methods, and apparatuses for compression using hardware and software. Embodiments include compressor hardware to operate on two streams with one of the streams being an offset of the other stream. Additionally, in some embodiments, the output of the compressor hardware is submitted to software for further processing.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: October 18, 2016
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, James D. Guilford, Gilbert M. Wolrich, Daniel F. Cutter
  • Publication number: 20160283504
    Abstract: A processor includes a memory hierarchy, buffer, and a compression module. The compression module includes logic to evaluate a stream of data to be compressed according to a compression scheme, selectively modify a format of the compression scheme based upon a number of literals received, compress a sequence of the data to produce the output data sequence, and send the output data sequence to the memory hierarchy.
    Type: Application
    Filed: March 27, 2015
    Publication date: September 29, 2016
    Inventors: James D. Guilford, Vinodh Gopal, Gilbert M. Wolrich, Daniel F. Cutter
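    Illustrative sketch (not part of publication 20160283504): a toy Python chooser that switches output format based on the share of literals in a block; the threshold and format names are assumptions, since the abstract only says the format is modified based on the number of literals received.
```python
def pick_format(tokens, literal_ratio_threshold=0.8):
    """Choose an output format for a block of LZ tokens.

    tokens: iterable of ("literal", byte) or ("match", length, distance).
    If almost everything is a literal, a literal-oriented format is assumed
    to be cheaper than a match-oriented one.
    """
    tokens = list(tokens)
    literals = sum(1 for t in tokens if t[0] == "literal")
    ratio = literals / len(tokens) if tokens else 1.0
    return "literal_oriented" if ratio >= literal_ratio_threshold else "match_oriented"

# Nine literals out of ten tokens crosses the assumed 80% threshold.
print(pick_format([("literal", 97)] * 9 + [("match", 4, 100)]))   # literal_oriented
```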
  • Patent number: 9419648
    Abstract: In one embodiment, a processing system is provided. The processing system includes a memory for storing an input bit stream and a processing logic coupled to the memory. The processing logic is to identify, within the input bit stream, a first bit subsequence of an input bit sequence and a second bit subsequence of the input bit sequence. A first score reflecting the length of the first bit subsequence and the distance between the input bit sequence and the first bit subsequence, and a second score reflecting the length of the second bit subsequence, within the input bit stream, and the distance between the input bit sequence and the second bit subsequence, are determined. In view of the first score and the second score, one of the first bit subsequence or the second bit subsequence is selected. A code representing the selected bit subsequence is appended to an output bit sequence.
    Type: Grant
    Filed: September 18, 2015
    Date of Patent: August 16, 2016
    Assignee: Intel Corporation
    Inventors: James D. Guilford, Vinodh Gopal, Gilbert M. Wolrich, Daniel F. Cutter
  • Patent number: 9419647
    Abstract: In an embodiment, a processor includes a compression accelerator coupled to a plurality of hardware processing cores. The compression accelerator is to: receive input data to be compressed; select a particular intermediate format of a plurality of intermediate formats based on a type of compression software to be executed by at least one of the plurality of hardware processing cores; perform a duplicate string elimination operation on the input data to generate a partially compressed output in the particular intermediate format; and provide the partially compressed output in the particular intermediate format to the compression software, wherein the compression software is to perform an encoding operation on the partially compressed output to generate a final compressed output. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 16, 2014
    Date of Patent: August 16, 2016
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, James D. Guilford, Gilbert M. Wolrich, Daniel F. Cutter
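    Illustrative sketch (not part of patent 9419647): a toy Python split between an "accelerator" pass that removes duplicate strings and emits an intermediate format chosen for the target software, and a later software encoding step; the format names and the greedy matcher are assumptions, only the hardware/software division of work comes from the abstract.
```python
INTERMEDIATE_FORMATS = {"deflate_tokens", "raw_lz_pairs"}   # assumed format names

def accelerator_stage(data: bytes, software_kind: str):
    """Pretend hardware pass: pick an intermediate format for the given
    compression software and emit (format, tokens) with duplicate strings removed."""
    fmt = "deflate_tokens" if software_kind == "zlib_like" else "raw_lz_pairs"
    return fmt, toy_duplicate_elimination(data)

def toy_duplicate_elimination(data: bytes):
    """Greedy, minimal LZ pass: emit literals, or a back-reference when the
    next three bytes repeat an earlier occurrence."""
    out, i = [], 0
    while i < len(data):
        prev = data.find(data[i:i + 3], 0, i)
        if len(data) - i >= 3 and prev != -1:
            out.append(("match", 3, i - prev))
            i += 3
        else:
            out.append(("literal", data[i]))
            i += 1
    return out

fmt, tokens = accelerator_stage(b"abcabc", "zlib_like")
# Software would now entropy-encode these tokens into the final compressed output.
print(fmt, tokens)   # deflate_tokens [('literal', 97), ('literal', 98), ('literal', 99), ('match', 3, 3)]
```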
  • Publication number: 20160173123
    Abstract: In an embodiment, a processor includes a compression accelerator coupled to a plurality of hardware processing cores. The compression accelerator is to: receive input data to be compressed; select a particular intermediate format of a plurality of intermediate formats based on a type of compression software to be executed by at least one of the plurality of hardware processing cores; perform a duplicate string elimination operation on the input data to generate a partially compressed output in the particular intermediate format; and provide the partially compressed output in the particular intermediate format to the compression software, wherein the compression software is to perform an encoding operation on the partially compressed output to generate a final compressed output. Other embodiments are described and claimed.
    Type: Application
    Filed: December 16, 2014
    Publication date: June 16, 2016
    Inventors: Vinodh Gopal, James D. Guilford, Gilbert M. Wolrich, Daniel F. Cutter
  • Publication number: 20140281197
    Abstract: In one embodiment, a conflict detection logic is configured to receive a plurality of memory requests from an arbiter of a coherent fabric of a system on a chip (SoC). The conflict detection logic includes snoop filter logic to downgrade a first snooped memory request for a first address to an unsnooped memory request when an indicator associated with the first address indicates that the coherent fabric has control of the first address. Other embodiments are described and claimed.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Inventors: Jose S. Niell, Daniel F. Cutter, James D. Allen, Deepak Limaye, Shadi T. Khasawneh
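    Illustrative sketch (not part of publication 20140281197): a toy Python snoop filter that downgrades a snooped request to unsnooped when the address is marked as controlled by the coherent fabric; the ownership set and request fields are assumptions.
```python
class SnoopFilter:
    def __init__(self):
        self.fabric_owned = set()   # addresses the coherent fabric controls

    def mark_fabric_owned(self, address):
        self.fabric_owned.add(address)

    def filter(self, request):
        """request: dict with 'address' and 'snooped' keys. Return a copy with
        the snoop attribute cleared when no core needs to be snooped."""
        if request["snooped"] and request["address"] in self.fabric_owned:
            return {**request, "snooped": False}   # downgrade: skip core snoops
        return request

sf = SnoopFilter()
sf.mark_fabric_owned(0x8000)
print(sf.filter({"address": 0x8000, "snooped": True}))   # snooped -> False
```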
  • Publication number: 20140240326
    Abstract: In an embodiment, a shared memory fabric is configured to receive memory requests from multiple agents, where at least some of the requests have an associated order identifier and a deadline value to indicate a maximum latency prior to completion of the memory request. Responsive to the requests, the fabric is to arbitrate between the requests based at least in part on the deadline values. Other embodiments are described and claimed.
    Type: Application
    Filed: February 28, 2013
    Publication date: August 28, 2014
    Inventors: Daniel F. Cutter, Blaise Fanning, Ramadass Nagarajan, Ravishankar Iyer, Quang T. Le, Ravi Kolagotla, Ioannis T. Schoinas, Jose S. Niell
  • Publication number: 20140201471
    Abstract: In an embodiment, a shared memory fabric is configured to receive memory requests from multiple agents, where at least some of the requests have an associated deadline value to indicate a maximum latency prior to completion of the memory request. Responsive to the requests, the fabric is to arbitrate between the requests based at least in part on the deadline values. Other embodiments are described and claimed.
    Type: Application
    Filed: January 17, 2013
    Publication date: July 17, 2014
    Inventors: Daniel F. Cutter, Blaise Fanning, Ramadass Nagarajan, Jose S. Niell, Debra Bernstein, Deepak Limaye, Ioannis T. Schoinas, Ravishankar Iyer