Patents by Inventor Kirk Yap

Kirk Yap has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11955995
    Abstract: A lossless data compressor of an aspect includes a first lossless data compressor circuitry coupled to receive input data. The first lossless data compressor circuitry is to apply a first lossless data compression approach to compress the input data to generate intermediate compressed data. The apparatus also includes a second lossless data compressor circuitry coupled with the first lossless data compressor circuitry to receive the intermediate compressed data. The second lossless data compressor circuitry is to apply a second lossless data compression approach to compress at least some of the intermediate compressed data to generate compressed data. The second lossless data compression approach is different from the first lossless data compression approach. Lossless data decompressors are also disclosed, as are methods of lossless data compression and decompression.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: April 9, 2024
    Assignee: Intel Corporation
    Inventors: James Guilford, Vinodh Gopal, Daniel Cutter, Kirk Yap, Wajdi Feghali, George Powley
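
A minimal software sketch of the two-stage idea in the abstract above: a simple run-length first stage feeding a DEFLATE (zlib) second stage. The patent does not name either algorithm; both stages, the function names, and the sample data are assumptions chosen only to illustrate chaining two different lossless compressors.

import zlib

def rle_compress(data: bytes) -> bytes:
    # First stage: byte-level run-length encoding into (run length, value) pairs.
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def two_stage_compress(data: bytes) -> bytes:
    intermediate = rle_compress(data)       # first lossless stage
    return zlib.compress(intermediate, 9)   # second, different lossless stage

def two_stage_decompress(blob: bytes) -> bytes:
    intermediate = zlib.decompress(blob)
    out = bytearray()
    for i in range(0, len(intermediate), 2):
        run, value = intermediate[i], intermediate[i + 1]
        out += bytes([value]) * run
    return bytes(out)

sample = b"AAAABBBCCDAA" * 100
assert two_stage_decompress(two_stage_compress(sample)) == sample
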
  • Publication number: 20230198548
    Abstract: Apparatus and method for detecting a constant data block are described herein. An apparatus embodiment includes compression circuitry to perform compression operations on a memory block; constant detection circuitry to, concurrently with performance of the compression operations on the memory block, determine that the memory block is a constant data block comprised of only repeat instances of a constant value; and controller circuitry to associate a first indication with the memory block based on the determination, the first indication usable for controlling whether to abort the compression operations or whether to discard a compressed memory block generated from the compression operations.
    Type: Application
    Filed: December 22, 2021
    Publication date: June 22, 2023
    Applicant: Intel Corporation
    Inventors: James David Guilford, Vinodh Gopal, Daniel Frederick Cutter, Kirk Yap
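
A rough software model of the constant-block check described above, run alongside a stand-in compressor: if the block turns out to be all one value, the compressed output is discarded in favor of a compact marker. The zlib stand-in, block size, and return format are assumptions, not the patent's design.

import zlib

BLOCK_SIZE = 4096

def is_constant_block(block: bytes) -> bool:
    # True if the block contains only repeat instances of its first byte.
    return len(block) > 0 and block.count(block[0]) == len(block)

def process_block(block: bytes):
    compressed = zlib.compress(block)   # compression proceeds regardless
    if is_constant_block(block):
        # Discard the compressed output; a tiny marker suffices for a constant block.
        return ("constant", block[0], len(block))
    return ("compressed", compressed)

print(process_block(b"\x00" * BLOCK_SIZE))        # ('constant', 0, 4096)
print(process_block(bytes(range(256)) * 16)[0])   # 'compressed'
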
  • Patent number: 11663003
    Abstract: An apparatus and method are described for performing efficient Boolean operations in a pipelined processor which, in one embodiment, does not natively support three operand instructions. For example, in one embodiment, a processor comprises: a set of registers for storing packed operands; Boolean operation logic to execute a single instruction which uses three or more source operands packed in the set of registers, the Boolean operation logic to read at least three source operands and an immediate value to perform a Boolean operation on the three source operands, wherein the Boolean operation comprises: combining a bit read from each of the three operands to form an index to the immediate value, the index identifying a bit position within the immediate value; reading the bit from the identified bit position of the immediate value; and storing the bit from the identified bit position of the immediate value in a destination register.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: May 30, 2023
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, Wajdi Feghali, Gilbert Wolrich, Kirk Yap
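
A software sketch of the three-source Boolean operation the abstract describes: one bit from each source forms a 3-bit index that selects a bit of the 8-bit immediate, so the immediate acts as a truth table. The 64-bit default width, function name, and the XOR example are assumptions for illustration; this shows the general idea, not the patented hardware itself.

def ternary_logic(a: int, b: int, c: int, imm8: int, width: int = 64) -> int:
    result = 0
    for pos in range(width):
        # One bit from each of the three sources forms a 3-bit index.
        index = (((a >> pos) & 1) << 2) | (((b >> pos) & 1) << 1) | ((c >> pos) & 1)
        # The index identifies a bit position within the immediate value.
        result |= ((imm8 >> index) & 1) << pos
    return result

# imm8 = 0x96 encodes a XOR b XOR c (bit i of 0x96 is the parity of i).
a, b, c = 0b1100, 0b1010, 0b0110
assert ternary_logic(a, b, c, 0x96) == a ^ b ^ c
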
  • Patent number: 11516013
    Abstract: Disclosed embodiments relate to encrypting or decrypting confidential data with additional authentication data by an accelerator and a processor. In one example, a processor includes processor circuitry to compute a first hash of a first block of data stored in a memory, store the first hash in the memory, and generate an authentication tag based in part on a second hash. The processor further includes accelerator circuitry to obtain the first hash from the memory, decrypt a second block of data using the first hash, and compute the second hash based in part on the first hash and the second block of data.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: November 29, 2022
    Assignee: Intel Corporation
    Inventors: James Guilford, Vinodh Gopal, Kirk Yap
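
A deliberately simplified model of the processor/accelerator handoff in the abstract: the processor computes and stores an intermediate hash, the accelerator picks it up and folds the next block into a second hash, and the processor derives a tag from that second hash. Plain SHA-256 chaining and the dictionary standing in for memory are assumptions; a real implementation would use an authenticated cipher such as AES-GCM.

import hashlib

memory = {}   # stands in for memory shared by processor and accelerator circuitry

def processor_phase(block1: bytes) -> None:
    # Processor circuitry: compute the first hash and store it in memory.
    memory["first_hash"] = hashlib.sha256(block1).digest()

def accelerator_phase(block2: bytes) -> None:
    # Accelerator circuitry: obtain the first hash, fold in the second block.
    first_hash = memory["first_hash"]
    memory["second_hash"] = hashlib.sha256(first_hash + block2).digest()

def processor_finalize(key: bytes) -> bytes:
    # Authentication tag derived in part from the second hash.
    return hashlib.sha256(key + memory["second_hash"]).digest()

processor_phase(b"block-1 ciphertext")
accelerator_phase(b"block-2 ciphertext")
print(processor_finalize(b"session-key").hex())
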
  • Publication number: 20220365885
    Abstract: Techniques are described for providing low-overhead cryptographic memory isolation to mitigate attack vulnerabilities in a multi-user virtualized computing environment. Memory read and memory write operations for target data, each operation initiated via an instruction associated with a particular virtual machine (VM), include the generation and/or validation of a message authentication code that is based at least on a VM-specific cryptographic key and a physical memory address of the target data. Such operations may further include transmitting the generated message authentication code via a plurality of ancillary bits incorporated within a data line that includes the target data. In the event of a validation failure, one or more error codes may be generated and provided to distinct trust domain architecture entities based on an operating mode of the associated virtual machine.
    Type: Application
    Filed: July 25, 2022
    Publication date: November 17, 2022
    Applicant: Intel Corporation
    Inventors: Siddhartha Chhabra, Rajat Agarwal, Baiju Patel, Kirk Yap
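
An illustrative model of the MAC scheme in the abstract above: the tag is keyed per VM and bound to both the physical address and the data line, travels alongside the line, and is re-checked on read. HMAC-SHA-256, the 8-byte tag truncation, and the error handling shown are assumptions made for the sketch.

import hashlib
import hmac
import struct

def make_tag(vm_key: bytes, phys_addr: int, data_line: bytes) -> bytes:
    # Tag binds the VM-specific key, the physical address, and the data line.
    msg = struct.pack("<Q", phys_addr) + data_line
    return hmac.new(vm_key, msg, hashlib.sha256).digest()[:8]

def memory_write(store, vm_key, phys_addr, data_line):
    # The tag travels as ancillary bits alongside the data line.
    store[phys_addr] = (data_line, make_tag(vm_key, phys_addr, data_line))

def memory_read(store, vm_key, phys_addr):
    data_line, tag = store[phys_addr]
    if not hmac.compare_digest(tag, make_tag(vm_key, phys_addr, data_line)):
        raise ValueError("MAC validation failure: report error code")
    return data_line

store = {}
memory_write(store, b"vm0-key", 0x1000, b"A" * 64)
assert memory_read(store, b"vm0-key", 0x1000) == b"A" * 64
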
  • Patent number: 11397692
    Abstract: Techniques are described for providing low-overhead cryptographic memory isolation to mitigate attack vulnerabilities in a multi-user virtualized computing environment. Memory read and memory write operations for target data, each operation initiated via an instruction associated with a particular virtual machine (VM), include the generation and/or validation of a message authentication code that is based at least on a VM-specific cryptographic key and a physical memory address of the target data. Such operations may further include transmitting the generated message authentication code via a plurality of ancillary bits incorporated within a data line that includes the target data. In the event of a validation failure, one or more error codes may be generated and provided to distinct trust domain architecture entities based on an operating mode of the associated virtual machine.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: July 26, 2022
    Assignee: Intel Corporation
    Inventors: Siddhartha Chhabra, Rajat Agarwal, Baiju Patel, Kirk Yap
  • Publication number: 20220200623
    Abstract: Apparatus and method for efficient compression block decoding using content-addressable structure for header processing. For example, one embodiment of an apparatus comprises: a header parser to extract a sequence of tokens and corresponding length values from a header of a compression block, the tokens and corresponding length values associated with a type of compression used to compress a payload of the compression block; and a content-addressable data structure builder to construct a content-addressable data structure based on the tokens and length values, the content-addressable data structure builder to write an entry in the content-addressable data structure comprising a length value and a count value, the count value indicating a number of times the length value was previously written to an entry in the content-addressable data structure.
    Type: Application
    Filed: December 23, 2020
    Publication date: June 23, 2022
    Inventors: James Guilford, Vinodh Gopal, Daniel Cutter, Kirk Yap
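
A small sketch of the (length value, count value) entries the abstract describes, where the count records how many times that length was previously written. The dictionary standing in for a content-addressable structure and the DEFLATE-style sample code lengths are assumptions.

def build_entries(code_lengths):
    seen = {}       # stands in for content-addressable lookup keyed by length value
    entries = []
    for token, length in enumerate(code_lengths):
        count = seen.get(length, 0)   # times this length was previously written
        entries.append({"token": token, "length": length, "count": count})
        seen[length] = count + 1
    return entries

# Example: code lengths for 8 tokens parsed from a compression block header.
for entry in build_entries([3, 3, 3, 3, 2, 4, 4, 2]):
    print(entry)
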
  • Publication number: 20220075738
    Abstract: The disclosed embodiments generally relate to methods, systems, and apparatuses to authenticate instructions on memory circuitry. In an exemplary embodiment, the disclosure relates to a computing device (e.g., a memory protection engine) to protect the integrity of one or more memory circuits.
    Type: Application
    Filed: September 15, 2021
    Publication date: March 10, 2022
    Applicant: Intel Corporation
    Inventors: Santosh Ghosh, Kirk Yap, Siddhartha Chhabra
  • Patent number: 11243836
    Abstract: A processing device comprising compression circuitry to: determine a compression configuration to compress source data; generate a checksum of the source data in an uncompressed state; compress the source data into at least one block based on the compression configuration, wherein the at least one block comprises: a plurality of sub-blocks, wherein each of the plurality of sub-blocks has a predetermined size; a block header corresponding to the plurality of sub-blocks; and decompression circuitry coupled to the compression circuitry, wherein the decompression circuitry is to: while not outputting a decompressed data stream of the source data: generate index information corresponding to the plurality of sub-blocks; in response to generating the index information, generate a checksum of the compressed source data associated with the plurality of sub-blocks; and determine whether the checksum of the source data in the uncompressed state matches the checksum of the compressed source data.
    Type: Grant
    Filed: June 22, 2020
    Date of Patent: February 8, 2022
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, James Guilford, Daniel Cutter, Kirk Yap
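
A sketch of the compress-then-verify flow described above: a checksum of the uncompressed source is recorded at compression time, and a verification pass decompresses the block purely to recompute the checksum, discarding the data rather than emitting a decompressed stream. zlib and CRC-32 are stand-ins; the block record format is an assumption.

import zlib

def compress_with_checksum(source: bytes):
    # Checksum is taken over the source data in its uncompressed state.
    return {"block": zlib.compress(source), "crc": zlib.crc32(source)}

def verify_without_output(record) -> bool:
    crc = 0
    d = zlib.decompressobj()
    # Decompress in small pieces, folding each piece into the checksum and then
    # discarding it instead of assembling a decompressed output stream.
    for i in range(0, len(record["block"]), 256):
        crc = zlib.crc32(d.decompress(record["block"][i:i + 256]), crc)
    crc = zlib.crc32(d.flush(), crc)
    return crc == record["crc"]

rec = compress_with_checksum(b"source data " * 1000)
assert verify_without_output(rec)
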
  • Publication number: 20210351790
    Abstract: A lossless data compressor of an aspect includes a first lossless data compressor circuitry coupled to receive input data. The first lossless data compressor circuitry is to apply a first lossless data compression approach to compress the input data to generate intermediate compressed data. The apparatus also includes a second lossless data compressor circuitry coupled with the first lossless data compressor circuitry to receive the intermediate compressed data. The second lossless data compressor circuitry is to apply a second lossless data compression approach to compress at least some of the intermediate compressed data to generate compressed data. The second lossless data compression approach is different from the first lossless data compression approach. Lossless data decompressors are also disclosed, as are methods of lossless data compression and decompression.
    Type: Application
    Filed: May 11, 2020
    Publication date: November 11, 2021
    Inventors: James Guilford, Vinodh Gopal, Dan Cutter, Kirk Yap, Wajdi Feghali, George Powley
  • Patent number: 11169934
    Abstract: The disclosed embodiments generally relate to methods, systems, and apparatuses to authenticate instructions on memory circuitry. In an exemplary embodiment, the disclosure relates to a computing device (e.g., a memory protection engine) to protect the integrity of one or more memory circuits.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: November 9, 2021
    Assignee: Intel Corporation
    Inventors: Santosh Ghosh, Kirk Yap, Siddhartha Chhabra
  • Patent number: 11108406
    Abstract: In one embodiment, an apparatus includes: a compression circuit to compress data blocks of one or more traffic classes; and a control circuit coupled to the compression circuit, where the control circuit is to enable the compression circuit to concurrently compress data blocks of a first traffic class and not to compress data blocks of a second traffic class. Other embodiments are described and claimed.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: August 31, 2021
    Assignee: Intel Corporation
    Inventors: Simon N. Peffers, Vinodh Gopal, Kirk Yap
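
A minimal sketch of class-selective compression as described above: blocks of one traffic class are compressed while blocks of another class pass through untouched. The class names, the enable table, and the zlib stand-in are assumptions.

import zlib

COMPRESS_ENABLED = {"bulk": True, "latency_sensitive": False}

def handle_block(traffic_class: str, block: bytes) -> bytes:
    # Compress blocks of enabled classes; other classes pass through untouched.
    if COMPRESS_ENABLED.get(traffic_class, False):
        return zlib.compress(block)
    return block

print(len(handle_block("bulk", b"x" * 4096)))               # much smaller than 4096
print(len(handle_block("latency_sensitive", b"x" * 4096)))  # 4096, uncompressed
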
  • Patent number: 11095305
    Abstract: An apparatus and method for performing efficient lossless compression.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: August 17, 2021
    Assignee: Intel Corporation
    Inventors: James Guilford, Kirk Yap, Vinodh Gopal, Daniel Cutter, Wajdi Feghali
  • Patent number: 10924591
    Abstract: Methods and apparatus for low-latency link compression schemes. Under the schemes, packets or messages are dynamically selected for compression in view of current transmit queue levels. The latency incurred during compression and decompression is not added to the data-path, but sits on the side of the transmit queue. The system monitors the queue depth and, accordingly, initiates compression jobs based on the depth. Different compression levels may be dynamically selected and used based on queue depth. Under various schemes, either packets or messages are enqueued in the transmit queue or pointers to such packets and messages are enqueued. Additionally, packets/messages may be compressed prior to being enqueued, or after being enqueued, wherein an original uncompressed packet is replaced with a compressed packet. Compressed and uncompressed packets may be stored in queues or buffers and transmitted using different numbers of transmit cycles based on their compression ratios.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: February 16, 2021
    Assignee: Intel Corporation
    Inventors: Wajdi Feghali, Vinodh Gopal, Kirk Yap, Sean Gulley, Simon Peffers
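
A sketch of depth-driven compression as described above: the transmit queue depth is sampled when a packet is enqueued, and a compression level (or none at all) is chosen accordingly, so compression latency is only spent when the queue already implies waiting. The thresholds, levels, and zlib stand-in are assumptions.

import zlib
from collections import deque

def pick_level(queue_depth: int) -> int:
    if queue_depth < 4:
        return 0    # nearly empty queue: send uncompressed, add no latency
    if queue_depth < 16:
        return 1    # moderate backlog: fast, light compression
    return 6        # deep backlog: spend more cycles for a better ratio

def enqueue(tx_queue: deque, packet: bytes) -> None:
    level = pick_level(len(tx_queue))
    tx_queue.append(packet if level == 0 else zlib.compress(packet, level))

tx_queue = deque()
for _ in range(32):
    enqueue(tx_queue, b"payload " * 100)
print(len(tx_queue[0]), len(tx_queue[-1]))   # early entries uncompressed, later ones compressed
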
  • Publication number: 20200403779
    Abstract: An apparatus of an aspect includes an encryption unit to receive unencrypted data. The encryption unit is to encrypt the unencrypted data to generate encrypted data. The apparatus also includes circuitry coupled with the encryption unit. The circuitry is to generate a first checksum for a copy of the unencrypted data, generate a second checksum for a copy of the encrypted data, and combine the first and second checksums to generate a first value.
    Type: Application
    Filed: September 2, 2020
    Publication date: December 24, 2020
    Inventors: Vinodh Gopal, Kirk Yap
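
A sketch of the checksum-combining idea in the abstract: one checksum is computed over a copy of the unencrypted data, another over a copy of the encrypted data, and the two are folded into a single value. CRC-32, XOR combining, and the toy XOR "cipher" are assumptions purely for illustration.

import zlib

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR "cipher" used only so the example is self-contained.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_with_combined_checksum(data: bytes, key: bytes):
    ciphertext = toy_encrypt(data, key)
    first = zlib.crc32(data)            # checksum over a copy of the unencrypted data
    second = zlib.crc32(ciphertext)     # checksum over a copy of the encrypted data
    return ciphertext, first ^ second   # the two checksums combined into one value

ct, combined = encrypt_with_combined_checksum(b"confidential payload", b"k3y")
print(hex(combined))
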
  • Patent number: 10871983
    Abstract: Systems, methods, and circuitries are disclosed for a per-process memory encryption system. At least one translation lookaside buffer (TLB) is configured to encode key identifiers for keys in one or more bits of either the virtual memory address or the physical address. The process state memory is configured to store a first process key table for a first process that maps key identifiers to unique keys and a second process key table that maps the key identifiers to different unique keys. The active process key table memory is configured to store an active key table. In response to a request for data corresponding to a virtual memory address, the at least one TLB is configured to provide a key identifier for the data to the active process key table to cause the active process key table to return the unique key mapped to the key identifier.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: December 22, 2020
    Assignee: Intel Corporation
    Inventors: Wajdi Feghali, Vinodh Gopal, Kirk Yap, Sean Gulley, Raghunandan Makaram
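
An illustrative model of the key-identifier plumbing in the abstract: a few address bits carry a key identifier, each process has its own key table mapping the same identifiers to different keys, and the table for the currently running process is the one consulted. The bit positions, table contents, and keys are assumptions.

KEYID_SHIFT = 40   # assumed position of the key identifier within the address
KEYID_MASK = 0xF

process_key_tables = {
    "proc_a": {0: b"key-a0", 1: b"key-a1"},
    "proc_b": {0: b"key-b0", 1: b"key-b1"},   # same identifiers, different unique keys
}
active_key_table = process_key_tables["proc_a"]   # loaded when proc_a is scheduled

def key_for_address(encoded_addr: int) -> bytes:
    # The key identifier carried in upper address bits selects a key from the active table.
    key_id = (encoded_addr >> KEYID_SHIFT) & KEYID_MASK
    return active_key_table[key_id]

addr_with_keyid_1 = (1 << KEYID_SHIFT) | 0x7FFF2000
print(key_for_address(addr_with_keyid_1))   # b'key-a1'
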
  • Publication number: 20200319959
    Abstract: A processing device comprising compression circuitry to: determine a compression configuration to compress source data; generate a checksum of the source data in an uncompressed state; compress the source data into at least one block based on the compression configuration, wherein the at least one block comprises: a plurality of sub-blocks, wherein each of the plurality of sub-blocks has a predetermined size; a block header corresponding to the plurality of sub-blocks; and decompression circuitry coupled to the compression circuitry, wherein the decompression circuitry is to: while not outputting a decompressed data stream of the source data: generate index information corresponding to the plurality of sub-blocks; in response to generating the index information, generate a checksum of the compressed source data associated with the plurality of sub-blocks; and determine whether the checksum of the source data in the uncompressed state matches the checksum of the compressed source data.
    Type: Application
    Filed: June 22, 2020
    Publication date: October 8, 2020
    Inventors: Vinodh Gopal, James Guilford, Daniel Cutter, Kirk Yap
  • Patent number: 10691529
    Abstract: A processing device comprising compression circuitry to: determine a compression configuration to compress source data; generate a checksum of the source data in an uncompressed state; compress the source data into at least one block based on the compression configuration, wherein the at least one block comprises: a plurality of sub-blocks, wherein each of the plurality of sub-blocks has a predetermined size; a block header corresponding to the plurality of sub-blocks; and decompression circuitry coupled to the compression circuitry, wherein the decompression circuitry is to: while not outputting a decompressed data stream of the source data: generate index information corresponding to the plurality of sub-blocks; in response to generating the index information, generate a checksum of the compressed source data associated with the plurality of sub-blocks; and determine whether the checksum of the source data in the uncompressed state matches the checksum of the compressed source data.
    Type: Grant
    Filed: June 20, 2018
    Date of Patent: June 23, 2020
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, James Guilford, Daniel Cutter, Kirk Yap
  • Publication number: 20200007329
    Abstract: Disclosed embodiments relate to encrypting or decrypting confidential data with additional authentication data by an accelerator and a processor. In one example, a processor includes processor circuitry to compute a first hash of a first block of data stored in a memory, store the first hash in the memory, and generate an authentication tag based in part on a second hash. The processor further includes accelerator circuitry to obtain the first hash from the memory, decrypt a second block of data using the first hash, and compute the second hash based in part on the first hash and the second block of data.
    Type: Application
    Filed: June 28, 2018
    Publication date: January 2, 2020
    Inventors: James Guilford, Vinodh Gopal, Kirk Yap
  • Publication number: 20200004535
    Abstract: An apparatus and method for loading and storing multiple sets of packed data elements.
    Type: Application
    Filed: June 30, 2018
    Publication date: January 2, 2020
    Inventors: Kirk Yap, James Guilford, Daniel Cutter, Vinodh Gopal, Daniil Sokolov