Patents by Inventor Dzung Q. Vu

Dzung Q. Vu has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11954492
    Abstract: Techniques are disclosed relating to channel stalls or deactivations based on the latency of prior operations. In some embodiments, a processor includes a plurality of channel pipelines for a plurality of channels and a plurality of execution pipelines shared by the channel pipelines and configured to perform different types of operations provided by the channel pipelines. First scheduler circuitry may assign threads to channels and second scheduler circuitry may assign an operation from a given channel to a given execution pipeline based on decode of an operation for that channel. Dependency circuitry may, for a first operation that depends on a prior operation that uses one of the execution pipelines, determine, based on status information for the prior operation from the one of the execution pipelines, whether to stall the first operation or to deactivate a thread that includes the first operation from its assigned channel.
    Type: Grant
    Filed: November 10, 2022
    Date of Patent: April 9, 2024
    Assignee: Apple Inc.
    Inventors: Benjiman L. Goodman, Dzung Q. Vu, Robert Kenney
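    The abstract above describes dependency circuitry that chooses between stalling a dependent operation and deactivating its thread, based on status information for the prior operation in a shared execution pipeline. The following is a minimal C sketch of that decision; the structure fields, cycle counter, and threshold are illustrative assumptions, not details taken from the patent.

    ```c
    #include <stdbool.h>

    enum dep_action { DEP_STALL, DEP_DEACTIVATE };

    /* Status reported by an execution pipeline for the prior operation
     * (hypothetical fields, for illustration only). */
    struct exec_status {
        bool result_ready;      /* the prior operation has completed */
        int  cycles_remaining;  /* estimated cycles until its result is available */
    };

    /* Decide whether to stall the dependent operation in its channel or to
     * deactivate its thread so the channel can be given to other work. */
    enum dep_action resolve_dependency(struct exec_status prior, int stall_threshold)
    {
        if (prior.result_ready || prior.cycles_remaining <= stall_threshold)
            return DEP_STALL;       /* short wait: hold the channel */
        return DEP_DEACTIVATE;      /* long wait: free the channel */
    }
    ```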
  • Publication number: 20240095035
    Abstract: Techniques are disclosed relating to channel stalls or deactivations based on the latency of prior operations. In some embodiments, a processor includes a plurality of channel pipelines for a plurality of channels and a plurality of execution pipelines shared by the channel pipelines and configured to perform different types of operations provided by the channel pipelines. First scheduler circuitry may assign threads to channels and second scheduler circuitry may assign an operation from a given channel to a given execution pipeline based on decode of an operation for that channel. Dependency circuitry may, for a first operation that depends on a prior operation that uses one of the execution pipelines, determine, based on status information for the prior operation from the one of the execution pipelines, whether to stall the first operation or to deactivate a thread that includes the first operation from its assigned channel.
    Type: Application
    Filed: November 10, 2022
    Publication date: March 21, 2024
    Inventors: Benjiman L. Goodman, Dzung Q. Vu, Robert Kenney
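    The same abstract also describes two levels of scheduling: first scheduler circuitry assigns threads to channels, and second scheduler circuitry routes a channel's decoded operation to one of the shared execution pipelines. The sketch below models that split under an assumed channel count and assumed operation classes; none of the names come from the patent.

    ```c
    enum op_type { OP_ALU, OP_MEMORY, OP_SPECIAL };  /* assumed operation classes */

    struct channel {
        int          active;      /* a thread currently occupies this channel */
        int          thread_id;
        enum op_type decoded_op;  /* class of the channel's next decoded operation */
    };

    /* First-level scheduling: place a thread on any free channel.
     * Returns the channel index, or -1 if every channel is occupied. */
    int assign_thread(struct channel channels[], int num_channels, int thread_id)
    {
        for (int c = 0; c < num_channels; c++) {
            if (!channels[c].active) {
                channels[c].active = 1;
                channels[c].thread_id = thread_id;
                return c;
            }
        }
        return -1;
    }

    /* Second-level scheduling: based on the decoded operation, pick the shared
     * execution pipeline that handles that class of operation. */
    int select_execution_pipeline(const struct channel *ch)
    {
        switch (ch->decoded_op) {
        case OP_ALU:     return 0;  /* shared ALU pipeline */
        case OP_MEMORY:  return 1;  /* shared load/store pipeline */
        case OP_SPECIAL: return 2;  /* shared special-function pipeline */
        }
        return -1;
    }
    ```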
  • Patent number: 11727530
    Abstract: Techniques are disclosed relating to low-level instruction storage in a processing unit. In some embodiments, a graphics unit includes execution circuitry, decode circuitry, hazard circuitry, and caching circuitry. In some embodiments, the execution circuitry is configured to execute clauses of graphics instructions. In some embodiments, the decode circuitry is configured to receive graphics instructions and a clause identifier for each received graphics instruction and to decode the received graphics instructions. In some embodiments, the caching circuitry includes a plurality of entries each configured to store a set of decoded instructions in the same clause. A given clause may be fetched and executed multiple times, e.g., for different SIMD groups, while stored in the caching circuitry.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: August 15, 2023
    Assignee: Apple Inc.
    Inventors: Andrew M. Havlir, Dzung Q. Vu, Liang Kai Wang
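    The clause caching described in the entry above can be pictured as a small cache whose entries each hold the decoded instructions of one clause, indexed by clause identifier. The C sketch below is a rough model; the entry count, clause length, and field names are assumptions rather than details from the patent.

    ```c
    #include <stddef.h>
    #include <stdint.h>

    #define MAX_CLAUSE_LEN    8  /* assumed maximum decoded instructions per clause */
    #define NUM_CACHE_ENTRIES 4  /* assumed number of caching-circuitry entries */

    struct decoded_instr { uint64_t bits; };  /* placeholder decoded encoding */

    /* One caching-circuitry entry: the decoded instructions of a single clause. */
    struct clause_entry {
        int      valid;
        uint32_t clause_id;
        int      count;
        struct decoded_instr instrs[MAX_CLAUSE_LEN];
    };

    /* Look up a clause by identifier. On a hit, the already-decoded instructions
     * can be issued again without fetching or re-decoding the clause. */
    struct clause_entry *lookup_clause(struct clause_entry cache[NUM_CACHE_ENTRIES],
                                       uint32_t clause_id)
    {
        for (int i = 0; i < NUM_CACHE_ENTRIES; i++)
            if (cache[i].valid && cache[i].clause_id == clause_id)
                return &cache[i];
        return NULL;  /* miss: the clause must be fetched, decoded, and cached */
    }
    ```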
  • Publication number: 20210358078
    Abstract: Techniques are disclosed relating to low-level instruction storage in a processing unit. In some embodiments, a graphics unit includes execution circuitry, decode circuitry, hazard circuitry, and caching circuitry. In some embodiments, the execution circuitry is configured to execute clauses of graphics instructions. In some embodiments, the decode circuitry is configured to receive graphics instructions and a clause identifier for each received graphics instruction and to decode the received graphics instructions. In some embodiments, the caching circuitry includes a plurality of entries each configured to store a set of decoded instructions in the same clause. A given clause may be fetched and executed multiple times, e.g., for different SIMD groups, while stored in the caching circuitry.
    Type: Application
    Filed: May 28, 2021
    Publication date: November 18, 2021
    Inventors: Andrew M. Havlir, Dzung Q. Vu, Liang Kai Wang
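    One point the abstract above emphasizes is that a cached clause may be fetched once and then executed multiple times, for example for different SIMD groups. The sketch below illustrates that replay pattern; the issue() hook and the data layout are hypothetical.

    ```c
    #include <stdint.h>

    #define MAX_CLAUSE_LEN 8  /* assumed maximum decoded instructions per clause */

    struct decoded_instr { uint64_t bits; };  /* placeholder decoded encoding */

    struct clause_entry {
        uint32_t clause_id;
        int      count;
        struct decoded_instr instrs[MAX_CLAUSE_LEN];
    };

    /* Hypothetical hook: issue one decoded instruction for one SIMD group. */
    void issue(const struct decoded_instr *instr, int simd_group)
    {
        (void)instr;
        (void)simd_group;
    }

    /* The clause is decoded and stored once, then replayed from the cache entry
     * for each SIMD group that needs to execute it. */
    void run_clause_for_groups(const struct clause_entry *clause,
                               const int *simd_groups, int num_groups)
    {
        for (int g = 0; g < num_groups; g++)
            for (int i = 0; i < clause->count; i++)
                issue(&clause->instrs[i], simd_groups[g]);
    }
    ```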
  • Patent number: 11023997
    Abstract: Techniques are disclosed relating to low-level instruction storage in a processing unit. In some embodiments, a graphics unit includes execution circuitry, decode circuitry, hazard circuitry, and caching circuitry. In some embodiments, the execution circuitry is configured to execute clauses of graphics instructions. In some embodiments, the decode circuitry is configured to receive graphics instructions and a clause identifier for each received graphics instruction and to decode the received graphics instructions. In some embodiments, the caching circuitry includes a plurality of entries each configured to store a set of decoded instructions in the same clause. A given clause may be fetched and executed multiple times, e.g., for different SIMD groups, while stored in the caching circuitry.
    Type: Grant
    Filed: July 24, 2017
    Date of Patent: June 1, 2021
    Assignee: Apple Inc.
    Inventors: Andrew M. Havlir, Dzung Q. Vu, Liang Kai Wang
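    The entry above also notes that the decode circuitry receives each graphics instruction together with a clause identifier. A rough way to picture the fill path is sketched below; decode() is a stand-in and the clause layout is assumed, not taken from the patent.

    ```c
    #include <stdint.h>

    #define MAX_CLAUSE_LEN 8  /* assumed maximum decoded instructions per clause */

    struct decoded_instr { uint64_t bits; };  /* placeholder decoded encoding */

    struct clause_entry {
        uint32_t clause_id;
        int      count;
        struct decoded_instr instrs[MAX_CLAUSE_LEN];
    };

    /* Stand-in decode step: expand a raw instruction word into its decoded form. */
    struct decoded_instr decode(uint32_t raw_instr)
    {
        struct decoded_instr d = { (uint64_t)raw_instr };
        return d;
    }

    /* Decoded instructions accumulate in the cache entry that matches the
     * instruction's clause identifier. Returns 0 on success, -1 on a clause
     * mismatch or when the entry is already full. */
    int decode_into_clause(struct clause_entry *entry, uint32_t clause_id,
                           uint32_t raw_instr)
    {
        if (entry->count == 0)
            entry->clause_id = clause_id;
        if (entry->clause_id != clause_id || entry->count >= MAX_CLAUSE_LEN)
            return -1;
        entry->instrs[entry->count++] = decode(raw_instr);
        return 0;
    }
    ```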
  • Publication number: 20170323420
    Abstract: Techniques are disclosed relating to low-level instruction storage in a processing unit. In some embodiments, a graphics unit includes execution circuitry, decode circuitry, hazard circuitry, and caching circuitry. In some embodiments, the execution circuitry is configured to execute clauses of graphics instructions. In some embodiments, the decode circuitry is configured to receive graphics instructions and a clause identifier for each received graphics instruction and to decode the received graphics instructions. In some embodiments, the caching circuitry includes a plurality of entries each configured to store a set of decoded instructions in the same clause. A given clause may be fetched and executed multiple times, e.g., for different SIMD groups, while stored in the caching circuitry.
    Type: Application
    Filed: July 24, 2017
    Publication date: November 9, 2017
    Inventors: Andrew M. Havlir, Dzung Q. Vu, Liang Kai Wang
  • Patent number: 9727944
    Abstract: Techniques are disclosed relating to low-level instruction storage in a graphics unit. In some embodiments, a graphics unit includes execution circuitry, decode circuitry, hazard circuitry, and caching circuitry. In some embodiments, the execution circuitry is configured to execute clauses of graphics instructions. In some embodiments, the decode circuitry is configured to receive graphics instructions and a clause identifier for each received graphics instruction and to decode the received graphics instructions. In some embodiments, the hazard circuitry is configured to generate hazard information that specifies dependencies between ones of the decoded graphics instructions in the same clause. In some embodiments, the caching circuitry includes a plurality of entries each configured to store a set of decoded instructions in the same clause and hazard information generated by the decode circuitry for the clause.
    Type: Grant
    Filed: June 22, 2015
    Date of Patent: August 8, 2017
    Assignee: Apple Inc.
    Inventors: Andrew M. Havlir, Dzung Q. Vu, Liang Kai Wang
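    This earlier patent adds hazard circuitry: each cache entry stores not only the decoded instructions of a clause but also hazard information describing dependencies between instructions within that clause. One compact way to model such information is a per-instruction dependency bitmask, as in the sketch below; the bitmask encoding is an assumption, not the patent's representation.

    ```c
    #include <stdint.h>

    #define MAX_CLAUSE_LEN 8  /* assumed maximum decoded instructions per clause */

    struct decoded_instr { uint64_t bits; };  /* placeholder decoded encoding */

    /* One caching-circuitry entry: decoded instructions plus hazard information.
     * Bit j of hazards[i] set means instruction i depends on instruction j in
     * the same clause. */
    struct clause_entry {
        uint32_t clause_id;
        int      count;
        struct decoded_instr instrs[MAX_CLAUSE_LEN];
        uint8_t  hazards[MAX_CLAUSE_LEN];
    };

    /* Instruction i may issue once every instruction it depends on has
     * completed (completed has one bit per instruction in the clause). */
    int can_issue(const struct clause_entry *clause, int i, uint8_t completed)
    {
        return (clause->hazards[i] & ~completed) == 0;
    }
    ```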
  • Publication number: 20160371810
    Abstract: Techniques are disclosed relating to low-level instruction storage in a graphics unit. In some embodiments, a graphics unit includes execution circuitry, decode circuitry, hazard circuitry, and caching circuitry. In some embodiments, the execution circuitry is configured to execute clauses of graphics instructions. In some embodiments, the decode circuitry is configured to receive graphics instructions and a clause identifier for each received graphics instruction and to decode the received graphics instructions. In some embodiments, the hazard circuitry is configured to generate hazard information that specifies dependencies between ones of the decoded graphics instructions in the same clause. In some embodiments, the caching circuitry includes a plurality of entries each configured to store a set of decoded instructions in the same clause and hazard information generated by the decode circuitry for the clause.
    Type: Application
    Filed: June 22, 2015
    Publication date: December 22, 2016
    Inventors: Andrew M. Havlir, Dzung Q. Vu, Liang Kai Wang
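    The same abstract says the hazard information is generated as the clause is decoded. One plausible, but assumed, way to do that is to compare each instruction's source registers against the destinations of earlier instructions in the clause, as sketched below; the operand fields are hypothetical, since the abstract does not specify an encoding.

    ```c
    #include <stdint.h>

    /* Assumed decoded form exposing the registers an instruction reads and
     * writes; the real encoding is not described in the abstract. */
    struct decoded_instr {
        uint8_t dst;     /* destination register */
        uint8_t src[2];  /* source registers */
    };

    /* Generate the hazard bits for instruction i of a clause: mark a dependency
     * on any earlier instruction whose destination feeds one of instruction i's
     * sources (a read-after-write hazard). */
    uint8_t gen_hazards(const struct decoded_instr *clause, int i)
    {
        uint8_t deps = 0;
        for (int j = 0; j < i; j++)
            if (clause[j].dst == clause[i].src[0] || clause[j].dst == clause[i].src[1])
                deps |= (uint8_t)(1u << j);
        return deps;
    }
    ```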
  • Patent number: 6628291
    Abstract: A frame buffer system includes a first frame buffer containing a first set of pixels, and a second frame buffer containing a second set of pixels. A first register is connected to an output of the first frame buffer and stores a number of pixels, with a group of bytes of data stored for each of the number of pixels. A second register is connected to an output of the second frame buffer and likewise stores a number of pixels, with a group of bytes of data stored for each of the number of pixels. Selection logic is connected to the first frame buffer and to the second frame buffer and selects the pixels to be read from the first frame buffer and the second frame buffer into the first register and the second register. A multiplexer has a first input connected to an output of the first register, a second input connected to an output of the second register, and an output configured for connection to a digital-to-analog converter.
    Type: Grant
    Filed: September 2, 1999
    Date of Patent: September 30, 2003
    Assignee: International Business Machines Corporation
    Inventors: Jimmie Darius Edrington, Charles Ray Johns, John Alvin Voltin, Dzung Q. Vu
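    The read path described in this last entry can be modeled as two registers, each loaded from its frame buffer by the selection logic, with a multiplexer choosing which register's output continues on to the digital-to-analog converter. The C sketch below is only a behavioral illustration; the pixel group size and the bytes per pixel are assumptions.

    ```c
    #include <stdint.h>

    #define PIXELS_PER_REG  4  /* assumed number of pixels held in each register */
    #define BYTES_PER_PIXEL 4  /* assumed bytes of data stored per pixel */

    typedef uint8_t pixel_t[BYTES_PER_PIXEL];

    /* Register at a frame buffer's output: a group of pixels, each with a
     * group of bytes of data. */
    struct pixel_reg { pixel_t pixels[PIXELS_PER_REG]; };

    /* Selection logic: read the selected pixels from a frame buffer into the
     * register connected to that buffer's output. */
    void load_register(struct pixel_reg *reg, const pixel_t *frame_buffer, int start)
    {
        for (int i = 0; i < PIXELS_PER_REG; i++)
            for (int b = 0; b < BYTES_PER_PIXEL; b++)
                reg->pixels[i][b] = frame_buffer[start + i][b];
    }

    /* Multiplexer: choose which register's output is passed on toward the
     * digital-to-analog converter. */
    const struct pixel_reg *mux_to_dac(const struct pixel_reg *first,
                                       const struct pixel_reg *second,
                                       int select_second)
    {
        return select_second ? second : first;
    }
    ```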