Patents by Inventor Jimshed B. Mirza
Jimshed B. Mirza has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11935153
Abstract: Data processing methods and devices are provided. A processing device comprises memory and a processor. The memory, which comprises a cache, is configured to store portions of data. The processor is configured to issue a store instruction to store one of the portions of data, provide identifying information associated with the one portion of data, compress the one portion of data, and store the compressed one portion of data across multiple lines of the cache using the identifying information. In an example, the one portion of data is a block of pixels, and the processor is configured to request pixel data for a pixel of a compressed block of pixels, send additional requests for data for other pixels determined to belong to the compressed pixel block, and provide an indication that the requests are for pixel data belonging to the compressed block of pixels.
Type: Grant
Filed: December 28, 2020
Date of Patent: March 19, 2024
Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
Inventors: Sergey Korobkov, Jimshed B. Mirza, Anthony Hung-Cheong Chan
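As a rough illustration of the idea (not taken from the patent text; the class, line size, and use of `zlib` are all hypothetical), a compressed block can be spread across several fixed-size cache lines, with identifying information letting a later read gather every line of the block:

```python
# Sketch: store a compressed data block across multiple fixed-size cache
# lines, keyed by identifying information (here, a simple block id).
import zlib

LINE_SIZE = 64  # bytes per cache line (assumed)

class MultiLineCache:
    def __init__(self):
        self.lines = {}  # (block_id, line_index) -> bytes

    def store_compressed(self, block_id, data):
        compressed = zlib.compress(data)
        # Split the compressed payload across as many lines as needed.
        for i in range(0, len(compressed), LINE_SIZE):
            self.lines[(block_id, i // LINE_SIZE)] = compressed[i:i + LINE_SIZE]

    def load(self, block_id):
        # Gather every line tagged with this block id, then decompress.
        chunks = []
        i = 0
        while (block_id, i) in self.lines:
            chunks.append(self.lines[(block_id, i)])
            i += 1
        return zlib.decompress(b"".join(chunks))

cache = MultiLineCache()
block = bytes(range(256)) * 4  # a 1 KiB stand-in for a pixel block
cache.store_compressed(7, block)
assert cache.load(7) == block
```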
-
Publication number: 20220207644
Abstract: Data processing methods and devices are provided. A processing device comprises memory and a processor. The memory, which comprises a cache, is configured to store portions of data. The processor is configured to issue a store instruction to store one of the portions of data, provide identifying information associated with the one portion of data, compress the one portion of data, and store the compressed one portion of data across multiple lines of the cache using the identifying information. In an example, the one portion of data is a block of pixels, and the processor is configured to request pixel data for a pixel of a compressed block of pixels, send additional requests for data for other pixels determined to belong to the compressed pixel block, and provide an indication that the requests are for pixel data belonging to the compressed block of pixels.
Type: Application
Filed: December 28, 2020
Publication date: June 30, 2022
Applicants: ATI Technologies ULC, Advanced Micro Devices, Inc.
Inventors: Sergey Korobkov, Jimshed B. Mirza, Anthony Hung-Cheong Chan
-
Patent number: 10956338
Abstract: A technique for improving performance of a cache is provided. The technique involves maintaining indicators of whether cache entries are dirty in a random access memory (“RAM”) that has a lower latency to a cache controller than the cache memory that stores the cache entries. When a request to invalidate one or more cache entries is received by the cache controller, the cache controller checks the RAM to determine whether any cache entries are dirty and thus should be written out to a backing store. Using the RAM removes the need to check the actual cache memory for whether cache entries are dirty, which reduces the latency associated with performing such checks and thus with performing cache invalidations.
Type: Grant
Filed: November 19, 2018
Date of Patent: March 23, 2021
Assignee: ATI Technologies ULC
Inventors: Leon King Nok Lai, Qian Ma, Jimshed B. Mirza
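A minimal behavioral sketch of the dirty-indicator idea (hypothetical names; the patent describes hardware, modeled here in software): dirty bits live in a small fast side structure, so an invalidate consults only that structure and touches the slower cache array just for write-backs:

```python
# Sketch: per-entry dirty bits kept in a fast side "RAM" so invalidation
# can find dirty entries without scanning the slower cache memory itself.
class Cache:
    def __init__(self, num_entries):
        self.entries = {}                    # slow cache memory: index -> data
        self.dirty = [False] * num_entries   # fast RAM of dirty indicators
        self.backing_store = {}

    def write(self, index, data):
        self.entries[index] = data
        self.dirty[index] = True

    def invalidate(self, indices):
        # Consult only the fast dirty RAM; write back just the dirty entries.
        for i in indices:
            if self.dirty[i]:
                self.backing_store[i] = self.entries[i]
                self.dirty[i] = False
            self.entries.pop(i, None)

c = Cache(8)
c.write(3, "x")
c.invalidate([2, 3])
assert c.backing_store == {3: "x"}
```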
-
Patent number: 10915359
Abstract: A technique for scheduling processing tasks having different latencies is provided. The technique involves identifying one or more available requests in a request queue, where each request queue corresponds to a different latency. A request arbiter examines a shift register to determine whether there is an available slot for the one or more requests. A slot is available for a request if there is a slot that is a number of slots from the end of the shift register equal to the number of cycles the request takes to complete processing in a corresponding processing pipeline. If a slot is available, the request is scheduled for execution and the slot is marked as being occupied. If a slot is not available, the request is not scheduled for execution on the current cycle. On transitioning to a new cycle, the shift register is shifted towards its end and the technique repeats.
Type: Grant
Filed: November 19, 2018
Date of Patent: February 9, 2021
Assignee: ATI Technologies ULC
Inventors: Jimshed B. Mirza, Qian Ma, Leon King Nok Lai
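The shift-register arbitration can be sketched as follows (a software model with hypothetical names, not the patented circuit): a request of latency N is admitted only if the slot N positions from the end of the register is free, which guarantees at most one completion per cycle across pipelines of different latencies:

```python
# Sketch: admit a request only if the result slot `latency` positions
# from the end of a shift register is free; shift once per cycle.
from collections import deque

class SlotArbiter:
    def __init__(self, depth):
        self.slots = deque([None] * depth)  # index 0 completes on the next tick

    def try_schedule(self, request, latency):
        idx = latency - 1  # slot that will reach the end in `latency` cycles
        if self.slots[idx] is None:
            self.slots[idx] = request
            return True
        return False  # slot occupied: stall this cycle, retry after the shift

    def tick(self):
        # Advance one cycle: the request at the end of the register completes.
        done = self.slots.popleft()
        self.slots.append(None)
        return done

arb = SlotArbiter(4)
assert arb.try_schedule("long", 3)   # occupies slot 2
assert arb.try_schedule("short", 1)  # occupies slot 0
assert not arb.try_schedule("x", 1)  # slot 0 taken: not scheduled this cycle
assert arb.tick() == "short"         # latency-1 request completes first
```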
-
Publication number: 20200167076
Abstract: A technique for improving performance of a data compression system is provided. The technique is applicable to compressed data sets that include compression blocks. Each compression block may be either compressed or uncompressed. Metadata indicating whether compression blocks are actually compressed or not is stored. If compression blocks are not compressed, then a read-decompress-modify-compress-write pipeline is bypassed. Instead, a compression unit writes the data specified by the partial request into the compression block, without reading, decompressing, modifying, recompressing, and writing the data, resulting in a much faster operation.
Type: Application
Filed: November 26, 2018
Publication date: May 28, 2020
Applicant: ATI Technologies ULC
Inventors: Leon King Nok Lai, Qian Ma, Jimshed B. Mirza
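A simplified model of the bypass (hypothetical class and use of `zlib`; the patent does not name a compression scheme): a per-block metadata flag selects between the slow read-decompress-modify-recompress-write path and a direct in-place partial write:

```python
# Sketch: metadata records whether each compression block is actually
# compressed; partial writes to uncompressed blocks skip decompression.
import zlib

class CompressedStore:
    def __init__(self):
        self.blocks = {}         # block_id -> stored bytes
        self.is_compressed = {}  # metadata: block_id -> bool

    def write_block(self, block_id, data, compress):
        self.blocks[block_id] = zlib.compress(data) if compress else data
        self.is_compressed[block_id] = compress

    def partial_write(self, block_id, offset, data):
        if self.is_compressed[block_id]:
            # Slow path: read-decompress-modify-recompress-write.
            plain = bytearray(zlib.decompress(self.blocks[block_id]))
            plain[offset:offset + len(data)] = data
            self.blocks[block_id] = zlib.compress(bytes(plain))
        else:
            # Fast path: write the partial data in place, no decompression.
            plain = bytearray(self.blocks[block_id])
            plain[offset:offset + len(data)] = data
            self.blocks[block_id] = bytes(plain)

store = CompressedStore()
store.write_block(0, b"\x00" * 16, compress=False)
store.partial_write(0, 4, b"abcd")  # takes the fast, bypassed path
```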
-
Publication number: 20200167287
Abstract: A technique for prefetching data for a cache is provided. The technique includes detecting access to a data block. In response to the detection, a prefetch block generates proposed blocks for prefetch. The prefetch block also examines prefetch tracking data to determine whether a prefetch group including the proposed blocks is marked as already having been prefetched. If the group has been marked as already having been prefetched, then the prefetch block does not prefetch that data, thereby avoiding traffic between the prefetch block and the cache memory. Using this technique, unnecessary requests to prefetch data into the cache memory are avoided.
Type: Application
Filed: November 26, 2018
Publication date: May 28, 2020
Applicant: ATI Technologies ULC
Inventors: Leon King Nok Lai, Qian Ma, Jimshed B. Mirza
-
Patent number: 10664403
Abstract: A technique for prefetching data for a cache is provided. The technique includes detecting access to a data block. In response to the detection, a prefetch block generates proposed blocks for prefetch. The prefetch block also examines prefetch tracking data to determine whether a prefetch group including the proposed blocks is marked as already having been prefetched. If the group has been marked as already having been prefetched, then the prefetch block does not prefetch that data, thereby avoiding traffic between the prefetch block and the cache memory. Using this technique, unnecessary requests to prefetch data into the cache memory are avoided.
Type: Grant
Filed: November 26, 2018
Date of Patent: May 26, 2020
Assignee: ATI Technologies ULC
Inventors: Leon King Nok Lai, Qian Ma, Jimshed B. Mirza
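The group-level filtering can be sketched like this (hypothetical names and group size; a software stand-in for the prefetch block): tracking data marks whole prefetch groups, and a proposed prefetch whose group is already marked sends no request to the cache:

```python
# Sketch: group-level prefetch tracking. An access proposes blocks for
# prefetch; if their group is already marked prefetched, no request is
# sent toward the cache, avoiding redundant prefetch traffic.
GROUP_SIZE = 8  # blocks per prefetch group (assumed)

class Prefetcher:
    def __init__(self):
        self.prefetched_groups = set()  # prefetch tracking data
        self.requests_sent = 0

    def on_access(self, block):
        group = block // GROUP_SIZE
        if group in self.prefetched_groups:
            return  # group already prefetched: skip, saving cache traffic
        self.prefetched_groups.add(group)
        # Propose and issue a prefetch for every block in the group.
        self.requests_sent += GROUP_SIZE

p = Prefetcher()
p.on_access(3)  # first access in group 0: prefetch the whole group
p.on_access(5)  # same group: filtered out by the tracking data
assert p.requests_sent == 8
```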
-
Publication number: 20200159664
Abstract: A technique for improving performance of a cache is provided. The technique involves maintaining indicators of whether cache entries are dirty in a random access memory (“RAM”) that has a lower latency to a cache controller than the cache memory that stores the cache entries. When a request to invalidate one or more cache entries is received by the cache controller, the cache controller checks the RAM to determine whether any cache entries are dirty and thus should be written out to a backing store. Using the RAM removes the need to check the actual cache memory for whether cache entries are dirty, which reduces the latency associated with performing such checks and thus with performing cache invalidations.
Type: Application
Filed: November 19, 2018
Publication date: May 21, 2020
Applicant: ATI Technologies ULC
Inventors: Leon King Nok Lai, Qian Ma, Jimshed B. Mirza
-
Publication number: 20200159581
Abstract: A technique for scheduling processing tasks having different latencies is provided. The technique involves identifying one or more available requests in a request queue, where each request queue corresponds to a different latency. A request arbiter examines a shift register to determine whether there is an available slot for the one or more requests. A slot is available for a request if there is a slot that is a number of slots from the end of the shift register equal to the number of cycles the request takes to complete processing in a corresponding processing pipeline. If a slot is available, the request is scheduled for execution and the slot is marked as being occupied. If a slot is not available, the request is not scheduled for execution on the current cycle. On transitioning to a new cycle, the shift register is shifted towards its end and the technique repeats.
Type: Application
Filed: November 19, 2018
Publication date: May 21, 2020
Applicant: ATI Technologies ULC
Inventors: Jimshed B. Mirza, Qian Ma, Leon King Nok Lai
-
Patent number: 8495300
Abstract: A method and apparatus for repopulating a cache are disclosed. At least a portion of the contents of the cache are stored in a location separate from the cache. Power is removed from the cache and is restored some time later. After power has been restored to the cache, it is repopulated with the portion of the contents of the cache that were stored separately from the cache.
Type: Grant
Filed: March 3, 2010
Date of Patent: July 23, 2013
Assignee: ATI Technologies ULC
Inventors: Philip Ng, Jimshed B. Mirza, Anthony Asaro
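The save-and-repopulate sequence can be modeled in a few lines (hypothetical names; the patent covers hardware, sketched here as a software state machine): contents are copied to a separate save area before power-down, then restored after power-up:

```python
# Sketch: save part of the cache contents elsewhere, power the cache off,
# then repopulate it from the saved copy after power returns.
class PowerGatedCache:
    def __init__(self):
        self.lines = {}
        self.powered = True

    def power_down(self, save_area, keep=lambda k: True):
        # Save (a subset of) the contents, then lose them with power.
        save_area.update({k: v for k, v in self.lines.items() if keep(k)})
        self.lines = {}
        self.powered = False

    def power_up(self, save_area):
        self.powered = True
        self.lines.update(save_area)  # repopulate from the saved copy

saved = {}
cache = PowerGatedCache()
cache.lines = {0: "a", 1: "b"}
cache.power_down(saved)
cache.power_up(saved)
assert cache.lines == {0: "a", 1: "b"}
```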
-
Publication number: 20130138897
Abstract: A method and apparatus are described for controlling depth and power consumption of a first-in first-out (FIFO) memory including a data storage, a FIFO top register, a FIFO bottom register and control logic. The data storage may be segmented into a plurality of data storage segments. The FIFO top register may be configured to generate a first value indicating where a first entry in the data storage is stored. The FIFO bottom register may be configured to generate a second value indicating where a last entry in the data storage is stored. The control logic may be configured to determine which of the data storage segments to activate or deactivate based at least in part on the first and second values, and to monitor an available capacity and a write/read rate of the FIFO memory as data is read from and written to the activated data storage segments.
Type: Application
Filed: November 29, 2011
Publication date: May 30, 2013
Applicant: ATI TECHNOLOGIES ULC
Inventor: Jimshed B. Mirza
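A toy model of the segmented FIFO (hypothetical names and segment size): the top and bottom registers track the oldest and newest entries, and only the segments between them hold live data, so the rest could be deactivated:

```python
# Sketch: a FIFO whose storage is split into segments; only the segments
# between the top (oldest) and bottom (newest) pointers need to be active.
SEGMENT_SIZE = 4  # entries per storage segment (assumed)

class SegmentedFifo:
    def __init__(self, num_segments):
        self.data = [None] * (num_segments * SEGMENT_SIZE)
        self.top = 0     # FIFO top register: where the first entry is stored
        self.bottom = 0  # FIFO bottom register: one past the last entry

    def push(self, item):
        self.data[self.bottom % len(self.data)] = item
        self.bottom += 1

    def pop(self):
        item = self.data[self.top % len(self.data)]
        self.top += 1
        return item

    def active_segments(self):
        # Segments holding live entries; the others can be power-gated.
        return {(i % len(self.data)) // SEGMENT_SIZE
                for i in range(self.top, self.bottom)}

f = SegmentedFifo(num_segments=4)
for x in range(6):
    f.push(x)
assert f.pop() == 0                    # FIFO order
assert f.active_segments() == {0, 1}   # segments 2 and 3 could be off
```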
-
Publication number: 20110219190
Abstract: A method and apparatus for repopulating a cache are disclosed. At least a portion of the contents of the cache are stored in a location separate from the cache. Power is removed from the cache and is restored some time later. After power has been restored to the cache, it is repopulated with the portion of the contents of the cache that were stored separately from the cache.
Type: Application
Filed: March 3, 2010
Publication date: September 8, 2011
Applicant: ATI Technologies ULC
Inventors: Philip Ng, Jimshed B. Mirza, Anthony Asaro