Patents by Inventor Albert Ma
Albert Ma has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11960727
Abstract: A system and corresponding method perform large memory transaction (LMT) stores. The system comprises a processor associated with a data-processing width and a processor accelerator. The processor accelerator performs an LMT store of a data set to a coprocessor in response to an instruction from the processor targeting the coprocessor. The data set corresponds to the instruction. The LMT store includes storing data from the data set, atomically, to the coprocessor based on an LMT line (LMTLINE). The LMTLINE is wider than the data-processing width. The processor accelerator sends, to the processor, a response to the instruction. The response is based on completion of the LMT store of the data set in its entirety. The processor accelerator enables the processor to perform useful work in parallel with the LMT store, thereby improving processing performance of the processor.
Type: Grant
Filed: September 30, 2022
Date of Patent: April 16, 2024
Assignee: Marvell Asia Pte Ltd
Inventors: Aadeetya Shreedhar, Jason D. Zebchuk, Wilson P. Snyder, II, Albert Ma, Joseph Featherston
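A loose software analogy for the behavior this abstract describes (staging a full data set, committing it atomically, and signaling completion while the issuing thread keeps doing useful work) might look like the sketch below. All names here are invented for illustration; the patent covers a hardware mechanism, not this code.

```python
# Software analogy (assumed, not the hardware mechanism) of an LMT store:
# the data set is committed to the "coprocessor" as one atomic unit, and a
# response is signaled on completion while the issuer works in parallel.
import threading

coprocessor = []                 # stand-in for the coprocessor's input
commit_lock = threading.Lock()
done = threading.Event()

def lmt_store(data_set):
    with commit_lock:            # the whole data set lands atomically
        coprocessor.append(tuple(data_set))
    done.set()                   # "response" once the store completes

worker = threading.Thread(target=lmt_store, args=([1, 2, 3, 4],))
worker.start()
useful_work = sum(range(100))    # issuing thread proceeds in parallel
done.wait()
worker.join()
assert coprocessor == [(1, 2, 3, 4)] and useful_work == 4950
```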
-
Patent number: 11620225
Abstract: A circuit and corresponding method map memory addresses onto cache locations within set-associative (SA) caches of various cache sizes. The circuit comprises a modulo-arithmetic circuit that performs a plurality of modulo operations on an input memory address and produces a plurality of modulus results based on the plurality of modulo operations performed. The plurality of modulo operations performed are based on a cache size associated with an SA cache. The circuit further comprises a multiplexer circuit and an output circuit. The multiplexer circuit outputs selected modulus results by selecting modulus results from among the plurality of modulus results produced. The selecting is based on the cache size. The output circuit outputs a cache location within the SA cache based on the selected modulus results and the cache size. Such mapping of the input memory address onto the cache location is performed at a lower cost relative to a general-purpose divider.
Type: Grant
Filed: July 8, 2022
Date of Patent: April 4, 2023
Assignee: Marvell Asia Pte Ltd
Inventor: Albert Ma
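The mapping this abstract describes can be modeled in software as combining precomputed per-chunk residues instead of running a general-purpose divide. The sketch below is a minimal illustration of that idea; the chunk width, line size, and address width are assumptions for this example, not values from the patent.

```python
# Model of mapping an address onto a set index of a set-associative cache
# whose set count need not be a power of two, without a wide division.
# Each address chunk contributes a precomputed residue; only the final
# reduction of a small sum uses modulo.

LINE_BITS = 6          # 64-byte cache lines (assumed)
CHUNK_BITS = 8         # process the block address 8 bits at a time (assumed)

def set_index(addr: int, num_sets: int, addr_bits: int = 48) -> int:
    block = addr >> LINE_BITS
    # Precompute (2 ** (i * CHUNK_BITS)) mod num_sets per chunk position;
    # in hardware these would be fixed tables selected by cache size.
    n_chunks = (addr_bits + CHUNK_BITS - 1) // CHUNK_BITS
    weights = [pow(2, i * CHUNK_BITS, num_sets) for i in range(n_chunks)]
    total = 0
    for i, w in enumerate(weights):
        chunk = (block >> (i * CHUNK_BITS)) & ((1 << CHUNK_BITS) - 1)
        total += chunk * w          # small multiplies, no wide division
    return total % num_sets          # final reduction over a small value

# Agrees with a direct modulo for a non-power-of-two set count:
assert set_index(0x123456789ABC, 3 * 1024) == (0x123456789ABC >> 6) % (3 * 1024)
```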
-
Patent number: 11474953
Abstract: A method of translating a virtual address into a physical memory address in an ARM System Memory Management Unit version 3 (SMMUv3) system includes searching a Configuration Cache memory for a matching tag that matches an associated tag upon receiving the virtual address and the associated tag, and extracting, in a single memory lookup cycle, a matching data field associated with the matching tag when the matching tag is found in the Configuration Cache memory. A matching data field of the Configuration Cache memory includes a matching Stream Table Entry (STE) and a matching Context Descriptor (CD), both associated with the matching tag. The Configuration Cache memory may be configured as a content-addressable memory. The method further includes storing entries associated with a multiple memory lookup cycle virtual address-to-physical address translation into the Configuration Cache memory, each of the entries including a tag, an associated STE and an associated CD.
Type: Grant
Filed: October 12, 2018
Date of Patent: October 18, 2022
Assignee: MARVELL ASIA PTE, LTD.
Inventors: Manan Salvi, Albert Ma
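The caching idea in this abstract can be sketched as a single keyed lookup that returns the STE and CD together, replacing two dependent table walks. Table layouts and field names below are illustrative assumptions, not taken from the patent or the SMMUv3 specification.

```python
# Model of the Configuration Cache described above: one tag lookup yields
# both the Stream Table Entry (STE) and the Context Descriptor (CD),
# instead of a multi-cycle walk through two dependent tables.

stream_table = {0x10: {"cd_base": 0x2000, "attrs": "ste-attrs"}}   # tag -> STE
context_table = {0x2000: {"ttb": 0x80000, "asid": 7}}              # base -> CD

config_cache = {}   # tag -> (STE, CD), filled after a full walk

def lookup(tag):
    # Fast path: one lookup cycle when the tag hits in the cache.
    if tag in config_cache:
        return config_cache[tag]
    # Slow path: dependent walks, then install the pair for next time.
    ste = stream_table[tag]
    cd = context_table[ste["cd_base"]]
    config_cache[tag] = (ste, cd)
    return ste, cd

ste, cd = lookup(0x10)            # slow path: walks both tables
assert 0x10 in config_cache       # STE/CD pair now cached together
assert lookup(0x10) == (ste, cd)  # fast path: single lookup
```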
-
Patent number: 11416405
Abstract: A circuit and corresponding method map memory addresses onto cache locations within set-associative (SA) caches of various cache sizes. The circuit comprises a modulo-arithmetic circuit that performs a plurality of modulo operations on an input memory address and produces a plurality of modulus results based on the plurality of modulo operations performed. The plurality of modulo operations performed are based on a cache size associated with an SA cache. The circuit further comprises a multiplexer circuit and an output circuit. The multiplexer circuit outputs selected modulus results by selecting modulus results from among the plurality of modulus results produced. The selecting is based on the cache size. The output circuit outputs a cache location within the SA cache based on the selected modulus results and the cache size. Such mapping of the input memory address onto the cache location is performed at a lower cost relative to a general-purpose divider.
Type: Grant
Filed: February 5, 2021
Date of Patent: August 16, 2022
Assignee: MARVELL ASIA PTE LTD
Inventor: Albert Ma
-
Publication number: 20220098167
Abstract: Compounds of general formula (I), and their tautomeric forms, all enantiomers and isotopic variants, and salts and solvates thereof: wherein represents a single or a double bond and R1, R2, X1, X2, X3, X4, X5, Y and Z are as defined herein; are useful for treating respiratory disease and other diseases and conditions modulated by TMEM16A.
Type: Application
Filed: December 10, 2021
Publication date: March 31, 2022
Inventors: Stephen COLLINGWOOD, Clive MCCARTHY, Duncan Alexander HAY, Jonathan David HARGRAVE, Albert MA, Thomas Beauregard SCHOFIELD, Matthew SMITH, Edward WALKER, Naomi WENT, Peter INGRAM, Christopher STIMSON, Someina KHOR
-
Publication number: 20200117613
Abstract: A method of translating a virtual address into a physical memory address in an ARM SMMUv3 system may comprise searching a Configuration Cache memory for a matching tag that matches the associated tag upon receiving the virtual address and an associated tag, and extracting, in a single memory lookup cycle, a matching data field associated with the matching tag when the matching tag is found in the Configuration Cache memory. The matching data field of the Configuration Cache may comprise a matching Stream Table Entry (STE) and a matching Context Descriptor (CD), both associated with the matching tag. The Configuration Cache may be configured as a content-addressable memory. The method may further comprise storing entries associated with a multiple memory lookup cycle virtual address-to-physical address translation into the Configuration Cache memory, each of the entries comprising a tag, an associated STE and an associated CD.
Type: Application
Filed: October 12, 2018
Publication date: April 16, 2020
Inventors: Manan Salvi, Albert Ma
-
Patent number: 10339054
Abstract: Execution of the memory instructions is managed using memory management circuitry including a first cache that stores a plurality of the mappings in the page table, and a second cache that stores entries based on virtual addresses. The memory management circuitry executes operations from the one or more modules, including, in response to a first operation that invalidates at least a first virtual address, selectively ordering each of a plurality of in progress operations that were in progress when the first operation was received by the memory management circuitry, wherein a position in the ordering of a particular in progress operation depends on either or both of: (1) which of one or more modules initiated the particular in progress operation, or (2) whether or not the particular in progress operation provides results to the first cache or second cache.
Type: Grant
Filed: February 7, 2018
Date of Patent: July 2, 2019
Assignee: Cavium, LLC
Inventors: Shubhendu Sekhar Mukherjee, Albert Ma, Mike Bertone
-
Patent number: 10303514
Abstract: In an embodiment, a method of providing quality of service (QoS) to at least one resource of a hardware processor includes providing, in a memory of the hardware processor, a context including at least one quality of service parameter and allocating access to the at least one resource of the hardware processor based on the quality of service parameter of the context, a device identifier, a virtual machine identifier, and the context.
Type: Grant
Filed: November 13, 2015
Date of Patent: May 28, 2019
Assignee: Cavium, LLC
Inventors: Wilson P. Snyder, II, Varada Ogale, Anna Kujtkowski, Albert Ma
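As a toy illustration of the allocation this abstract describes (not the patented mechanism), a per-context QoS parameter keyed by device and virtual-machine identifiers can drive a proportional split of a resource. The keys, weights, and proportional policy below are assumptions for the example.

```python
# Illustrative sketch: each context carries a QoS weight, looked up by
# (device_id, vm_id, context_id); resource units are split in proportion
# to those weights.

contexts = {
    # (device_id, vm_id, context_id): QoS weight (illustrative values)
    (1, 0, 0): 3,
    (1, 1, 1): 1,
}

def allocate(total_units: int):
    # Divide the resource proportionally to each context's QoS weight.
    weight_sum = sum(contexts.values())
    return {key: total_units * w // weight_sum for key, w in contexts.items()}

shares = allocate(16)
assert shares[(1, 0, 0)] == 12 and shares[(1, 1, 1)] == 4
```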
-
Patent number: 10078601
Abstract: In an embodiment, interfacing a pipeline with two or more interfaces in a hardware processor includes providing a single pipeline in a hardware processor. The single pipeline presents at least two visible units. The single pipeline includes replicated architecturally visible structures, shared logic resources, and shared architecturally hidden structures. The method further includes receiving a request from one of a plurality of interfaces at one of the visible units. The method also includes tagging the request with an identifier based on the one of the at least two visible units that received the request. The method further includes processing the request in the single pipeline by propagating the request through the single pipeline through the replicated architecturally visible structures that correspond with the identifier.
Type: Grant
Filed: November 13, 2015
Date of Patent: September 18, 2018
Assignee: Cavium, Inc.
Inventors: Wilson P. Snyder, II, Anna Kujtkowski, Albert Ma, Paul G. Scrobohaci
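The tagging scheme in this abstract can be modeled as one shared datapath serving two visible units, with each request tagged by the unit that received it and the tag selecting per-unit replicated state. All names below are invented for the sketch; the patent describes hardware structures, not this code.

```python
# Model: one shared pipeline, two architecturally visible units. A request
# is tagged at the interface that received it; the tag selects that unit's
# replicated state while the processing logic itself is shared.

replicated_state = {"unit0": {"count": 0}, "unit1": {"count": 0}}

def receive(unit_id, payload):
    return {"tag": unit_id, "payload": payload}   # tag at the interface

def pipeline(req):
    state = replicated_state[req["tag"]]          # tag selects the replica
    state["count"] += 1                           # per-unit visible state
    return req["payload"] * 2                     # shared logic for all units

assert pipeline(receive("unit0", 5)) == 10
assert pipeline(receive("unit1", 7)) == 14
assert replicated_state["unit0"]["count"] == 1    # each unit's state separate
assert replicated_state["unit1"]["count"] == 1
```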
-
Publication number: 20180165197
Abstract: Execution of the memory instructions is managed using memory management circuitry including a first cache that stores a plurality of the mappings in the page table, and a second cache that stores entries based on virtual addresses. The memory management circuitry executes operations from the one or more modules, including, in response to a first operation that invalidates at least a first virtual address, selectively ordering each of a plurality of in progress operations that were in progress when the first operation was received by the memory management circuitry, wherein a position in the ordering of a particular in progress operation depends on either or both of: (1) which of one or more modules initiated the particular in progress operation, or (2) whether or not the particular in progress operation provides results to the first cache or second cache.
Type: Application
Filed: February 7, 2018
Publication date: June 14, 2018
Inventors: Shubhendu Sekhar Mukherjee, Albert Ma, Mike Bertone
-
Patent number: 9910776
Abstract: Execution of the memory instructions is managed using memory management circuitry including a first cache that stores a plurality of the mappings in the page table, and a second cache that stores entries based on virtual addresses. The memory management circuitry executes operations from the one or more modules, including, in response to a first operation that invalidates at least a first virtual address, selectively ordering each of a plurality of in progress operations that were in progress when the first operation was received by the memory management circuitry, wherein a position in the ordering of a particular in progress operation depends on either or both of: (1) which of one or more modules initiated the particular in progress operation, or (2) whether or not the particular in progress operation provides results to the first cache or second cache.
Type: Grant
Filed: November 14, 2014
Date of Patent: March 6, 2018
Assignee: Cavium, Inc.
Inventors: Shubhendu Sekhar Mukherjee, Albert Ma, Mike Bertone
-
Patent number: 9678717
Abstract: In an embodiment, a method includes, in a hardware processor, producing, by a block of hardware logic resources, a constrained randomly generated or pseudo-randomly generated number (CRGN) based on a bit mask stored in a register memory.
Type: Grant
Filed: November 13, 2015
Date of Patent: June 13, 2017
Assignee: CAVIUM, INC.
Inventors: Wilson P. Snyder, II, Varada Ogale, Anna Kujtkowski, Albert Ma
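One common way to constrain a pseudo-random number with a bit mask (an assumption about the general technique, not necessarily the patented circuit) is to generate raw random bits and keep only the positions the mask allows, so every output has zeros wherever the mask does.

```python
# Sketch of a mask-constrained pseudo-random number generator: the mask
# register determines which bit positions may ever be set in the output.
import random

def crgn(mask: int, rng: random.Random) -> int:
    raw = rng.getrandbits(mask.bit_length() or 1)
    return raw & mask           # force disallowed bit positions to zero

rng = random.Random(0)          # seeded for reproducibility
mask = 0b10101100
for _ in range(1000):
    r = crgn(mask, rng)
    assert r & ~mask == 0       # only masked-in bits are ever set
```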
-
Patent number: 9405702
Abstract: A core executes memory instructions. A memory management unit (MMU) coupled to the core includes a first cache that stores a plurality of final mappings of a hierarchical page table, a page table walker that traverses levels of the page table to provide intermediate results associated with respective levels for determining the final mappings, and a second cache that stores a limited number of intermediate results provided by the page table walker. The MMU compares a portion of the first virtual address to portions of entries in the second cache, in response to a request from the core to invalidate a first virtual address, based on a match criterion that depends on the level associated with each intermediate result stored in an entry in the second cache, and removes any entries in the second cache that satisfy the match criterion.
Type: Grant
Filed: November 14, 2014
Date of Patent: August 2, 2016
Assignee: Cavium, Inc.
Inventors: Shubhendu Sekhar Mukherjee, Mike Bertone, Albert Ma
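The level-dependent match criterion in this abstract can be sketched as follows: each cached intermediate walk result covers an address region whose size depends on its page-table level, so an invalidate removes any entry whose region contains the target virtual address. The level-to-region widths below assume a 4 KB granule with 9-bit table indices, which is an illustrative choice, not a detail from the patent.

```python
# Model of a walk cache holding intermediate page-table results, with an
# invalidation match that compares only the address bits above each
# entry's level-dependent region size.

REGION_BITS = {0: 39, 1: 30, 2: 21}   # bits covered below a level-N entry (assumed)

walk_cache = {}   # (level, va >> REGION_BITS[level]) -> intermediate result

def insert(level, va, result):
    walk_cache[(level, va >> REGION_BITS[level])] = result

def invalidate(va):
    # Match criterion depends on each entry's level: an entry matches if
    # the target address falls anywhere inside the region it covers.
    for level, prefix in list(walk_cache):
        if prefix == va >> REGION_BITS[level]:
            del walk_cache[(level, prefix)]

insert(2, 0x400000, "level-2 result")   # covers a 2 MB region
insert(1, 0x400000, "level-1 result")   # covers a 1 GB region
invalidate(0x401000)                    # falls inside both regions
assert not walk_cache                   # both intermediate entries removed
```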
-
Publication number: 20160140048
Abstract: A core executes memory instructions. A memory management unit (MMU) coupled to the core includes a first cache that stores a plurality of final mappings of a hierarchical page table, a page table walker that traverses levels of the page table to provide intermediate results associated with respective levels for determining the final mappings, and a second cache that stores a limited number of intermediate results provided by the page table walker. The MMU compares a portion of the first virtual address to portions of entries in the second cache, in response to a request from the core to invalidate a first virtual address, based on a match criterion that depends on the level associated with each intermediate result stored in an entry in the second cache, and removes any entries in the second cache that satisfy the match criterion.
Type: Application
Filed: November 14, 2014
Publication date: May 19, 2016
Inventors: Shubhendu Sekhar Mukherjee, Mike Bertone, Albert Ma
-
Publication number: 20160140043
Abstract: Execution of the memory instructions is managed using memory management circuitry including a first cache that stores a plurality of the mappings in the page table, and a second cache that stores entries based on virtual addresses. The memory management circuitry executes operations from the one or more modules, including, in response to a first operation that invalidates at least a first virtual address, selectively ordering each of a plurality of in progress operations that were in progress when the first operation was received by the memory management circuitry, wherein a position in the ordering of a particular in progress operation depends on either or both of: (1) which of one or more modules initiated the particular in progress operation, or (2) whether or not the particular in progress operation provides results to the first cache or second cache.
Type: Application
Filed: November 14, 2014
Publication date: May 19, 2016
Inventors: Shubhendu Sekhar Mukherjee, Albert Ma, Mike Bertone
-
Publication number: 20160139883
Abstract: In an embodiment, a method includes, in a hardware processor, producing, by a block of hardware logic resources, a constrained randomly generated or pseudo-randomly generated number (CRGN) based on a bit mask stored in a register memory.
Type: Application
Filed: November 13, 2015
Publication date: May 19, 2016
Inventors: Wilson P. Snyder, II, Varada Ogale, Anna Kujtkowski, Albert Ma
-
Publication number: 20160139950
Abstract: In an embodiment, a method of providing quality of service (QoS) to at least one resource of a hardware processor includes providing, in a memory of the hardware processor, a context including at least one quality of service parameter and allocating access to the at least one resource of the hardware processor based on the quality of service parameter of the context, a device identifier, a virtual machine identifier, and the context.
Type: Application
Filed: November 13, 2015
Publication date: May 19, 2016
Inventors: Wilson P. Snyder, II, Varada Ogale, Anna Kujtkowski, Albert Ma
-
Publication number: 20160140059
Abstract: In an embodiment, interfacing a pipeline with two or more interfaces in a hardware processor includes providing a single pipeline in a hardware processor. The single pipeline presents at least two visible units. The single pipeline includes replicated architecturally visible structures, shared logic resources, and shared architecturally hidden structures. The method further includes receiving a request from one of a plurality of interfaces at one of the visible units. The method also includes tagging the request with an identifier based on the one of the at least two visible units that received the request. The method further includes processing the request in the single pipeline by propagating the request through the single pipeline through the replicated architecturally visible structures that correspond with the identifier.
Type: Application
Filed: November 13, 2015
Publication date: May 19, 2016
Inventors: Wilson P. Snyder, II, Anna Kujtkowski, Albert Ma, Paul G. Scrobohaci