Patents by Inventor Jason Edward Podaima

Jason Edward Podaima has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9747213
    Abstract: Providing memory management unit (MMU) partitioned translation caches, and related apparatuses, methods, and computer-readable media. In this regard, an apparatus comprising an MMU is provided. The MMU comprises a translation cache providing a plurality of translation cache entries defining address translation mappings. The MMU further comprises a partition descriptor table providing a plurality of partition descriptors defining a corresponding plurality of partitions each comprising one or more translation cache entries of the plurality of translation cache entries. The MMU also comprises a partition translation circuit configured to receive a memory access request from a requestor. The partition translation circuit is further configured to determine a translation cache partition identifier (TCPID) of the memory access request, identify one or more partitions of the plurality of partitions based on the TCPID, and perform the memory access request on a translation cache entry of the one or more partitions.
    Type: Grant
    Filed: May 29, 2015
    Date of Patent: August 29, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Jason Edward Podaima, Bohuslav Rychlik, Carlos Javier Moreira, Serag Monier GadelRab, Paul Christopher John Wiercienski, Alexander Miretsky, Kyle John Ernewein
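The partitioning scheme in this abstract can be sketched in software. This is an illustrative model, not the patented hardware: class and method names such as `PartitionedTranslationCache` are hypothetical, and the eviction policy is an assumption.

```python
# Toy model of a translation cache partitioned by a translation cache
# partition identifier (TCPID): each requestor may only hit in, and fill,
# the entries belonging to its own partition(s).

class PartitionedTranslationCache:
    def __init__(self, num_entries, partitions):
        # partitions: maps a TCPID to the list of entry indices it may use
        self.entries = [None] * num_entries          # (virtual_page, physical_page)
        self.partitions = partitions

    def lookup(self, tcpid, virtual_page):
        # The memory access request is performed only on entries of the
        # partition(s) identified by the TCPID.
        for idx in self.partitions[tcpid]:
            entry = self.entries[idx]
            if entry is not None and entry[0] == virtual_page:
                return entry[1]
        return None                                  # miss: would fall back to a table walk

    def fill(self, tcpid, virtual_page, physical_page):
        # Fill (and evict) only within the requestor's own partition.
        indices = self.partitions[tcpid]
        victim = indices[0]
        for idx in indices:
            if self.entries[idx] is None:
                victim = idx
                break
        self.entries[victim] = (virtual_page, physical_page)
```

One requestor's fills cannot evict another partition's entries, which is the isolation property the partition descriptor table provides.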
  • Publication number: 20170091116
    Abstract: Providing memory management functionality using aggregated memory management units (MMUs), and related apparatuses and methods, are disclosed. In one aspect, an aggregated MMU is provided, comprising a plurality of input data paths, each including a plurality of input transaction buffers, and a plurality of output paths, each including a plurality of output transaction buffers. Some aspects of the aggregated MMU additionally provide one or more translation caches and/or one or more hardware page table walkers. The aggregated MMU further includes an MMU management circuit configured to retrieve a memory address translation request (MATR) from an input transaction buffer, perform a memory address translation operation based on the MATR to generate a translated memory address field (TMAF), and provide the TMAF to an output transaction buffer. The aggregated MMU also provides a plurality of output data paths, each configured to output transactions with resulting memory address translations.
    Type: Application
    Filed: September 25, 2015
    Publication date: March 30, 2017
    Inventors: Serag Monier GadelRab, Jason Edward Podaima, Ruolong Liu, Alexander Miretsky, Paul Christopher John Wiercienski, Kyle John Ernewein, Carlos Javier Moreira, Simon Peter William Booth, Meghal Varia, Thomas David Dryburgh
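The MATR-to-TMAF flow described in the abstract can be modeled as a simple buffered pipeline. This is a behavioural sketch only; the page size and function name are assumptions, and the real hardware arbitrates among paths rather than draining them in order.

```python
# Sketch of the aggregated-MMU flow: requests (MATRs) sit in per-path input
# transaction buffers, a shared management step translates each one, and the
# result (TMAF) lands in the matching output transaction buffer.
from collections import deque

PAGE = 0x1000  # assumed translation granule

def run_aggregated_mmu(input_buffers, translation_table):
    """input_buffers: list of deques of virtual addresses (the MATRs).
    translation_table: dict mapping virtual page -> physical page."""
    output_buffers = [deque() for _ in input_buffers]
    for path, buf in enumerate(input_buffers):
        while buf:
            matr = buf.popleft()
            # Translate the page number, preserve the page offset (the TMAF).
            phys_page = translation_table[matr // PAGE]
            tmaf = phys_page * PAGE + matr % PAGE
            output_buffers[path].append(tmaf)
    return output_buffers
```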
  • Publication number: 20170024145
    Abstract: Systems, methods, and computer program products are disclosed for reducing latency in a system that includes one or more processing devices, a system memory, and a cache memory. A pre-fetch command that identifies requested data is received from a requestor device. The requested data is pre-fetched from the system memory into the cache memory in response to the pre-fetch command. The data pre-fetch may be preceded by a pre-fetch of an address translation. A data access request corresponding to the pre-fetch command is then received, and in response to the data access request the data is provided from the cache memory to the requestor device.
    Type: Application
    Filed: July 23, 2015
    Publication date: January 26, 2017
    Inventors: TAREK ZGHAL, Alain Dominique Artieri, Jason Edward Podaima, Meghal Varia, Serag GadelRab
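The latency-reduction idea above is straightforward to model: move the slow memory read off the critical path before the real access arrives. A minimal sketch, with hypothetical names (`PrefetchingCache` is not from the patent):

```python
# A requestor issues a pre-fetch command; the data is pulled from "system
# memory" into a cache; the later data access request is served from cache.

class PrefetchingCache:
    def __init__(self, system_memory):
        self.system_memory = system_memory   # dict: address -> data
        self.cache = {}

    def prefetch(self, address):
        # Runs ahead of the real access, so the slow memory read has
        # completed by the time the data is actually needed.
        self.cache[address] = self.system_memory[address]

    def access(self, address):
        if address in self.cache:            # hit: low latency
            return self.cache[address]
        return self.system_memory[address]   # miss: pay the full latency
```

The abstract's point that the data pre-fetch may be preceded by an address-translation pre-fetch would correspond to warming the MMU's translation cache the same way before this step.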
  • Publication number: 20160350222
    Abstract: Providing memory management unit (MMU) partitioned translation caches, and related apparatuses, methods, and computer-readable media. In this regard, in one aspect, an apparatus comprising an MMU is provided. The MMU comprises a translation cache providing a plurality of translation cache entries defining address translation mappings. The MMU further comprises a partition descriptor table providing a plurality of partition descriptors defining a corresponding plurality of partitions each comprising one or more translation cache entries of the plurality of translation cache entries. The MMU also comprises a partition translation circuit configured to receive a memory access request from a requestor.
    Type: Application
    Filed: May 29, 2015
    Publication date: December 1, 2016
    Inventors: Jason Edward Podaima, Bohuslav Rychlik, Carlos Javier Moreira, Serag Monier GadelRab, Paul Christopher John Wiercienski, Alexander Miretsky, Kyle John Ernewein
  • Publication number: 20160350225
    Abstract: Systems and methods for pre-fetching address translations in a memory management unit (MMU) are disclosed. The MMU detects a triggering condition related to one or more translation caches associated with the MMU, the triggering condition associated with a trigger address, generates a sequence descriptor describing a sequence of address translations to pre-fetch into the one or more translation caches, the sequence of address translations comprising a plurality of address translations corresponding to a plurality of address ranges adjacent to an address range containing the trigger address, and issues an address translation request to the one or more translation caches for each of the plurality of address translations, wherein the one or more translation caches pre-fetch at least one address translation of the plurality of address translations into the one or more translation caches when the at least one address translation is not present in the one or more translation caches.
    Type: Application
    Filed: May 29, 2015
    Publication date: December 1, 2016
    Inventors: Jason Edward PODAIMA, Paul Christopher John WIERCIENSKI, Kyle John ERNEWEIN, Carlos Javier MOREIRA, Meghal VARIA, Serag GADELRAB, Muhammad Umar CHOUDRY
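The sequence-descriptor mechanism above can be sketched as follows. The range size and the number of adjacent ranges are assumptions for illustration, not values from the patent.

```python
# A triggering condition at a trigger address yields a sequence descriptor
# naming the address ranges adjacent to the one containing the trigger;
# translations for those ranges are then requested from the cache.

RANGE = 0x1000  # assumed translation granule

def make_sequence_descriptor(trigger_address, count=2):
    """Return the adjacent ranges on both sides of the trigger's range."""
    base = (trigger_address // RANGE) * RANGE
    ranges = []
    for i in range(1, count + 1):
        ranges.append(base + i * RANGE)      # ranges after the trigger
        ranges.append(base - i * RANGE)      # ranges before the trigger
    return ranges

def prefetch_sequence(cache, translate, trigger_address):
    for addr in make_sequence_descriptor(trigger_address):
        if addr not in cache:                # pre-fetch only genuine misses
            cache[addr] = translate(addr)
```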
  • Publication number: 20160350234
    Abstract: Systems and methods relate to performing address translations in a multithreaded memory management unit (MMU). Two or more address translation requests can be received by the multithreaded MMU and processed in parallel to retrieve address translations to addresses of a system memory. If the address translations are present in a translation cache of the multithreaded MMU, the address translations can be received from the translation cache and scheduled for access of the system memory using the translated addresses. If there is a miss in the translation cache, two or more address translation requests can be scheduled in two or more translation table walks in parallel.
    Type: Application
    Filed: September 20, 2015
    Publication date: December 1, 2016
    Inventors: Jason Edward PODAIMA, Paul Christopher John WIERCIENSKI, Carlos Javier MOREIRA, Alexander MIRETSKY, Meghal VARIA, Kyle John ERNEWEIN, Manokanthan SOMASUNDARAM, Muhammad Umar CHOUDRY, Serag Monier GADELRAB
  • Publication number: 20160306746
    Abstract: A comparand that includes a virtual address is received. Upon determining a match of the comparand to a burst entry tag, a candidate matching translation data unit is selected. The selecting is from a plurality of translation data units associated with the burst entry tag, and is based at least in part on at least one bit of the virtual address. Content of the candidate matching translation data unit is compared to at least a portion of the comparand. Upon a match, a hit is generated.
    Type: Application
    Filed: September 25, 2015
    Publication date: October 20, 2016
    Inventors: Jason Edward PODAIMA, Paul Christopher John WIERCIENSKI, Alexander MIRETSKY
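The two-step match in this abstract maps cleanly to code: upper bits of the comparand select a burst entry by tag, a low virtual-address bit picks one translation data unit within the burst, and only that candidate is compared in full. The bit positions below are illustrative assumptions.

```python
def burst_lookup(bursts, virtual_address):
    """bursts: dict mapping a burst entry tag to a list of translation data
    units, each unit a (va_tag, physical_page) pair or None."""
    tag = virtual_address >> 2               # upper bits form the burst entry tag
    if tag not in bursts:
        return None
    index = (virtual_address >> 1) & 0x1     # one VA bit selects the candidate unit
    unit = bursts[tag][index]
    if unit is not None and unit[0] == virtual_address:
        return unit[1]                       # contents match the comparand: hit
    return None
```

Because only one unit per burst is compared in full, a burst can hold several translations under a single tag without widening the main compare.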
  • Publication number: 20160283384
    Abstract: Methods and systems for pre-fetching address translations in a memory management unit (MMU) of a device are disclosed. In an embodiment, the MMU receives a pre-fetch command from an upstream component of the device, the pre-fetch command including an address of an instruction, pre-fetches a translation of the instruction from a translation table in a memory of the device, and stores the translation of the instruction in a translation cache associated with the MMU.
    Type: Application
    Filed: March 28, 2015
    Publication date: September 29, 2016
    Inventors: Jason Edward PODAIMA, Bohuslav RYCHLIK, Paul Christopher John WIERCIENSKI, Kyle John ERNEWEIN, Carlos Javier MOREIRA, Meghal VARIA, Serag GADELRAB
  • Publication number: 20160019168
    Abstract: The aspects include systems and methods of managing virtual memory page shareability. A processor or memory management unit may set in a page table an indication that a virtual memory page is not shareable with an outer domain processor. The processor or memory management unit may monitor for when the outer domain processor attempts or has attempted to access the virtual memory page. In response to the outer domain processor attempting to access the virtual memory page, the processor may perform a virtual memory page operation on the virtual memory page.
    Type: Application
    Filed: October 9, 2014
    Publication date: January 21, 2016
    Inventors: Bohuslav Rychlik, Jason Edward Podaima, Andrew Evan Gruber, Tzung Ren Tzeng, Zhenbiao Ma
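A simplified software model of the shareability scheme: a page-table flag marks a virtual page as not shareable with an outer-domain processor, and an access from that domain triggers a page operation instead of succeeding outright. The abstract leaves the operation itself open, so this sketch just records one.

```python
class PageTable:
    def __init__(self):
        self.pages = {}                      # page -> {"shareable": bool, "ops": []}

    def map_page(self, page, shareable):
        self.pages[page] = {"shareable": shareable, "ops": []}

    def outer_domain_access(self, page):
        entry = self.pages[page]
        if not entry["shareable"]:
            # Not shareable: perform a virtual memory page operation
            # rather than granting the access as-is.
            entry["ops"].append("virtual-memory-page-operation")
            return False
        return True
```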
  • Patent number: 6862655
    Abstract: A content addressable memory (CAM) is provided that can perform wide word searches. At least one CAM memory core having a plurality of bit pattern entry rows is included in the CAM. In addition, search logic is included that is capable of searching particular rows during each cycle. The search logic is also capable of allowing match line results of unsearched rows to remain unchanged during a cycle. The CAM further includes a serial AND array in communication with the bit pattern entry rows, wherein the serial AND array is capable of computing a match result for wide word entries that span multiple bit pattern entry rows. In one aspect, a match line enable signal is provided to the serial AND array, which facilitates computation of the match result.
    Type: Grant
    Filed: October 1, 2002
    Date of Patent: March 1, 2005
    Assignee: SiberCore Technologies, Inc.
    Inventors: Jason Edward Podaima, Sanjay Gupta, G. F. Randall Gibson, Radu Avramescu
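A software analogue of the wide-word search: the wide search word is split across several consecutive CAM rows, each row contributes a per-row match, and a serial AND combines those into a single wide-word result. Row widths and layout here are illustrative.

```python
def wide_word_match(rows, search_chunks, rows_per_word):
    """rows: flat list of stored row values; a wide word occupies
    rows_per_word consecutive rows. search_chunks: the search word already
    split into row-sized chunks. Returns indices of matching wide words."""
    hits = []
    for word_index in range(len(rows) // rows_per_word):
        base = word_index * rows_per_word
        match = True
        for i in range(rows_per_word):       # the serial AND across rows
            match = match and (rows[base + i] == search_chunks[i])
        if match:
            hits.append(word_index)
    return hits
```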
  • Patent number: 6775167
    Abstract: An invention is provided for low power searching in a CAM using sample words to save power in the compare lines. The invention includes comparing a sample section of stored data to a corresponding sample section of search data on a plurality of rows in the CAM. If a sample section of the stored data on any row of the plurality of rows is equivalent to the corresponding sample section of the search data, a remaining section of search data is allowed to propagate to the local compare lines coupled to the remaining section of the stored data of each row. However, if the sample section of the stored data is different from the corresponding sample section of the search data, the local compare lines coupled to the remaining section of the stored data on each row are latched.
    Type: Grant
    Filed: March 11, 2003
    Date of Patent: August 10, 2004
    Assignee: SiberCore Technologies, Inc.
    Inventors: Radu Avramescu, Jason Edward Podaima
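The power-saving compare described above is easy to express in software: the small sample section is compared first, and the full compare of the remaining section only happens on rows whose sample matched, mirroring how the compare lines stay latched for the rest. The return of a compare count below is purely to make the saving visible.

```python
def low_power_search(rows, search_sample, search_rest):
    """rows: list of (sample_section, remaining_section) stored per row.
    Returns (matching_row_indices, full_compares_performed)."""
    hits, full_compares = [], 0
    for idx, (sample, rest) in enumerate(rows):
        if sample != search_sample:
            continue                         # compare lines latched: no full compare
        full_compares += 1                   # search data propagates to this row only
        if rest == search_rest:
            hits.append(idx)
    return hits, full_compares
```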
  • Patent number: 6538911
    Abstract: An invention is disclosed for a content addressable memory (CAM) with a block select for power management. The CAM includes a plurality of memory blocks for storing data addressable within the CAM, and a search port in communication with the plurality of memory blocks. The search port is capable of facilitating search operations using the memory blocks. Also included in the CAM is a block select bus capable of selecting at least one specific memory block from the plurality of memory blocks. By using the block select bus, the search operations are performed using only the selected memory blocks. Similar to search operations, the block select signal or a similar signal can also be used to select specific memory blocks, wherein maintenance operations are performed using only the selected memory blocks.
    Type: Grant
    Filed: August 24, 2001
    Date of Patent: March 25, 2003
    Assignee: SiberCore Technologies, Inc.
    Inventors: Graham A. Allan, G. F. Randall Gibson, Jason Edward Podaima
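Block select reduces search power by gating which memory blocks participate at all. A minimal sketch of that behaviour, with the block select bus modeled as a list of enabled block indices:

```python
def block_select_search(blocks, selected_blocks, key):
    """blocks: list of lists of stored entries per memory block.
    selected_blocks: block indices enabled by the block select bus.
    Returns (block, row) pairs for every hit in the selected blocks."""
    hits = []
    for b in selected_blocks:                # only selected blocks are searched
        for row, entry in enumerate(blocks[b]):
            if entry == key:
                hits.append((b, row))
    return hits
```

The same selection would gate maintenance operations, per the abstract.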
  • Patent number: 6392910
    Abstract: A priority resolver for use in a CAM circuit priority encoder is provided. The priority resolver includes one or more priority resolver sub-units. Each priority resolver sub-unit includes local hit (pehit) generation circuitry. The local hit (pehit) generation circuitry is configured to generate pehit data. Also provided as part of a priority resolver sub-unit is a resolve processing circuit that is coupled to the local hit (pehit) generation circuitry. The resolve processing circuit is configured to receive the pehit data and an enable signal. Preferably, the resolve processing circuit includes input gating circuitry. An output differentiator and gating circuit is further provided as part of the priority resolver sub-unit and is configured to receive an output of the resolve processing circuit. In this embodiment, the priority resolver sub-unit is implemented in one or more stages of the priority resolver, and each stage is configured to include one or more priority resolver sub-units.
    Type: Grant
    Filed: August 17, 2000
    Date of Patent: May 21, 2002
    Assignee: SiberCore Technologies, Inc.
    Inventors: Jason Edward Podaima, Kenneth J. Schultz
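In software terms, the priority resolver's job is to take a vector of per-row hit signals (the pehit data) and pick the highest-priority hit, with an enable input gating the stage. The staged sub-unit structure is omitted in this sketch:

```python
def priority_resolve(pehits, enable=True):
    """pehits: list of booleans, index 0 = highest priority.
    Returns the index of the winning hit, or None if no hit survives."""
    if not enable:
        return None                          # stage gated off by the enable signal
    for index, hit in enumerate(pehits):
        if hit:
            return index                     # first (highest-priority) hit wins
    return None
```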
  • Patent number: 6044005
    Abstract: Binary and ternary content addressable memory (CAM) cells are disclosed, which permit the construction of high-performance, large-capacity CAM arrays. The CAM cells have a reduced match line power dissipation, a reduced compare line loading that is data independent, and a full swing comparator output. Match line power dissipation is limited by means of a NAND chain match line. Loading on compare lines is limited by connecting compare lines to the gate terminals of the CAM cell comparator. Local precharge devices at the output of the comparator provide full swing compare logic levels for faster matching. The same precharge devices also serve as an active reset for the comparator. Comparator circuits for ternary CAM cells further employ disable means, which makes the comparison operation conditional on the value stored in the mask memory element. The use of disable means allows the mask and data to be stored separately in a non-encoded form.
    Type: Grant
    Filed: February 3, 1999
    Date of Patent: March 28, 2000
    Assignee: Sibercore Technologies Incorporated
    Inventors: Garnet Fredrick Randall Gibson, Farhard Shafai, Jason Edward Podaima
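The ternary compare with a separately stored, non-encoded mask can be modeled bit by bit: a set mask bit disables the comparison for that bit, just as the disable means makes the comparison conditional on the mask memory element. The 8-bit width is an arbitrary choice for the sketch.

```python
def ternary_match(data, mask, search_key, width=8):
    """A row matches when every unmasked data bit equals the
    corresponding search key bit; masked bits always "match"."""
    for bit in range(width):
        if (mask >> bit) & 1:                # masked: comparison disabled
            continue
        if ((data >> bit) & 1) != ((search_key >> bit) & 1):
            return False
    return True
```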
  • Patent number: RE42684
    Abstract: A content addressable memory (CAM) is provided that can perform wide word searches. At least one CAM memory core having a plurality of bit pattern entry rows is included in the CAM. In addition, search logic is included that is capable of searching particular rows during each cycle. The search logic is also capable of allowing match line results of unsearched rows to remain unchanged during a cycle. The CAM further includes a serial AND array in communication with the bit pattern entry rows, wherein the serial AND array is capable of computing a match result for wide word entries that span multiple bit pattern entry rows. In one aspect, a match line enable signal is provided to the serial AND array, which facilitates computation of the match result.
    Type: Grant
    Filed: February 14, 2007
    Date of Patent: September 6, 2011
    Assignee: Core Networks LLC
    Inventors: Jason Edward Podaima, Sanjay Gupta, Randall Gibson, Radu Avramescu
  • Patent number: RE43359
    Abstract: Low power searching in a CAM uses sample words to save power in the compare lines. A method includes comparing a sample section of stored data to a corresponding sample section of search data on a plurality of rows in the CAM. If a sample section of the stored data on any row of the plurality of rows is equivalent to the corresponding sample section of the search data, a remaining section of search data is allowed to propagate to the local compare lines coupled to the remaining section of the stored data of each row. However, if the sample section of the stored data is different from the corresponding sample section of the search data, the local compare lines coupled to the remaining section of the stored data on each row are latched.
    Type: Grant
    Filed: August 10, 2006
    Date of Patent: May 8, 2012
    Assignee: Core Networks LLC
    Inventors: Radu Avramescu, Jason Edward Podaima