Patents by Inventor Ranjit J. Rozario

Ranjit J. Rozario has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240160353
    Abstract: Aspects relate to Input/Output (IO) Memory Management Units (MMUs) that include hardware structures for implementing virtualization. Some implementations allow guests to set up and maintain device IO tables within memory regions to which those guests have been given permissions by a hypervisor. Some implementations provide hardware page table walking capability within the IOMMU, while other implementations provide static tables. Such static tables may be maintained by a hypervisor on behalf of guests. Some implementations reduce the frequency of interrupts or invocations of the hypervisor by allowing transactions to be set up by guests, without hypervisor involvement, within their assigned device IO regions. Devices may communicate with the IOMMU to set up the requested memory transaction, and completion thereof may be signaled to the guest without hypervisor involvement. Various other aspects will be evident from the disclosure. (An illustrative sketch appears after this listing.)
    Type: Application
    Filed: January 10, 2024
    Publication date: May 16, 2024
    Inventors: Sanjay Patel, Ranjit J. Rozario
  • Patent number: 11907542
    Abstract: Aspects relate to Input/Output (IO) Memory Management Units (MMUs) that include hardware structures for implementing virtualization. Some implementations allow guests to set up and maintain device IO tables within memory regions to which those guests have been given permissions by a hypervisor. Some implementations provide hardware page table walking capability within the IOMMU, while other implementations provide static tables. Such static tables may be maintained by a hypervisor on behalf of guests. Some implementations reduce the frequency of interrupts or invocations of the hypervisor by allowing transactions to be set up by guests, without hypervisor involvement, within their assigned device IO regions. Devices may communicate with the IOMMU to set up the requested memory transaction, and completion thereof may be signaled to the guest without hypervisor involvement. Various other aspects will be evident from the disclosure.
    Type: Grant
    Filed: December 9, 2022
    Date of Patent: February 20, 2024
    Assignee: MIPS Tech, LLC
    Inventors: Sanjay Patel, Ranjit J. Rozario
  • Publication number: 20230105881
    Abstract: Aspects relate to Input/Output (IO) Memory Management Units (MMUs) that include hardware structures for implementing virtualization. Some implementations allow guests to set up and maintain device IO tables within memory regions to which those guests have been given permissions by a hypervisor. Some implementations provide hardware page table walking capability within the IOMMU, while other implementations provide static tables. Such static tables may be maintained by a hypervisor on behalf of guests. Some implementations reduce the frequency of interrupts or invocations of the hypervisor by allowing transactions to be set up by guests, without hypervisor involvement, within their assigned device IO regions. Devices may communicate with the IOMMU to set up the requested memory transaction, and completion thereof may be signaled to the guest without hypervisor involvement. Various other aspects will be evident from the disclosure.
    Type: Application
    Filed: December 9, 2022
    Publication date: April 6, 2023
    Inventors: Sanjay Patel, Ranjit J. Rozario
  • Patent number: 11599270
    Abstract: Aspects relate to Input/Output (IO) Memory Management Units (MMUs) that include hardware structures for implementing virtualization. Some implementations allow guests to set up and maintain device IO tables within memory regions to which those guests have been given permissions by a hypervisor. Some implementations provide hardware page table walking capability within the IOMMU, while other implementations provide static tables. Such static tables may be maintained by a hypervisor on behalf of guests. Some implementations reduce the frequency of interrupts or invocations of the hypervisor by allowing transactions to be set up by guests, without hypervisor involvement, within their assigned device IO regions. Devices may communicate with the IOMMU to set up the requested memory transaction, and completion thereof may be signaled to the guest without hypervisor involvement. Various other aspects will be evident from the disclosure.
    Type: Grant
    Filed: May 4, 2020
    Date of Patent: March 7, 2023
    Inventors: Sanjay Patel, Ranjit J. Rozario
  • Publication number: 20200264783
    Abstract: Aspects relate to Input/Output (IO) Memory Management Units (MMUs) that include hardware structures for implementing virtualization. Some implementations allow guests to set up and maintain device IO tables within memory regions to which those guests have been given permissions by a hypervisor. Some implementations provide hardware page table walking capability within the IOMMU, while other implementations provide static tables. Such static tables may be maintained by a hypervisor on behalf of guests. Some implementations reduce the frequency of interrupts or invocations of the hypervisor by allowing transactions to be set up by guests, without hypervisor involvement, within their assigned device IO regions. Devices may communicate with the IOMMU to set up the requested memory transaction, and completion thereof may be signaled to the guest without hypervisor involvement. Various other aspects will be evident from the disclosure.
    Type: Application
    Filed: May 4, 2020
    Publication date: August 20, 2020
    Inventors: Sanjay Patel, Ranjit J. Rozario
  • Patent number: 10671391
    Abstract: In an aspect, a processor supports modeless execution of 64-bit and 32-bit instructions. A Load/Store Unit (LSU) decodes an instruction that lacks explicit opcode data indicating whether the instruction is to operate in a 32-bit or 64-bit memory address space. The LSU treats the instruction as either a 32-bit or a 64-bit instruction depending on the values in the upper 32 bits of one or more 64-bit operands supplied to create an effective address in memory. In an example, the 4 GB space addressed by 32-bit addresses is divided between upper and lower portions of a 64-bit address space, such that a 32-bit instruction is differentiated from a 64-bit instruction depending on whether the upper 32 bits of one or more operands are all binary 1s or all binary 0s. Such a processor may support decoding of different arithmetic instructions for 32-bit and 64-bit operations. (An illustrative sketch appears after this listing.)
    Type: Grant
    Filed: February 2, 2015
    Date of Patent: June 2, 2020
    Assignee: MIPS Tech, LLC
    Inventors: Ranganathan Sudhakar, Ranjit J. Rozario
  • Patent number: 10649773
    Abstract: A system and method process atomic instructions. A processor system includes a load store unit (LSU), first and second registers, a memory interface, and a main memory. In response to a load link (LL) instruction, the LSU loads first data from memory into the first register and sets an LL bit (LLBIT) to indicate that a sequence of atomic instructions is being executed. The LSU further loads second data from memory into the second register in response to a load (LD) instruction. The LSU places a value of the second register into the memory interface in response to a store conditional coupled (SCX) instruction. In response to a store (SC) instruction, when the LLBIT is set, the LSU places a value of the second register into the memory interface and commits the first and second register values in the memory interface into the main memory. (An illustrative sketch appears after this listing.)
    Type: Grant
    Filed: April 7, 2016
    Date of Patent: May 12, 2020
    Assignee: MIPS Tech, LLC
    Inventors: Ranjit J. Rozario, Andrew F. Glew, Sanjay Patel, James Robinson, Sudhakar Ranganathan
  • Patent number: 10642501
    Abstract: Aspects relate to Input/Output (IO) Memory Management Units (MMUs) that include hardware structures for implementing virtualization. Some implementations allow guests to set up and maintain device IO tables within memory regions to which those guests have been given permissions by a hypervisor. Some implementations provide hardware page table walking capability within the IOMMU, while other implementations provide static tables. Such static tables may be maintained by a hypervisor on behalf of guests. Some implementations reduce the frequency of interrupts or invocations of the hypervisor by allowing transactions to be set up by guests, without hypervisor involvement, within their assigned device IO regions. Devices may communicate with the IOMMU to set up the requested memory transaction, and completion thereof may be signaled to the guest without hypervisor involvement. Various other aspects will be evident from the disclosure.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: May 5, 2020
    Assignee: MIPS Tech, LLC
    Inventors: Sanjay Patel, Ranjit J. Rozario
  • Patent number: 10185665
    Abstract: Embodiments disclosed pertain to apparatuses, systems, and methods for Translation Lookaside Buffers (TLBs) that support virtualization and multi-threading. Disclosed embodiments pertain to a TLB that includes a content addressable memory (CAM) with variable page size entries and a set associative memory with fixed page size entries. The CAM may include: a first set of logically contiguous entry locations, wherein the first set comprises a plurality of subsets, and each subset comprises logically contiguous entry locations for exclusive use of a corresponding virtual processing element (VPE); and a second set of logically contiguous entry locations, distinct from the first set, where the entry locations in the second set may be shared among available VPEs. The set associative memory may comprise a third set of logically contiguous entry locations shared among the available VPEs distinct from the first and second set of entry locations. (An illustrative sketch appears after this listing.)
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: January 22, 2019
    Assignee: MIPS Tech, LLC
    Inventors: Ranjit J. Rozario, Sanjay Patel
  • Patent number: 10108548
    Abstract: In one aspect, a processor has a register file, a private Level 1 (L1) cache, and an interface to a shared memory hierarchy (e.g., a Level 2 (L2) cache and so on). The processor has a Load Store Unit (LSU) that handles decoded load and store instructions. The processor may support out of order and multi-threaded execution. As store instructions arrive at the LSU for processing, the LSU determines whether a counter, from a set of counters, is allocated to the cache line affected by each store. If not, the LSU allocates a counter. If so, the LSU updates the counter. Also, in response to a store instruction affecting a cache line neighboring a cache line that has a counter meeting a criterion, the LSU characterizes that store instruction as one to be effected without obtaining ownership of the affected cache line, and provides that store to be serviced by an element of the shared memory hierarchy. (An illustrative sketch appears after this listing.)
    Type: Grant
    Filed: August 18, 2015
    Date of Patent: October 23, 2018
    Assignee: MIPS Tech, LLC
    Inventors: Ranjit J. Rozario, Era Nangia, Debasish Chandra, Ranganathan Sudhakar
  • Publication number: 20180089102
    Abstract: Embodiments disclosed pertain to apparatuses, systems, and methods for Translation Lookaside Buffers (TLBs) that support virtualization and multi-threading. Disclosed embodiments pertain to a TLB that includes a content addressable memory (CAM) with variable page size entries and a set associative memory with fixed page size entries. The CAM may include: a first set of logically contiguous entry locations, wherein the first set comprises a plurality of subsets, and each subset comprises logically contiguous entry locations for exclusive use of a corresponding virtual processing element (VPE); and a second set of logically contiguous entry locations, distinct from the first set, where the entry locations in the second set may be shared among available VPEs. The set associative memory may comprise a third set of logically contiguous entry locations shared among the available VPEs distinct from the first and second set of entry locations.
    Type: Application
    Filed: November 28, 2017
    Publication date: March 29, 2018
    Inventors: Ranjit J. Rozario, Sanjay Patel
  • Patent number: 9830275
    Abstract: Embodiments disclosed pertain to apparatuses, systems, and methods for Translation Lookaside Buffers (TLBs) that support virtualization and multi-threading. Disclosed embodiments pertain to a TLB that includes a content addressable memory (CAM) with variable page size entries and a set associative memory with fixed page size entries. The CAM may include: a first set of logically contiguous entry locations, wherein the first set comprises a plurality of subsets, and each subset comprises logically contiguous entry locations for exclusive use of a corresponding virtual processing element (VPE); and a second set of logically contiguous entry locations, distinct from the first set, where the entry locations in the second set may be shared among available VPEs. The set associative memory may comprise a third set of logically contiguous entry locations shared among the available VPEs distinct from the first and second set of entry locations.
    Type: Grant
    Filed: May 18, 2015
    Date of Patent: November 28, 2017
    Assignee: Imagination Technologies Limited
    Inventors: Ranjit J. Rozario, Sanjay Patel
  • Publication number: 20170293486
    Abstract: A system and method process atomic instructions. A processor system includes a load store unit (LSU), first and second registers, a memory interface, and a main memory. In response to a load link (LL) instruction, the LSU loads first data from memory into the first register and sets an LL bit (LLBIT) to indicate that a sequence of atomic instructions is being executed. The LSU further loads second data from memory into the second register in response to a load (LD) instruction. The LSU places a value of the second register into the memory interface in response to a store conditional coupled (SCX) instruction. In response to a store (SC) instruction, when the LLBIT is set, the LSU places a value of the second register into the memory interface and commits the first and second register values in the memory interface into the main memory.
    Type: Application
    Filed: April 7, 2016
    Publication date: October 12, 2017
    Inventors: Ranjit J. Rozario, Andrew F. Glew, Sanjay Patel, James Robinson, Sudhakar Ranganathan
  • Publication number: 20170293556
    Abstract: A system and method improve the management of a shared memory system. A multiprocessor system includes first and second CPUs, each having a private L1 cache. The system further includes a level 2 (L2) cache shared between the first CPU and the second CPU, a memory coherency manager (CM), and an I/O device. The second CPU is configured to request ownership of a cache line, held in a Modified state, in the L1 cache of the first CPU. Later, upon receiving a read discard command from the I/O device, the second CPU is configured to request that the CM update the cache line from the Modified state to a Shared state. (An illustrative sketch appears after this listing.)
    Type: Application
    Filed: April 7, 2016
    Publication date: October 12, 2017
    Inventor: Ranjit J. Rozario
  • Publication number: 20170293646
    Abstract: An apparatus, system, and method provide a way for tracking the age of items stored within a queue. An apparatus includes an item storage array and an array of age-tracking bits. The item storage array stores data of valid items stored in the queue. The array of age-tracking bits is associated with valid items stored in the queue. Age-tracking bits associated with a subset of items in the queue are set to a first value when the subset of items is older than other items in the queue. The younger items in the queue correspond to the age-tracking bits set to the first value. Other age-tracking bits associated with the subset of items in the queue are set to a second value when the subset of items is younger than other items in the queue. The older queue items correspond to the age-tracking bits set to the second value. (An illustrative sketch appears after this listing.)
    Type: Application
    Filed: April 7, 2016
    Publication date: October 12, 2017
    Inventors: Ranjit J. Rozario, Sudhakar Ranganathan
  • Patent number: 9740557
    Abstract: In one aspect, a pipelined ECC-protected cache access method and apparatus provides that during a normal operating mode, for a given cache transaction, a tag comparison action and a data RAM read are performed speculatively in a time during which an ECC calculation occurs. If a correctable error occurs, the tag comparison action and data RAM read are repeated and an error mode is entered. Subsequent transactions are processed by performing the ECC calculation, without concurrent speculative actions, and a tag comparison and read are performed using only the tag data available after the ECC calculation. A reset to normal mode is effected by detecting a gap between transactions that is sufficient to avoid a conflict for use of the tag comparison circuitry between an earlier transaction having a repeated tag comparison and a later transaction having a speculative tag comparison.
    Type: Grant
    Filed: February 2, 2015
    Date of Patent: August 22, 2017
    Assignee: Imagination Technologies Limited
    Inventors: Ranjit J. Rozario, Ranganathan Sudhakar
  • Patent number: 9740454
    Abstract: An integrated circuit implements a multistage processing pipeline, where control is passed in the pipeline with data to be processed according to the control. At least some of the pipeline stages can be implemented by different circuits clocked at different frequencies. These frequencies may change dynamically during operation of the integrated circuit. Control and data to be processed according to such control can be offset from each other in the pipeline; e.g., control can precede data by a pre-set number of clock events. To cross a clock domain, control and data can be temporarily stored in respective FIFOs. Reading of control by the destination domain is delayed by a delay amount determined so that reading of control and data can be offset from each other by a minimum number of clock events of the destination domain clock, and control is read before data is available for reading.
    Type: Grant
    Filed: May 9, 2016
    Date of Patent: August 22, 2017
    Assignee: Imagination Technologies Limited
    Inventor: Ranjit J. Rozario
  • Publication number: 20170123792
    Abstract: A processor includes a register and a load store unit (LSU). The LSU loads data into the register from a memory. When in little endian mode, bytes from sequentially increasing memory addresses are loaded in order of corresponding sequentially increasing byte memory addresses from a first end (right end) of the register to a second end (left end) of the register. When in big endian mode, bytes from sequentially increasing memory addresses are loaded in order of corresponding sequentially increasing memory addresses from the second end (left end) of the register to the first end (right end) of the register. Therefore, regardless of operating in little or big endian mode, the data in the register has its most significant byte on its left side and its least significant byte on its right side, which simplifies the execution of SIMD instructions because the data is aligned the same for both endian modes. (An illustrative sketch appears after this listing.)
    Type: Application
    Filed: November 3, 2015
    Publication date: May 4, 2017
    Inventors: Ranjit J. Rozario, Sudhakar Ranganathan
  • Publication number: 20160342523
    Abstract: Embodiments disclosed pertain to apparatuses, systems, and methods for Translation Lookaside Buffers (TLBs) that support virtualization and multi-threading. Disclosed embodiments pertain to a TLB that includes a content addressable memory (CAM) with variable page size entries and a set associative memory with fixed page size entries. The CAM may include: a first set of logically contiguous entry locations, wherein the first set comprises a plurality of subsets, and each subset comprises logically contiguous entry locations for exclusive use of a corresponding virtual processing element (VPE); and a second set of logically contiguous entry locations, distinct from the first set, where the entry locations in the second set may be shared among available VPEs. The set associative memory may comprise a third set of logically contiguous entry locations shared among the available VPEs distinct from the first and second set of entry locations.
    Type: Application
    Filed: May 18, 2015
    Publication date: November 24, 2016
    Inventors: Ranjit J. Rozario, Sanjay Patel
  • Publication number: 20160253151
    Abstract: An integrated circuit implements a multistage processing pipeline, where control is passed in the pipeline with data to be processed according to the control. At least some of the pipeline stages can be implemented by different circuits clocked at different frequencies. These frequencies may change dynamically during operation of the integrated circuit. Control and data to be processed according to such control can be offset from each other in the pipeline; e.g., control can precede data by a pre-set number of clock events. To cross a clock domain, control and data can be temporarily stored in respective FIFOs. Reading of control by the destination domain is delayed by a delay amount determined so that reading of control and data can be offset from each other by a minimum number of clock events of the destination domain clock, and control is read before data is available for reading.
    Type: Application
    Filed: May 9, 2016
    Publication date: September 1, 2016
    Inventor: Ranjit J. Rozario
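
Illustrative sketches

Each sketch below is a small, self-contained C program illustrating one idea from an abstract above. Any name, structure layout, size, or policy that the abstracts do not specify is an assumption made for illustration, not the patented design.

The IOMMU entries (publication 20240160353 and patents 11907542, 11599270, and 10642501) describe guests that maintain device IO tables inside memory regions delegated to them by a hypervisor, so that device transactions can be set up and completed without hypervisor involvement. A minimal sketch of such a guest-managed static table, assuming a hypothetical entry layout with a simple region-bounds and permission check:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical static device IO table entry maintained by a guest
     * inside a memory region the hypervisor has delegated to it. */
    typedef struct {
        uint64_t io_addr;    /* device-visible IO address (page aligned) */
        uint64_t phys_addr;  /* system physical address it maps to       */
        uint32_t perms;      /* bit0 = read allowed, bit1 = write allowed */
        bool     valid;
    } io_table_entry;

    /* Region of memory the hypervisor has granted to the guest for IO. */
    typedef struct {
        uint64_t base;
        uint64_t size;
        io_table_entry *entries;
        size_t num_entries;
    } guest_io_region;

    #define PAGE_MASK 0xFFFULL

    /* Translate a device IO address on behalf of a guest without invoking
     * the hypervisor: walk the guest's static table and check permissions. */
    static bool iommu_translate(const guest_io_region *r, uint64_t io_addr,
                                bool is_write, uint64_t *phys_out)
    {
        for (size_t i = 0; i < r->num_entries; i++) {
            const io_table_entry *e = &r->entries[i];
            if (!e->valid || (e->io_addr & ~PAGE_MASK) != (io_addr & ~PAGE_MASK))
                continue;
            /* Target must stay inside the hypervisor-delegated region. */
            if (e->phys_addr < r->base || e->phys_addr >= r->base + r->size)
                return false;
            if (is_write && !(e->perms & 0x2)) return false;
            if (!is_write && !(e->perms & 0x1)) return false;
            *phys_out = e->phys_addr | (io_addr & PAGE_MASK);
            return true;
        }
        return false; /* no mapping: would normally raise a fault to the guest */
    }

    int main(void)
    {
        io_table_entry table[1] = {
            { .io_addr = 0x10000, .phys_addr = 0x80002000, .perms = 0x3, .valid = true }
        };
        guest_io_region region = { .base = 0x80000000, .size = 0x100000,
                                   .entries = table, .num_entries = 1 };
        uint64_t pa;
        if (iommu_translate(&region, 0x10010, true, &pa))
            printf("IO 0x10010 -> phys 0x%llx\n", (unsigned long long)pa);
        return 0;
    }

A real IOMMU performs this check in hardware per device transaction; the sketch only shows the delegation of table maintenance to the guest and the bounds/permission enforcement.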
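
Patent 10671391 describes distinguishing 32-bit from 64-bit memory operations without opcode bits, by checking whether the upper 32 bits of the address operands are all binary 0s or all binary 1s. A small model of just that classification rule (a software illustration, not the LSU datapath):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Classify an effective-address operand as the abstract describes:
     * if the upper 32 bits are all 0s or all 1s, the operand can be treated
     * as a 32-bit address folded into the lower or upper portion of the
     * 64-bit space; anything else is treated as a genuine 64-bit address. */
    static bool looks_like_32bit_address(uint64_t operand)
    {
        uint32_t upper = (uint32_t)(operand >> 32);
        return upper == 0x00000000u || upper == 0xFFFFFFFFu;
    }

    int main(void)
    {
        uint64_t samples[] = {
            0x0000000080001000ULL,  /* upper bits all 0 -> 32-bit view */
            0xFFFFFFFF80001000ULL,  /* upper bits all 1 -> 32-bit view */
            0x0000123480001000ULL,  /* mixed upper bits -> 64-bit view */
        };
        for (unsigned i = 0; i < 3; i++)
            printf("0x%016llx -> %s\n", (unsigned long long)samples[i],
                   looks_like_32bit_address(samples[i]) ? "32-bit" : "64-bit");
        return 0;
    }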
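
Patent 10649773 (and publication 20170293486) describe an LL/SC-style atomic sequence gated by an LLBIT. The sketch below models only the LLBIT-gated conditional commit in a single thread; the coupled SCX/SC pairing and the real memory-interface behavior are simplified away, and all structure names are assumptions:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Simplified model of the LSU state named in the abstract: two registers,
     * a staging "memory interface" value, and the LLBIT flag. */
    typedef struct {
        uint64_t reg1, reg2;    /* first and second registers             */
        uint64_t staged_value;  /* value placed into the memory interface */
        bool     llbit;         /* set by LL, cleared when sequence ends  */
    } lsu_model;

    /* LL: load first data and mark an atomic sequence as in progress. */
    static void load_link(lsu_model *lsu, const uint64_t *mem, size_t idx)
    {
        lsu->reg1 = mem[idx];
        lsu->llbit = true;
    }

    /* LD: ordinary load of second data. */
    static void load(lsu_model *lsu, const uint64_t *mem, size_t idx)
    {
        lsu->reg2 = mem[idx];
    }

    /* SC: if the LLBIT is still set, commit the staged value to memory
     * and report success; otherwise the conditional store fails. */
    static bool store_conditional(lsu_model *lsu, uint64_t *mem, size_t idx)
    {
        if (!lsu->llbit)
            return false;
        lsu->staged_value = lsu->reg2;  /* place value into memory interface */
        mem[idx] = lsu->staged_value;   /* commit into main memory           */
        lsu->llbit = false;             /* sequence complete                 */
        return true;
    }

    int main(void)
    {
        uint64_t memory[2] = { 41, 0 };
        lsu_model lsu = {0};

        load_link(&lsu, memory, 0);                    /* LL */
        load(&lsu, memory, 0);                         /* LD */
        lsu.reg2 += 1;                                 /* compute new value */
        bool ok = store_conditional(&lsu, memory, 0);  /* SC */
        printf("SC %s, memory[0] = %llu\n", ok ? "succeeded" : "failed",
               (unsigned long long)memory[0]);
        return 0;
    }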
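
Patents 10185665 and 9830275 (and the related publications) describe a TLB whose CAM section holds variable-page-size entries split into per-VPE private ranges plus a shared range, alongside a fixed-page-size set associative section shared by all VPEs. A sketch of just the CAM partitioning arithmetic, with the entry counts chosen arbitrarily:

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_VPES        4
    #define PRIVATE_PER_VPE 8    /* CAM entries reserved per VPE (assumed) */
    #define SHARED_CAM      32   /* CAM entries shared among all VPEs      */
    #define CAM_ENTRIES     (NUM_VPES * PRIVATE_PER_VPE + SHARED_CAM)

    typedef struct {
        uint64_t vpn;        /* virtual page number         */
        uint64_t pfn;        /* physical frame number       */
        uint32_t page_size;  /* variable in the CAM section */
        int      owner_vpe;  /* -1 for shared entries       */
        int      valid;
    } cam_entry;

    /* The private entries for VPE v occupy the logically contiguous slice
     * [v * PRIVATE_PER_VPE, (v+1) * PRIVATE_PER_VPE); the remaining slice
     * [NUM_VPES * PRIVATE_PER_VPE, CAM_ENTRIES) is shared by all VPEs. */
    static int private_range_start(int vpe) { return vpe * PRIVATE_PER_VPE; }
    static int shared_range_start(void)     { return NUM_VPES * PRIVATE_PER_VPE; }

    int main(void)
    {
        cam_entry cam[CAM_ENTRIES] = {0};

        /* Mark ownership so lookups and replacements can respect the partition. */
        for (int v = 0; v < NUM_VPES; v++)
            for (int i = 0; i < PRIVATE_PER_VPE; i++)
                cam[private_range_start(v) + i].owner_vpe = v;
        for (int i = shared_range_start(); i < CAM_ENTRIES; i++)
            cam[i].owner_vpe = -1;

        printf("CAM: %d private entries per VPE, shared region starts at %d of %d\n",
               PRIVATE_PER_VPE, shared_range_start(), CAM_ENTRIES);
        /* A fixed-page-size set associative array (not modeled here) would
         * hold the additional entries shared among the VPEs. */
        return 0;
    }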
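
Patent 10108548 describes allocating counters to cache lines as stores arrive and, when a neighboring line's counter meets a criterion, sending a store to the shared memory hierarchy without first obtaining ownership of its line. The sketch below assumes a small counter pool and a simple threshold criterion; the abstract does not spell out either policy:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_COUNTERS 4
    #define THRESHOLD    3   /* assumed criterion: "streaming" after 3 stores */
    #define LINE_SHIFT   6   /* 64-byte cache lines                           */

    typedef struct {
        uint64_t line;    /* cache line address being tracked */
        unsigned count;   /* stores observed to that line     */
        bool     in_use;
    } store_counter;

    static store_counter counters[NUM_COUNTERS];

    /* Record a store and report whether a *neighboring* line is already being
     * written heavily, in which case this store could be sent to the shared
     * hierarchy without acquiring ownership of its line. */
    static bool record_store(uint64_t addr)
    {
        uint64_t line = addr >> LINE_SHIFT;
        int match = -1, free_slot = -1;
        bool neighbor_hot = false;

        for (int i = 0; i < NUM_COUNTERS; i++) {
            if (!counters[i].in_use) {
                if (free_slot < 0) free_slot = i;
                continue;
            }
            if (counters[i].line == line)
                match = i;
            else if ((counters[i].line + 1 == line || counters[i].line == line + 1)
                     && counters[i].count >= THRESHOLD)
                neighbor_hot = true;   /* a neighboring line meets the criterion */
        }
        if (match >= 0)
            counters[match].count++;   /* update the existing counter */
        else if (free_slot >= 0)       /* otherwise allocate a new one */
            counters[free_slot] = (store_counter){ .line = line, .count = 1,
                                                   .in_use = true };
        return neighbor_hot;
    }

    int main(void)
    {
        /* Three stores to line 0, then a store to the neighboring line. */
        record_store(0x00); record_store(0x08); record_store(0x10);
        bool bypass = record_store(0x40);
        printf("store to next line: %s\n",
               bypass ? "send without ownership" : "obtain ownership first");
        return 0;
    }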
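
Publication 20170293556 describes a CPU that, holding a line in the Modified state, asks the coherence manager to downgrade it to Shared when an I/O device issues a read discard. A small state-transition sketch restricted to the states named in the abstract:

    #include <stdio.h>

    /* Minimal MESI-style subset: only the states the abstract mentions. */
    typedef enum { INVALID, SHARED, MODIFIED } line_state;

    /* Downgrade requested by the owning CPU through the coherence manager
     * when an I/O device performs a read discard of the line: the data is
     * made visible and the line drops from Modified to Shared. */
    static line_state on_io_read_discard(line_state s)
    {
        if (s == MODIFIED)
            return SHARED;   /* writeback + downgrade, line stays readable */
        return s;            /* Shared/Invalid lines are unaffected        */
    }

    static const char *name(line_state s)
    {
        return s == MODIFIED ? "Modified" : s == SHARED ? "Shared" : "Invalid";
    }

    int main(void)
    {
        line_state line = MODIFIED;   /* second CPU has taken ownership */
        printf("before read discard: %s\n", name(line));
        line = on_io_read_discard(line);
        printf("after  read discard: %s\n", name(line));
        return 0;
    }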
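
Publication 20170293646 describes per-item age-tracking bits that record which queue entries are older or younger than others. One common structure for this kind of tracking is an age matrix, sketched below under that assumption (the patent's exact bit assignment may differ):

    #include <stdbool.h>
    #include <stdio.h>

    #define QSIZE 4

    /* Age matrix: age[i][j] == true means entry i is older than entry j. */
    static bool age[QSIZE][QSIZE];
    static bool valid[QSIZE];

    /* Allocate slot k as the youngest entry: every existing valid entry is
     * older than k, and k is older than nothing. */
    static void allocate(int k)
    {
        for (int i = 0; i < QSIZE; i++) {
            age[i][k] = valid[i];   /* existing entries are older than k */
            age[k][i] = false;      /* k is younger than everything      */
        }
        valid[k] = true;
    }

    /* Free slot k (e.g., after issue or completion). */
    static void release(int k)
    {
        valid[k] = false;
        for (int i = 0; i < QSIZE; i++)
            age[i][k] = age[k][i] = false;
    }

    /* The oldest valid entry is the one that is older than every other
     * valid entry. */
    static int oldest(void)
    {
        for (int i = 0; i < QSIZE; i++) {
            if (!valid[i]) continue;
            bool older_than_all = true;
            for (int j = 0; j < QSIZE; j++)
                if (j != i && valid[j] && !age[i][j])
                    older_than_all = false;
            if (older_than_all) return i;
        }
        return -1;   /* queue empty */
    }

    int main(void)
    {
        allocate(2); allocate(0); allocate(3);   /* arrival order: 2, 0, 3 */
        printf("oldest entry: %d\n", oldest());  /* prints 2 */
        release(2);
        printf("oldest entry: %d\n", oldest());  /* prints 0 */
        return 0;
    }

Setting one bit value when an entry is older and the complementary value when it is younger, as the abstract describes, corresponds here to the row and column views of the same matrix cell.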
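
Publication 20170123792 describes loading bytes so that, in either endian mode, the register ends up with its most significant byte on the left and its least significant byte on the right, keeping SIMD lane layout identical across modes. A software model of the two address-to-lane mappings (not the LSU datapath):

    #include <stdint.h>
    #include <stdio.h>

    /* Build the 64-bit register value from 8 bytes at increasing addresses.
     * In both modes the result is held with its most significant byte "on
     * the left" (bits 63..56); only the address-to-byte-lane mapping differs. */
    static uint64_t load_little_endian(const uint8_t *mem)
    {
        uint64_t reg = 0;
        for (int i = 0; i < 8; i++)
            reg |= (uint64_t)mem[i] << (8 * i);        /* addr 0 -> LSB (right) */
        return reg;
    }

    static uint64_t load_big_endian(const uint8_t *mem)
    {
        uint64_t reg = 0;
        for (int i = 0; i < 8; i++)
            reg |= (uint64_t)mem[i] << (8 * (7 - i));  /* addr 0 -> MSB (left) */
        return reg;
    }

    int main(void)
    {
        const uint8_t mem[8] = { 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88 };
        printf("little-endian load: 0x%016llx\n",
               (unsigned long long)load_little_endian(mem));
        printf("big-endian load:    0x%016llx\n",
               (unsigned long long)load_big_endian(mem));
        /* Either way the register's byte lanes are ordered MSB..LSB, so SIMD
         * operations see the same lane layout in both modes. */
        return 0;
    }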