Patents by Inventor Vihar Soneji

Vihar Soneji has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12008375
    Abstract: A microprocessor includes a branch target buffer (BTB). Each BTB entry holds a tag based on at least a portion of a virtual address of a block of instructions previously fetched from a physically-indexed physically-tagged set associative instruction cache using a physical address that is a translation of the virtual address, a translated address bit portion of a set index of an instruction cache entry from which the instruction block was previously fetched, and a way number of the instruction cache entry from which the instruction block was previously fetched. In response to a BTB hit based on a fetch virtual address, the BTB provides a translated address bit portion of a predicted set index that is the translated address bit portion of the set index from the hit upon BTB entry and a predicted way number that is the way number from the hit upon BTB entry.
    Type: Grant
    Filed: June 8, 2022
    Date of Patent: June 11, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Michael N. Michael, Vihar Soneji
  • Patent number: 11977893
    Abstract: An instruction fetch pipeline includes first, second, and third sub-pipelines that respectively include: a TLB that receives a fetch virtual address, a tag random access memory (RAM) of a physically-indexed physically-tagged set associative instruction cache that receives a predicted set index, and a data RAM that receives the predicted set index and a predicted way number that specifies a way of the entry from which a block of instructions was previously fetched. The predicted set index specifies the instruction cache set that includes the entry. The three sub-pipelines respectively initiate in parallel: a TLB access using the fetch virtual address to obtain a translation thereof into a fetch physical address that includes a tag, a tag RAM access using the predicted set index to read a set of tags, and a data RAM access using the predicted set index and the predicted way number to fetch the block of instructions.
    Type: Grant
    Filed: June 8, 2022
    Date of Patent: May 7, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Michael N. Michael, Vihar Soneji
  • Patent number: 11880685
    Abstract: An instruction fetch pipeline includes first, second, and third sub-pipelines that respectively include: a TLB that receives a fetch virtual address, a tag random access memory (RAM) of a physically-indexed physically-tagged set associative instruction cache that receives a predicted set index, and a data RAM that receives the predicted set index and a predicted way number that specifies a way of the entry from which a block of instructions was previously fetched. The predicted set index specifies the instruction cache set that includes the entry. The three sub-pipelines respectively initiate in parallel: a TLB access using the fetch virtual address to obtain a translation thereof into a fetch physical address that includes a tag, a tag RAM access using the predicted set index to read a set of tags, and a data RAM access using the predicted set index and the predicted way number to fetch the block of instructions.
    Type: Grant
    Filed: June 8, 2022
    Date of Patent: January 23, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Michael N. Michael, Vihar Soneji
  • Publication number: 20230401066
    Abstract: A dynamically-foldable instruction fetch pipeline receives a first fetch request that includes a fetch virtual address and includes first, second and third sub-pipelines that respectively include a translation lookaside buffer (TLB) that translates the fetch virtual address into a fetch physical address, a tag random access memory (RAM) of a physically-indexed physically-tagged set associative instruction cache that receives a set index that selects a set of tag RAM tags for comparison with a tag portion of the fetch physical address to determine a correct way of the instruction cache, and a data RAM of the instruction cache that receives the set index and a way number that together specify a data RAM entry from which to fetch an instruction block. When a control signal indicates a folded mode, the sub-pipelines operate in a parallel manner. When the control signal indicates an unfolded mode, the sub-pipelines operate in a sequential manner.
    Type: Application
    Filed: June 8, 2022
    Publication date: December 14, 2023
    Inventors: John G. Favor, Michael N. Michael, Vihar Soneji
  • Publication number: 20230401065
    Abstract: A microprocessor includes a branch target buffer (BTB). Each BTB entry holds a tag based on at least a portion of a virtual address of a block of instructions previously fetched from a physically-indexed physically-tagged set associative instruction cache using a physical address that is a translation of the virtual address, a translated address bit portion of a set index of an instruction cache entry from which the instruction block was previously fetched, and a way number of the instruction cache entry from which the instruction block was previously fetched. In response to a BTB hit based on a fetch virtual address, the BTB provides a translated address bit portion of a predicted set index that is the translated address bit portion of the set index from the hit upon BTB entry and a predicted way number that is the way number from the hit upon BTB entry.
    Type: Application
    Filed: June 8, 2022
    Publication date: December 14, 2023
    Inventors: John G. Favor, Michael N. Michael, Vihar Soneji
  • Publication number: 20230401063
    Abstract: An instruction fetch pipeline includes first, second, and third sub-pipelines that respectively include: a TLB that receives a fetch virtual address, a tag random access memory (RAM) of a physically-indexed physically-tagged set associative instruction cache that receives a predicted set index, and a data RAM that receives the predicted set index and a predicted way number that specifies a way of the entry from which a block of instructions was previously fetched. The predicted set index specifies the instruction cache set that includes the entry. The three sub-pipelines respectively initiate in parallel: a TLB access using the fetch virtual address to obtain a translation thereof into a fetch physical address that includes a tag, a tag RAM access using the predicted set index to read a set of tags, and a data RAM access using the predicted set index and the predicted way number to fetch the block of instructions.
    Type: Application
    Filed: June 8, 2022
    Publication date: December 14, 2023
    Inventors: John G. Favor, Michael N. Michael, Vihar Soneji
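
The sketches below are illustrative only: loose software analogies of the hardware described in the abstracts above, in C. Every identifier, field width, and cache geometry in them is an assumption, not something taken from the patent claims.

A minimal sketch of the BTB entry described in patent 12008375 and publication 20230401065: each entry holds a tag derived from the virtual address of a previously fetched instruction block, the translated-address-bit portion of the instruction cache set index, and the way number of the instruction cache entry. On a hit, the lookup returns the predicted set-index bits and predicted way number.

```c
/* Hypothetical BTB entry and lookup; names, widths, and geometry are
 * illustrative assumptions, not taken from the patent claims. */
#include <stdbool.h>
#include <stdint.h>

#define BTB_ENTRIES 1024u   /* assumed number of BTB entries (power of two) */

typedef struct {
    bool     valid;
    uint32_t tag;            /* based on virtual-address bits of the fetched block */
    uint8_t  xlat_set_bits;  /* translated-address-bit portion of the I-cache set index */
    uint8_t  way;            /* I-cache way the block was previously fetched from */
} btb_entry_t;

static btb_entry_t btb[BTB_ENTRIES];

/* On a BTB hit for fetch_va, provide the predicted (translated) set-index
 * bits and the predicted way number so the instruction cache can be read
 * before the TLB translation of fetch_va completes. Returns false on miss. */
bool btb_lookup(uint64_t fetch_va, uint8_t *pred_set_bits, uint8_t *pred_way)
{
    uint32_t index = (uint32_t)(fetch_va >> 6) & (BTB_ENTRIES - 1); /* assumed 64-byte blocks */
    uint32_t tag   = (uint32_t)(fetch_va >> 16);                    /* assumed tag bits */
    const btb_entry_t *e = &btb[index];

    if (e->valid && e->tag == tag) {
        *pred_set_bits = e->xlat_set_bits;
        *pred_way      = e->way;
        return true;
    }
    return false;
}
```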
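A sketch, under the same assumptions, of the parallel fetch pipeline of patents 11977893 and 11880685 (and publication 20230401063): the TLB access, tag RAM access, and data RAM access are written as three consecutive calls standing in for three sub-pipelines that start in the same cycle, and the prediction is confirmed once the translation produces the physical-address tag.

```c
/* Hypothetical three-sub-pipeline fetch; in hardware the three accesses
 * start in the same cycle, here they are ordinary consecutive calls. */
#include <stdint.h>
#include <string.h>

#define ICACHE_SETS 256u
#define ICACHE_WAYS 4u
#define BLOCK_BYTES 64u

typedef struct {
    uint32_t tags[ICACHE_SETS][ICACHE_WAYS];              /* physical-address tags */
    uint8_t  data[ICACHE_SETS][ICACHE_WAYS][BLOCK_BYTES]; /* instruction bytes */
} icache_t;

/* Placeholder for the TLB: an identity translation stands in for a real
 * page-table-derived mapping. */
static uint64_t tlb_translate(uint64_t fetch_va) { return fetch_va; }

/* Fetch one instruction block using a predicted set index and way number
 * (for example, supplied by the BTB sketch above). Returns 0 when the
 * prediction is confirmed by the tag compare, -1 when it must be redone. */
int fetch_block(const icache_t *ic, uint64_t fetch_va,
                uint32_t pred_set, uint32_t pred_way,
                uint8_t out[BLOCK_BYTES])
{
    /* Sub-pipeline 1: TLB access with the fetch virtual address. */
    uint64_t fetch_pa = tlb_translate(fetch_va);
    uint32_t pa_tag   = (uint32_t)(fetch_pa >> 14);  /* assumed tag position */

    /* Sub-pipeline 2: tag RAM access with the predicted set index. */
    uint32_t set_tags[ICACHE_WAYS];
    memcpy(set_tags, ic->tags[pred_set], sizeof set_tags);

    /* Sub-pipeline 3: data RAM access with the predicted set index and way. */
    memcpy(out, ic->data[pred_set][pred_way], BLOCK_BYTES);

    /* Once translation completes, the tag read from the predicted way must
     * match the fetch physical address tag; otherwise the speculatively
     * fetched block is discarded. */
    return (set_tags[pred_way] == pa_tag) ? 0 : -1;
}
```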
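A sketch of the folded/unfolded control of publication 20230401066, reusing the icache_t type, tlb_translate(), and fetch_block() from the previous sketch: when the control signal indicates folded mode, the three accesses are issued together using the predicted set index and way; in unfolded mode the TLB access, tag compare, and data RAM read run one after another, with the tag compare selecting the correct way before the data RAM is read. The set-index derivation in the unfolded path is an assumption.

```c
/* Hypothetical folded/unfolded fetch control, reusing icache_t,
 * tlb_translate(), and fetch_block() from the previous sketch. */
int fetch_block_foldable(const icache_t *ic, uint64_t fetch_va, int folded,
                         uint32_t pred_set, uint32_t pred_way,
                         uint8_t out[BLOCK_BYTES])
{
    if (folded) {
        /* Folded mode: all three sub-pipelines are started together using
         * the predicted set index and way number. */
        return fetch_block(ic, fetch_va, pred_set, pred_way, out);
    }

    /* Unfolded mode: the sub-pipelines run sequentially. First the TLB
     * access, then the tag compare to determine the correct way, and only
     * then the data RAM read. */
    uint64_t fetch_pa = tlb_translate(fetch_va);
    uint32_t pa_tag   = (uint32_t)(fetch_pa >> 14);
    uint32_t set      = (uint32_t)(fetch_pa >> 6) & (ICACHE_SETS - 1); /* assumed set index */

    for (uint32_t way = 0; way < ICACHE_WAYS; way++) {
        if (ic->tags[set][way] == pa_tag) {
            memcpy(out, ic->data[set][way], BLOCK_BYTES);
            return 0;   /* hit in the correct way */
        }
    }
    return -1;          /* instruction cache miss */
}
```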