Patents by Inventor Era K. Nangia

Era K. Nangia has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Brief, hypothetical code sketches illustrating several of the mechanisms described in these abstracts appear after the listing.

  • Patent number: 11934265
    Abstract: Techniques are disclosed relating to memory error tracking and logging. In some embodiments, memory cache controller circuitry is configured to track, using multiple circuit entries, the numbers of detected correctable errors associated with multiple respective locations, and in response to detecting a threshold number of correctable errors for a particular location, generate a signal to one or more processors that identifies the particular location. In some embodiments, the memory cache controller circuitry includes multiple circuit entries for tracking uncorrectable errors.
    Type: Grant
    Filed: June 1, 2022
    Date of Patent: March 19, 2024
    Assignee: Apple Inc.
    Inventors: Farid Nemati, Steven R. Hutsell, Derek R. Kumar, Bernard J. Semeria, James Vash, Era K. Nangia, Gregory S. Mathews
  • Publication number: 20230305924
    Abstract: Techniques are disclosed relating to memory error tracking and logging. In some embodiments, memory cache controller circuitry is configured to track, using multiple circuit entries, the numbers of detected correctable errors associated with multiple respective locations, and in response to detecting a threshold number of correctable errors for a particular location, generate a signal to one or more processors that identifies the particular location. In some embodiments, the memory cache controller circuitry includes multiple circuit entries for tracking uncorrectable errors.
    Type: Application
    Filed: June 1, 2022
    Publication date: September 28, 2023
    Inventors: Farid Nemati, Steven R. Hutsell, Derek R. Kumar, Bernard J. Semeria, James Vash, Era K. Nangia, Gregory S. Mathews
  • Patent number: 10768939
    Abstract: A load/store unit for a processor, and applications thereof. In an embodiment, the load/store unit includes a load/store queue configured to store information and data associated with a particular class of instructions. Data stored in the load/store queue can be bypassed to dependent instructions. When an instruction belonging to the particular class of instructions graduates and the instruction is associated with a cache miss, control logic causes a pointer to be stored in a load/store graduation buffer that points to an entry in the load/store queue associated with the instruction. The load/store graduation buffer ensures that graduated instructions access a shared resource of the load/store unit in program order.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: September 8, 2020
    Assignee: ARM Finance Overseas Limited
    Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni
  • Patent number: 10430340
    Abstract: A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed.
    Type: Grant
    Filed: March 23, 2017
    Date of Patent: October 1, 2019
    Assignee: ARM Finance Overseas Limited
    Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni
  • Publication number: 20190220283
    Abstract: A load/store unit for a processor, and applications thereof. In an embodiment, the load/store unit includes a load/store queue configured to store information and data associated with a particular class of instructions. Data stored in the load/store queue can be bypassed to dependent instructions. When an instruction belonging to the particular class of instructions graduates and the instruction is associated with a cache miss, control logic causes a pointer to be stored in a load/store graduation buffer that points to an entry in the load/store queue associated with the instruction. The load/store graduation buffer ensures that graduated instructions access a shared resource of the load/store unit in program order.
    Type: Application
    Filed: March 27, 2019
    Publication date: July 18, 2019
    Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni
  • Patent number: 10268481
    Abstract: A load/store unit for a processor, and applications thereof. In an embodiment, the load/store unit includes a load/store queue configured to store information and data associated with a particular class of instructions. Data stored in the load/store queue can be bypassed to dependent instructions. When an instruction belonging to the particular class of instructions graduates and the instruction is associated with a cache miss, control logic causes a pointer to be stored in a load/store graduation buffer that points to an entry in the load/store queue associated with the instruction. The load/store graduation buffer ensures that graduated instructions access a shared resource of the load/store unit in program order.
    Type: Grant
    Filed: March 12, 2018
    Date of Patent: April 23, 2019
    Assignee: ARM Finance Overseas Limited
    Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni
  • Publication number: 20180203702
    Abstract: A load/store unit for a processor, and applications thereof. In an embodiment, the load/store unit includes a load/store queue configured to store information and data associated with a particular class of instructions. Data stored in the load/store queue can be bypassed to dependent instructions. When an instruction belonging to the particular class of instructions graduates and the instruction is associated with a cache miss, control logic causes a pointer to be stored in a load/store graduation buffer that points to an entry in the load/store queue associated with the instruction. The load/store graduation buffer ensures that graduated instructions access a shared resource of the load/store unit in program order.
    Type: Application
    Filed: March 12, 2018
    Publication date: July 19, 2018
    Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni
  • Patent number: 9946547
    Abstract: A load/store unit for a processor, and applications thereof. In an embodiment, the load/store unit includes a load/store queue configured to store information and data associated with a particular class of instructions. Data stored in the load/store queue can be bypassed to dependent instructions. When an instruction belonging to the particular class of instructions graduates and the instruction is associated with a cache miss, control logic causes a pointer to be stored in a load/store graduation buffer that points to an entry in the load/store queue associated with the instruction. The load/store graduation buffer ensures that graduated instructions access a shared resource of the load/store unit in program order.
    Type: Grant
    Filed: September 29, 2006
    Date of Patent: April 17, 2018
    Assignee: ARM Finance Overseas Limited
    Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni
  • Publication number: 20170192894
    Abstract: A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed.
    Type: Application
    Filed: March 23, 2017
    Publication date: July 6, 2017
    Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni
  • Patent number: 9632939
    Abstract: A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed.
    Type: Grant
    Filed: June 25, 2015
    Date of Patent: April 25, 2017
    Assignee: ARM Finance Overseas Limited
    Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni, Karagada Ramarao Kishore
  • Publication number: 20150293853
    Abstract: A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed.
    Type: Application
    Filed: June 25, 2015
    Publication date: October 15, 2015
    Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni, Vidya Rajagopalan
  • Patent number: 9092343
    Abstract: A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed.
    Type: Grant
    Filed: September 21, 2009
    Date of Patent: July 28, 2015
    Assignee: ARM Finance Overseas Limited
    Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni, Vidya Rajagopalan
  • Patent number: 7769958
    Abstract: Livelocks are prevented in multiple core processors by canceling data access requests upon determining that they conflict with other data access requests. A requesting processor core sends a data access request that could potentially cause a livelock to a cache coherency manager. The cache coherency manager receives data access requests from multiple processor cores. The cache coherency manager sends intervention messages to all of the processor cores in response to all data access requests that may cause livelock. Upon receiving an intervention message from the cache coherency manager, the processor core determines if the intervention message corresponds with any of its own pending data access requests. If the intervention message is associated with a data access request conflicting with one of its own pending data access requests, the processor core responds to the intervention message by directing the cache coherency manager to cancel its own conflicting pending data access request.
    Type: Grant
    Filed: June 22, 2007
    Date of Patent: August 3, 2010
    Assignee: MIPS Technologies, Inc.
    Inventors: Ryan C. Kinter, Era K. Nangia
  • Publication number: 20100011166
    Abstract: A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed.
    Type: Application
    Filed: September 21, 2009
    Publication date: January 14, 2010
    Applicant: MIPS Technologies, Inc.
    Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni, Vidya Rajagopalan
  • Patent number: 7594079
    Abstract: A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed.
    Type: Grant
    Filed: October 11, 2006
    Date of Patent: September 22, 2009
    Assignee: MIPS Technologies, Inc.
    Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni, Vidya Rajagopalan
  • Publication number: 20080320231
    Abstract: Livelocks are prevented in multiple core processors by canceling data access requests upon determining that they conflict with other data access requests. A requesting processor core sends a data access request that could potentially cause a livelock to a cache coherency manager. The cache coherency manager receives data access requests from multiple processor cores. The cache coherency manager sends intervention messages to all of the processor cores in response to all data access requests that may cause livelock. Upon receiving an intervention message from the cache coherency manager, the processor core determines if the intervention message corresponds with any of its own pending data access requests. If the intervention message is associated with a data access request conflicting with one of its own pending data access requests, the processor core responds to the intervention message by directing the cache coherency manager to cancel its own conflicting pending data access request.
    Type: Application
    Filed: June 22, 2007
    Publication date: December 25, 2008
    Applicant: MIPS Technologies, Inc.
    Inventors: Ryan C. Kinter, Era K. Nangia
  • Publication number: 20080082794
    Abstract: A load/store unit for a processor, and applications thereof. In an embodiment, the load/store unit includes a load/store queue configured to store information and data associated with a particular class of instructions. Data stored in the load/store queue can be bypassed to dependent instructions. When an instruction belonging to the particular class of instructions graduates and the instruction is associated with a cache miss, control logic causes a pointer to be stored in a load/store graduation buffer that points to an entry in the load/store queue associated with the instruction. The load/store graduation buffer ensures that graduated instructions access a shared resource of the load/store unit in program order.
    Type: Application
    Filed: September 29, 2006
    Publication date: April 3, 2008
    Applicant: MIPS Technologies, Inc.
    Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni
  • Publication number: 20080082793
    Abstract: Apparatuses, systems, and methods for detecting and preventing write-after-write hazards, and applications thereof. In an embodiment, a load/store queue of a processor stores a first register destination value associated with a graduated load instruction. A graduation unit of the processor broadcasts a second register destination value associated with a graduating load instruction. Control logic coupled to the load/store queue and the graduation unit compares the first register destination value to the second register destination value. If the first register destination value and the second register destination value match, the control logic prevents the graduated load instruction from altering an architectural state of the processor.
    Type: Application
    Filed: September 29, 2006
    Publication date: April 3, 2008
    Applicant: MIPS Technologies, Inc.
    Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni, Karagada Ramarao Kishore
  • Publication number: 20080082721
    Abstract: A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed.
    Type: Application
    Filed: October 11, 2006
    Publication date: April 3, 2008
    Applicant: MIPS Technologies, Inc.
    Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni, Vidya Rajagopalan
  • Patent number: 6883156
    Abstract: A method of designing a circuit includes annotating relative positions of instantiated hierarchical macro cells, which include two or more instantiated standard cells. The relative positions of individual instantiated standard cells may also be annotated. Relative positions of instantiated hierarchical macro cells and individual instantiated standard cells may be altered to form a more compact standard cell configuration. Pin positions may also be annotated by relative position. Relative pin positions may be altered to promote dense signal line routing within the standard cell design. The relative positions of the instantiated hierarchical macro cells and individual instantiated standard cells are converted to absolute grid position locations to form a grid assigned circuit. The grid assigned circuit is then routed.
    Type: Grant
    Filed: May 31, 2002
    Date of Patent: April 19, 2005
    Assignee: MIPS Technologies, Inc.
    Inventors: Alex Khainson, Donald C. Ramsey, Jr., Lew Chua-Eoan, Era K. Nangia
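
Illustrative sketches

The abstracts above describe hardware mechanisms; the short code sketches below restate a few of them in software terms purely for illustration. They are hypothetical models written for this listing, not implementations taken from the patents.

The memory error tracking of patent 11934265 (and application 20230305924) can be pictured as a small table of per-location error counters that raises a report once a correctable-error threshold is crossed. The table size, threshold value, and every name below are assumptions made for the sketch.

```c
/* Sketch of per-location correctable-error tracking with a reporting
 * threshold. The table size, threshold, and names are hypothetical. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TRACKED_LOCATIONS 8   /* circuit entries available for tracking */
#define CE_THRESHOLD      4   /* correctable errors before signaling */

struct ce_entry {
    bool     valid;
    uint64_t location;        /* e.g., a physical address or cache index */
    uint32_t count;           /* correctable errors seen at this location */
};

static struct ce_entry ce_table[TRACKED_LOCATIONS];

/* Record a correctable error at `location`. Returns true when the threshold
 * is reached, i.e., when the location should be reported to the processors
 * (the "signal" in the abstract). */
bool record_correctable_error(uint64_t location)
{
    int free_slot = -1;
    for (int i = 0; i < TRACKED_LOCATIONS; i++) {
        if (ce_table[i].valid && ce_table[i].location == location) {
            if (++ce_table[i].count >= CE_THRESHOLD) {
                printf("report location 0x%llx to the processors\n",
                       (unsigned long long)location);
                ce_table[i].count = 0;   /* design choice: restart the count */
                return true;
            }
            return false;
        }
        if (!ce_table[i].valid && free_slot < 0)
            free_slot = i;
    }
    if (free_slot >= 0)   /* begin tracking a newly seen location */
        ce_table[free_slot] = (struct ce_entry){ true, location, 1 };
    return false;
}
```

Uncorrectable errors would be counted in a separate set of entries, as the abstract notes.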
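
The load/store graduation buffer of patents 9946547, 10268481, and 10768939 can be modeled as a FIFO of indices into the load/store queue: when a load/store that missed the cache graduates, its queue index is pushed, and indices are drained oldest-first so graduated instructions reach the shared resource in program order. A minimal sketch, with hypothetical types and sizes:

```c
/* Sketch of a load/store graduation buffer: a FIFO of pointers (indices)
 * into the load/store queue, drained oldest-first so graduated instructions
 * use the shared resource in program order. Sizes and names are hypothetical. */
#include <stdbool.h>
#include <stdint.h>

#define GRAD_BUF_ENTRIES 8

struct grad_buffer {
    uint8_t  lsq_index[GRAD_BUF_ENTRIES];  /* pointers into the load/store queue */
    unsigned head, tail, count;
};

/* When an instruction graduates with a cache miss, remember which
 * load/store-queue entry it owns. Returns false if the buffer is full. */
bool grad_buffer_push(struct grad_buffer *gb, uint8_t lsq_index)
{
    if (gb->count == GRAD_BUF_ENTRIES)
        return false;
    gb->lsq_index[gb->tail] = lsq_index;
    gb->tail = (gb->tail + 1) % GRAD_BUF_ENTRIES;
    gb->count++;
    return true;
}

/* The oldest graduated instruction is serviced first, which is what keeps
 * accesses to the shared resource in program order. */
bool grad_buffer_pop(struct grad_buffer *gb, uint8_t *lsq_index_out)
{
    if (gb->count == 0)
        return false;
    *lsq_index_out = gb->lsq_index[gb->head];
    gb->head = (gb->head + 1) % GRAD_BUF_ENTRIES;
    gb->count--;
    return true;
}
```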
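
The virtual-hint way prediction of patents 7594079, 9092343, 9632939, and 10430340 splits a cache access into a speculative forward from a predicted way and a later check of the physical-address tag, with dependent instructions invalidated or replayed on a mismatch. The sketch below uses a toy hint function and hypothetical structures; it only illustrates the predict-then-verify split.

```c
/* Sketch of virtual-hint way prediction: forward data from the predicted
 * way, then verify against the physical tag once translation completes.
 * Structures, the hint function, and sizes are hypothetical. */
#include <stdbool.h>
#include <stdint.h>

#define WAYS 4

struct cache_set {
    uint64_t phys_tag[WAYS];
    uint64_t data[WAYS];
};

/* Pick a way from virtual-address bits, before translation is available. */
static unsigned predict_way(uint64_t vaddr)
{
    return (unsigned)((vaddr >> 6) & (WAYS - 1));   /* toy "virtual hint" */
}

/* Speculatively forward data to dependent instructions. */
uint64_t forward_speculative(const struct cache_set *set, uint64_t vaddr,
                             unsigned *predicted_way)
{
    *predicted_way = predict_way(vaddr);
    return set->data[*predicted_way];
}

/* Once the physical address arrives, check the tag of the predicted way.
 * True corresponds to the hit signal; false to the miss signal, after which
 * the caller must invalidate and/or replay dependent instructions. */
bool verify_prediction(const struct cache_set *set, unsigned predicted_way,
                       uint64_t phys_tag)
{
    return set->phys_tag[predicted_way] == phys_tag;
}
```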
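
Patent 7769958 breaks potential livelocks by having a core that receives an intervention message check it against its own pending requests and, on a conflict, direct the cache coherency manager to cancel the core's own request. A hypothetical rendering of that decision (the manager interface here is a stub invented for the sketch):

```c
/* Sketch of the livelock-avoidance check: on an intervention message, a core
 * looks for a conflicting pending request of its own and, if one exists, asks
 * the coherency manager to cancel that request. Types and names are
 * hypothetical, and the manager interface is stubbed. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_PENDING 4

struct pending_request {
    bool     valid;
    uint64_t cache_line;             /* line targeted by the request */
};

struct core {
    int                    id;
    struct pending_request pending[MAX_PENDING];
};

/* Stand-in for the message that tells the cache coherency manager to drop
 * this core's conflicting request. */
static void coherency_manager_cancel(int core_id, uint64_t cache_line)
{
    printf("core %d: cancel my pending request for line 0x%llx\n",
           core_id, (unsigned long long)cache_line);
}

/* Called when the coherency manager broadcasts an intervention for
 * `cache_line` on behalf of another core's data access request. */
void on_intervention(struct core *c, uint64_t cache_line)
{
    for (int i = 0; i < MAX_PENDING; i++) {
        if (c->pending[i].valid && c->pending[i].cache_line == cache_line) {
            /* Conflict with our own pending request: back off, breaking the
             * request/retry cycle that could otherwise livelock. */
            coherency_manager_cancel(c->id, cache_line);
            c->pending[i].valid = false;
            return;
        }
    }
    /* No conflict: process the intervention normally (omitted). */
}
```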
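
Application 20080082793 detects write-after-write hazards by comparing the register destination of an already-graduated load against that of a load graduating now; on a match, the older load is prevented from altering architectural state. A minimal sketch with hypothetical names:

```c
/* Sketch of the write-after-write check: if a graduating load writes the same
 * destination register as an older, still-outstanding graduated load, the
 * older load must not update architectural state. Names are hypothetical. */
#include <stdbool.h>
#include <stdint.h>

struct graduated_load {
    bool    outstanding;   /* data has not yet returned from memory */
    uint8_t dest_reg;      /* first register destination value */
    bool    squashed;      /* true: barred from altering architectural state */
};

/* `graduating_dest_reg` is the second register destination value broadcast
 * by the graduation unit for the load graduating now. */
void check_waw_hazard(struct graduated_load *older, uint8_t graduating_dest_reg)
{
    if (older->outstanding && older->dest_reg == graduating_dest_reg) {
        older->squashed = true;   /* WAW hazard: drop the older load's write */
    }
}
```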
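
Patent 6883156 annotates relative positions of instantiated cells and later converts them to absolute grid locations before routing. A toy sketch of that conversion, with hypothetical types:

```c
/* Toy sketch of converting annotated relative cell positions (within a
 * hierarchical macro) into absolute grid positions before routing.
 * Types and fields are hypothetical. */
struct grid_pos { int x, y; };

struct placed_cell {
    struct grid_pos relative;   /* annotated position relative to the macro */
    struct grid_pos absolute;   /* assigned absolute grid location */
};

/* Grid assignment: offset every cell by the macro's placed origin. */
void assign_grid_positions(struct placed_cell *cells, int n,
                           struct grid_pos macro_origin)
{
    for (int i = 0; i < n; i++) {
        cells[i].absolute.x = macro_origin.x + cells[i].relative.x;
        cells[i].absolute.y = macro_origin.y + cells[i].relative.y;
    }
}
```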