Patents by Inventor Meng-Bing Yu
Meng-Bing Yu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10768939. Abstract: A load/store unit for a processor, and applications thereof. In an embodiment, the load/store unit includes a load/store queue configured to store information and data associated with a particular class of instructions. Data stored in the load/store queue can be bypassed to dependent instructions. When an instruction belonging to the particular class of instructions graduates and the instruction is associated with a cache miss, control logic causes a pointer to be stored in a load/store graduation buffer that points to an entry in the load/store queue associated with the instruction. The load/store graduation buffer ensures that graduated instructions access a shared resource of the load/store unit in program order. Type: Grant. Filed: March 27, 2019. Date of Patent: September 8, 2020. Assignee: ARM Finance Overseas Limited. Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni
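The mechanism in this abstract can be modeled in a few lines. The sketch below is an illustrative interpretation, not the patented implementation: the load/store queue (LSQ) holds per-instruction information, and the graduation buffer holds only pointers (LSQ indices) for graduated instructions that missed in the cache, so a shared resource can service them strictly in program order. All names are hypothetical.

```python
from collections import deque

class LoadStoreUnit:
    """Toy model: LSQ entries plus a FIFO graduation buffer of pointers."""

    def __init__(self, queue_size):
        self.lsq = [None] * queue_size   # load/store queue entries
        self.graduation_buffer = deque() # FIFO of LSQ indices (pointers)

    def allocate(self, index, info):
        """Record information/data for an instruction in the LSQ."""
        self.lsq[index] = info

    def graduate(self, index, cache_miss):
        """On graduation with a cache miss, store a pointer, not the data."""
        if cache_miss:
            self.graduation_buffer.append(index)

    def service_next(self):
        """The shared resource services graduated misses in program order."""
        if not self.graduation_buffer:
            return None
        return self.lsq[self.graduation_buffer.popleft()]

lsu = LoadStoreUnit(queue_size=4)
lsu.allocate(0, "load A")
lsu.allocate(1, "load B")
lsu.graduate(0, cache_miss=True)
lsu.graduate(1, cache_miss=True)
assert lsu.service_next() == "load A"  # program order preserved
assert lsu.service_next() == "load B"
```

Because the FIFO stores indices rather than copies of the queued data, the ordering guarantee costs only a small pointer buffer.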
-
Patent number: 10430340. Abstract: A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed. Type: Grant. Filed: March 23, 2017. Date of Patent: October 1, 2019. Assignee: ARM Finance Overseas Limited. Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni
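The predict-then-verify flow this abstract describes can be sketched as follows. This is a hedged illustration under assumed names (`access`, `cache_set`, `predicted_way`), not the actual microarchitecture: data from the predicted way is forwarded before translation completes, and the late physical-tag comparison decides between a hit signal and a miss signal that triggers replay.

```python
def access(cache_set, predicted_way, physical_tag):
    """Forward data speculatively from the predicted way, then verify.

    cache_set: list of (tag, data) pairs, one per way.
    Returns (data, hit). hit=False means dependent instructions that
    already consumed `data` must be invalidated and/or replayed.
    """
    tag, data = cache_set[predicted_way]  # forwarded early (speculative)
    hit = (tag == physical_tag)           # late check once PA is known
    return data, hit

cache_set = [(0x10, "a"), (0x20, "b"), (0x30, "c"), (0x40, "d")]

data, hit = access(cache_set, predicted_way=1, physical_tag=0x20)
assert hit and data == "b"   # prediction verified: hit signal

data, hit = access(cache_set, predicted_way=1, physical_tag=0x30)
assert not hit               # miss signal: replay dependent instructions
```

The win is latency: consumers start executing on the forwarded data a cycle or more before the physical address is known, at the cost of an occasional replay when the prediction is wrong.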
-
Publication number: 20190220283. Abstract: A load/store unit for a processor, and applications thereof. In an embodiment, the load/store unit includes a load/store queue configured to store information and data associated with a particular class of instructions. Data stored in the load/store queue can be bypassed to dependent instructions. When an instruction belonging to the particular class of instructions graduates and the instruction is associated with a cache miss, control logic causes a pointer to be stored in a load/store graduation buffer that points to an entry in the load/store queue associated with the instruction. The load/store graduation buffer ensures that graduated instructions access a shared resource of the load/store unit in program order. Type: Application. Filed: March 27, 2019. Publication date: July 18, 2019. Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni
-
Patent number: 10268481. Abstract: A load/store unit for a processor, and applications thereof. In an embodiment, the load/store unit includes a load/store queue configured to store information and data associated with a particular class of instructions. Data stored in the load/store queue can be bypassed to dependent instructions. When an instruction belonging to the particular class of instructions graduates and the instruction is associated with a cache miss, control logic causes a pointer to be stored in a load/store graduation buffer that points to an entry in the load/store queue associated with the instruction. The load/store graduation buffer ensures that graduated instructions access a shared resource of the load/store unit in program order. Type: Grant. Filed: March 12, 2018. Date of Patent: April 23, 2019. Assignee: ARM Finance Overseas Limited. Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni
-
Publication number: 20180203702. Abstract: A load/store unit for a processor, and applications thereof. In an embodiment, the load/store unit includes a load/store queue configured to store information and data associated with a particular class of instructions. Data stored in the load/store queue can be bypassed to dependent instructions. When an instruction belonging to the particular class of instructions graduates and the instruction is associated with a cache miss, control logic causes a pointer to be stored in a load/store graduation buffer that points to an entry in the load/store queue associated with the instruction. The load/store graduation buffer ensures that graduated instructions access a shared resource of the load/store unit in program order. Type: Application. Filed: March 12, 2018. Publication date: July 19, 2018. Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni
-
Patent number: 10019381. Abstract: In one embodiment, a microprocessor is provided. The microprocessor includes a cache that is controlled by a cache controller. The cache controller is configured to replace cachelines in the cache based on a replacement scheme that prioritizes the replacement of cachelines that are less likely to cause roll back of a transaction of the microprocessor. Type: Grant. Filed: May 1, 2012. Date of Patent: July 10, 2018. Assignee: Nvidia Corporation. Inventor: Meng-Bing Yu
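One way to read this abstract is as a victim-selection policy that avoids evicting lines belonging to an in-flight transaction, since losing transactional state forces a roll back. The sketch below is an illustrative assumption about how such a policy could rank candidates; the field names and tie-breaking are hypothetical, not Nvidia's actual scheme.

```python
def pick_victim(lines):
    """Choose a cacheline to evict from a set.

    lines: list of dicts with 'addr' and a 'transactional' flag marking
    lines touched by an active transaction. Prefer a non-transactional
    victim; only fall back to transactional state when there is no choice.
    """
    for line in lines:
        if not line["transactional"]:
            return line["addr"]
    # No safe candidate: this eviction may force the transaction to roll back.
    return lines[0]["addr"]

cache_set = [
    {"addr": 0x100, "transactional": True},
    {"addr": 0x140, "transactional": False},
]
assert pick_victim(cache_set) == 0x140  # transaction state is preserved
```

Prioritizing non-transactional victims keeps transactions alive longer, trading slightly worse locality for fewer expensive roll backs.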
-
Patent number: 10019373. Abstract: A memory management method includes: checking shared virtual memory (SVM) support ability of at least one device participating in data access of a buffer; referring to a checking result to adaptively select an SVM mode; and allocating the buffer in a physical memory region of a memory device, and configuring the buffer to operate in the selected SVM mode. Type: Grant. Filed: August 23, 2015. Date of Patent: July 10, 2018. Assignee: MEDIATEK INC. Inventors: Dz-Ching Ju, Meng-Bing Yu, Yun-Ching Li
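The claimed method is a check-then-select flow: query each participating device's SVM capability, then adaptively pick a mode every device can support before allocating the buffer. The sketch below is a hedged illustration; the mode names (borrowed loosely from OpenCL-style SVM tiers) and their ordering are assumptions, not MediaTek's API.

```python
# Hypothetical SVM modes, ordered from weakest to strongest support.
SVM_MODES = ["none", "coarse_grain", "fine_grain_buffer", "fine_grain_system"]

def select_svm_mode(device_support):
    """Adaptively select an SVM mode for a shared buffer.

    device_support: the highest SVM mode each participating device
    reports. The strongest mode usable by *all* devices is the weakest
    of their individual maximums.
    """
    return min(device_support, key=SVM_MODES.index)

# Three devices with differing capabilities share one buffer.
devices = ["fine_grain_system", "coarse_grain", "fine_grain_buffer"]
assert select_svm_mode(devices) == "coarse_grain"
```

The buffer would then be allocated in physical memory and configured for the selected mode, so no participant is asked to operate beyond its capability.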
-
Patent number: 9946547. Abstract: A load/store unit for a processor, and applications thereof. In an embodiment, the load/store unit includes a load/store queue configured to store information and data associated with a particular class of instructions. Data stored in the load/store queue can be bypassed to dependent instructions. When an instruction belonging to the particular class of instructions graduates and the instruction is associated with a cache miss, control logic causes a pointer to be stored in a load/store graduation buffer that points to an entry in the load/store queue associated with the instruction. The load/store graduation buffer ensures that graduated instructions access a shared resource of the load/store unit in program order. Type: Grant. Filed: September 29, 2006. Date of Patent: April 17, 2018. Assignee: ARM Finance Overseas Limited. Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni
-
Publication number: 20170192894. Abstract: A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed. Type: Application. Filed: March 23, 2017. Publication date: July 6, 2017. Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni
-
Patent number: 9632939. Abstract: A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed. Type: Grant. Filed: June 25, 2015. Date of Patent: April 25, 2017. Assignee: ARM Finance Overseas Limited. Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni, Karagada Ramarao Kishore
-
Publication number: 20160179686. Abstract: A memory management method includes: checking shared virtual memory (SVM) support ability of at least one device participating in data access of a buffer; referring to a checking result to adaptively select an SVM mode; and allocating the buffer in a physical memory region of a memory device, and configuring the buffer to operate in the selected SVM mode. Type: Application. Filed: August 23, 2015. Publication date: June 23, 2016. Inventors: Dz-Ching Ju, Meng-Bing Yu, Yun-Ching Li
-
Publication number: 20150293853. Abstract: A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed. Type: Application. Filed: June 25, 2015. Publication date: October 15, 2015. Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni, Vidya Rajagopalan
-
Patent number: 9092343. Abstract: A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed. Type: Grant. Filed: September 21, 2009. Date of Patent: July 28, 2015. Assignee: ARM Finance Overseas Limited. Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni, Vidya Rajagopalan
-
Publication number: 20130297876. Abstract: In one embodiment, a microprocessor is provided. The microprocessor includes a cache that is controlled by a cache controller. The cache controller is configured to replace cachelines in the cache based on a replacement scheme that prioritizes the replacement of cachelines that are less likely to cause roll back of a transaction of the microprocessor. Type: Application. Filed: May 1, 2012. Publication date: November 7, 2013. Applicant: NVIDIA Corporation. Inventor: Meng-Bing Yu
-
Publication number: 20100011166. Abstract: A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed. Type: Application. Filed: September 21, 2009. Publication date: January 14, 2010. Applicant: MIPS Technologies, Inc. Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni, Vidya Rajagopalan
-
Patent number: 7594079. Abstract: A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed. Type: Grant. Filed: October 11, 2006. Date of Patent: September 22, 2009. Assignee: MIPS Technologies, Inc. Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni, Vidya Rajagopalan
-
Publication number: 20080082793. Abstract: Apparatuses, systems, and methods for detecting and preventing write-after-write hazards, and applications thereof. In an embodiment, a load/store queue of a processor stores a first register destination value associated with a graduated load instruction. A graduation unit of the processor broadcasts a second register destination value associated with a graduating load instruction. Control logic coupled to the load/store queue and the graduation unit compares the first register destination value to the second register destination value. If the first register destination value and the second register destination value match, the control logic prevents the graduated load instruction from altering an architectural state of the processor. Type: Application. Filed: September 29, 2006. Publication date: April 3, 2008. Applicant: MIPS Technologies, Inc. Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni, Karagada Ramarao Kishore
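The comparison this abstract describes can be condensed into a small check. This is an illustrative model only (the function and variable names are hypothetical): when a load graduates, its destination register is broadcast and compared against the destination registers of older graduated loads still waiting on miss data; on a match, the older load's writeback would clobber the younger value, so it must be suppressed.

```python
def check_waw(graduated_dest_regs, graduating_dest_reg):
    """Detect write-after-write hazards at graduation time.

    graduated_dest_regs: destination registers of older graduated loads
    whose data has not yet returned. Returns the registers whose pending
    writeback must be prevented from altering architectural state.
    """
    return {r for r in graduated_dest_regs if r == graduating_dest_reg}

# An older graduated load to r5 is still waiting on its cache miss; a
# younger load to r5 graduates now, so the older writeback is squashed.
suppressed = check_waw(graduated_dest_regs=[5, 7], graduating_dest_reg=5)
assert suppressed == {5}
assert check_waw([5, 7], 9) == set()  # no conflict: nothing suppressed
```

Without this check, the older load's late-arriving data could overwrite the younger load's result, leaving stale data architecturally visible.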
-
Publication number: 20080082721. Abstract: A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed. Type: Application. Filed: October 11, 2006. Publication date: April 3, 2008. Applicant: MIPS Technologies, Inc. Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni, Vidya Rajagopalan
-
Publication number: 20080082794. Abstract: A load/store unit for a processor, and applications thereof. In an embodiment, the load/store unit includes a load/store queue configured to store information and data associated with a particular class of instructions. Data stored in the load/store queue can be bypassed to dependent instructions. When an instruction belonging to the particular class of instructions graduates and the instruction is associated with a cache miss, control logic causes a pointer to be stored in a load/store graduation buffer that points to an entry in the load/store queue associated with the instruction. The load/store graduation buffer ensures that graduated instructions access a shared resource of the load/store unit in program order. Type: Application. Filed: September 29, 2006. Publication date: April 3, 2008. Applicant: MIPS Technologies, Inc. Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni
-
Patent number: 5604454. Abstract: An integrated circuit (20) includes multiple output buffers (30, 50, 70) which switch substantially simultaneously. The output buffers (30, 50, 70) are connected together via a common node (25). Before any one of the output buffers (30, 50, 70) actively drives its corresponding output node to an appropriate logic state, a coupling circuit (42) in the output buffer (30) evaluates whether the new logic state matches the old logic state. If the coupling circuit (42) determines that the logic states are different, then it couples the output node to the common node (25). With each output buffer in the group of multiple output buffers (30, 50, 70) functioning similarly, energy is conserved by using the charge stored in the low-going nodes to charge up the high-going nodes. Type: Grant. Filed: September 29, 1995. Date of Patent: February 18, 1997. Assignee: Motorola Inc. Inventors: Jeffrey E. Maguire, Meng-Bing Yu
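The energy-saving idea above is plain charge conservation, which a back-of-the-envelope calculation can illustrate. This is a simplified model with assumed component values, not the circuit claimed in the patent: a low-going output node and a high-going output node are briefly shorted through the common node, so they settle at the average voltage before the buffers finish the drive from the supply rails.

```python
C = 10e-12  # assumed load capacitance per output node (farads)
VDD = 3.3   # assumed supply voltage (volts)

# Two equal capacitors connected together settle at the average voltage
# (charge conservation: C*V1 + C*V2 = 2C*V_shared).
v_low_going, v_high_going = VDD, 0.0  # old logic states before switching
v_shared = (C * v_low_going + C * v_high_going) / (2 * C)
assert abs(v_shared - VDD / 2) < 1e-9

# After sharing, the high-going node only needs to be driven from VDD/2
# up to VDD (and the low-going node from VDD/2 down to 0), so less
# switching energy is drawn from the supply than for a full rail swing.
energy_full_swing = 0.5 * C * VDD**2
energy_after_share = 0.5 * C * (VDD - v_shared)**2
assert energy_after_share < energy_full_swing
```

The coupling circuit only performs this sharing when the new logic state differs from the old one, since nodes that are not switching have no charge to contribute or receive.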