Patents by Inventor Patrick Lu
Patrick Lu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12271308
Abstract: Examples provide an application program interface, or manner of negotiating the locking (pinning) or unlocking (unpinning) of a cache region, by which an application, software, or hardware can have a cache region locked (e.g., pinned) or unlocked (e.g., unpinned). A cache region can be part of a level-1, level-2, lower or last level cache (LLC), or translation lookaside buffer (TLB). A cache lock controller can respond to a request to lock or unlock a region of cache or TLB by indicating that the request is successful or not successful. If a request is not successful, the controller can provide feedback indicating one or more aspects of the request that are not permitted. The application, software, or hardware can submit another request, a modified request, based on the feedback to attempt to lock a portion of the cache or TLB.
Type: Grant
Filed: December 28, 2023
Date of Patent: April 8, 2025
Assignee: Intel Corporation
Inventors: Andrew J. Herdrich, Priya Autee, Abhishek Khade, Patrick Lu, Edwin Verplanke, Vivekananthan Sanjeepan
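The interface this patent family describes is a negotiation: a requester asks the cache lock controller to pin a region, the controller either grants the request or returns feedback about what was not permitted, and the requester retries with a modified request. The C sketch below illustrates that flow only; the structure names, fields, and the ways-based rejection policy are assumptions for illustration, not the interface defined in the patent.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical request to pin a region of cache or TLB. */
struct cache_lock_request {
    uint8_t  cache_level;   /* 1 = L1, 2 = L2, 3 = LLC, 4 = TLB */
    uint32_t num_ways;      /* ways requested for the pinned region */
    uint32_t num_sets;      /* sets requested for the pinned region */
};

/* Hypothetical feedback returned when a request is rejected. */
struct cache_lock_feedback {
    bool     too_many_ways; /* requested ways exceed the allowed maximum */
    uint32_t max_ways;      /* largest way count the controller permits */
};

/* Stand-in for the cache lock controller: reject requests asking for more
 * than half of an 8-way cache, and explain why in the feedback. */
static bool cache_lock(const struct cache_lock_request *req,
                       struct cache_lock_feedback *fb)
{
    const uint32_t max_ways = 4;
    if (req->num_ways > max_ways) {
        fb->too_many_ways = true;
        fb->max_ways = max_ways;
        return false;
    }
    fb->too_many_ways = false;
    return true;
}

int main(void)
{
    struct cache_lock_request req = { .cache_level = 3, .num_ways = 6, .num_sets = 64 };
    struct cache_lock_feedback fb;

    /* First attempt fails; the feedback says which aspect was not permitted. */
    if (!cache_lock(&req, &fb) && fb.too_many_ways) {
        /* Submit a modified request based on the feedback. */
        req.num_ways = fb.max_ways;
        if (cache_lock(&req, &fb))
            printf("region pinned with %u ways\n", (unsigned)req.num_ways);
    }
    return 0;
}
```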
-
Patent number: 12235761
Abstract: Examples provide an application program interface, or manner of negotiating the locking (pinning) or unlocking (unpinning) of a cache region, by which an application, software, or hardware can have a cache region locked (e.g., pinned) or unlocked (e.g., unpinned). A cache region can be part of a level-1, level-2, lower or last level cache (LLC), or translation lookaside buffer (TLB). A cache lock controller can respond to a request to lock or unlock a region of cache or TLB by indicating that the request is successful or not successful. If a request is not successful, the controller can provide feedback indicating one or more aspects of the request that are not permitted. The application, software, or hardware can submit another request, a modified request, based on the feedback to attempt to lock a portion of the cache or TLB.
Type: Grant
Filed: July 17, 2019
Date of Patent: February 25, 2025
Assignee: Intel Corporation
Inventors: Andrew J. Herdrich, Priya Autee, Abhishek Khade, Patrick Lu, Edwin Verplanke, Vivekananthan Sanjeepan
-
Publication number: 20240232078
Abstract: Examples provide an application program interface, or manner of negotiating the locking (pinning) or unlocking (unpinning) of a cache region, by which an application, software, or hardware can have a cache region locked (e.g., pinned) or unlocked (e.g., unpinned). A cache region can be part of a level-1, level-2, lower or last level cache (LLC), or translation lookaside buffer (TLB). A cache lock controller can respond to a request to lock or unlock a region of cache or TLB by indicating that the request is successful or not successful. If a request is not successful, the controller can provide feedback indicating one or more aspects of the request that are not permitted. The application, software, or hardware can submit another request, a modified request, based on the feedback to attempt to lock a portion of the cache or TLB.
Type: Application
Filed: December 28, 2023
Publication date: July 11, 2024
Inventors: Andrew J. HERDRICH, Priya AUTEE, Abhishek KHADE, Patrick LU, Edwin VERPLANKE, Vivekananthan SANJEEPAN
-
Patent number: 11886884
Abstract: In an embodiment, a processor includes a branch prediction circuit and a plurality of processing engines. The branch prediction circuit is to: detect a coherence operation associated with a first memory address; identify a first branch instruction associated with the first memory address; and predict a direction for the identified branch instruction based on the detected coherence operation. Other embodiments are described and claimed.
Type: Grant
Filed: November 12, 2019
Date of Patent: January 30, 2024
Assignee: Intel Corporation
Inventors: Christopher Wilkerson, Binh Pham, Patrick Lu, Jared Warner Stark, IV
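As a rough behavioral model of the idea (a coherence operation on a memory address serving as a hint for the direction of a branch whose condition depends on that address), the sketch below keeps a small table keyed by data address. The table layout, the indexing, and the hint-flipping policy are all assumptions made for illustration; they are not the claimed circuit.

```c
#include <stdbool.h>
#include <stdint.h>

#define PRED_ENTRIES 256

/* Hypothetical predictor entry: a branch PC associated with a data address,
 * plus a predicted direction that coherence traffic can update. */
struct coherence_pred_entry {
    uint64_t data_addr;   /* memory address the branch condition depends on */
    uint64_t branch_pc;   /* first branch instruction tied to that address */
    bool     taken_hint;  /* predicted direction after a coherence event */
};

static struct coherence_pred_entry table[PRED_ENTRIES];

static unsigned index_of(uint64_t addr)
{
    return (unsigned)((addr >> 6) & (PRED_ENTRIES - 1));
}

/* A coherence operation (e.g., an invalidation from another core) on
 * data_addr suggests the guarded value changed: flip the stored hint. */
void on_coherence_op(uint64_t data_addr)
{
    struct coherence_pred_entry *e = &table[index_of(data_addr)];
    if (e->data_addr == data_addr)
        e->taken_hint = !e->taken_hint;
}

/* Predict the direction of a branch whose condition loads data_addr. */
bool predict(uint64_t branch_pc, uint64_t data_addr)
{
    struct coherence_pred_entry *e = &table[index_of(data_addr)];
    if (e->data_addr != data_addr) {   /* first encounter: install the entry */
        e->data_addr = data_addr;
        e->branch_pc = branch_pc;
        e->taken_hint = false;
    }
    return e->taken_hint;
}
```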
-
Publication number: 20230300989
Abstract: A circuit board etching device for improving the etching factor, comprising: a circuit board conveying device, which laterally conveys a circuit board; a circuit forming etching tank and a first etchant spraying unit, where the first etchant spraying unit sprays a first etchant on the etching surface of the circuit board; and a circuit modification etching tank and a second etchant spraying unit, where the second etchant spraying unit sprays a second etchant on the etching surface of the circuit board and makes the etching speed of the circuit modification etching tank slower than the etching speed of the circuit forming etching tank, thereby improving the etching factor of the circuit board.
Type: Application
Filed: October 14, 2022
Publication date: September 21, 2023
Inventors: PATRICK LU, CHIH WEI LU
-
Publication number: 20230250432
Abstract: Compositions and methods for treating hepatocellular carcinoma (HCC) using siRNA molecules are provided. The compositions advantageously are administered in nanoparticle form, where the nanoparticles also contain a histidine-lysine copolymer ("HKP"). In specific embodiments, the composition contains an siRNA molecule that targets TGF-β1, an siRNA molecule that targets Cox-2, and an HKP copolymer.
Type: Application
Filed: July 18, 2022
Publication date: August 10, 2023
Inventors: Michael MOLYNEAUX, Patrick LU
-
Patent number: 10977036
Abstract: An apparatus is described. The apparatus includes main memory control logic circuitry comprising prefetch intelligence logic circuitry. The prefetch intelligence circuitry is to determine, from a read result of a load instruction, an address for a dependent load that is dependent on the read result, and to direct a read request for the dependent load to a main memory to fetch the dependent load's data.
Type: Grant
Filed: September 30, 2016
Date of Patent: April 13, 2021
Assignee: Intel Corporation
Inventors: Patrick Lu, Karthik Kumar, Thomas Willhalm, Francesc Guim Bernat, Martin P. Dimitrov
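The behavior described is essentially memory-side pointer chasing: the read result of one load contains the address of the next (dependent) load, so the dependent load's data can be fetched early. A software analogue using the GCC/Clang __builtin_prefetch intrinsic is sketched below; the linked-list shape and helper name are illustrative assumptions, and the patent's mechanism lives in the memory controller rather than in application code.

```c
#include <stddef.h>
#include <stdint.h>

/* Node in a linked structure: the value at one address (the read result)
 * contains the address of the dependent load. */
struct node {
    struct node *next;   /* dependent-load address embedded in the read result */
    uint64_t     payload;
};

/* Software analogue of the memory-side logic: once the read result of the
 * current load is available, derive the dependent load's address and issue
 * a prefetch for it before it is actually demanded. */
uint64_t walk_with_dependent_prefetch(struct node *head)
{
    uint64_t sum = 0;
    for (struct node *n = head; n != NULL; n = n->next) {
        if (n->next != NULL)
            __builtin_prefetch(n->next, 0 /* read */, 1 /* low temporal locality */);
        sum += n->payload;
    }
    return sum;
}
```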
-
Patent number: 10949313
Abstract: A network controller, including: a processor; and a resource permission engine to: provision a composite node including a processor and a first disaggregated compute resource (DCR) remote from the processor, the first DCR to access a target resource; determine that the first DCR has failed; provision a second DCR for the composite node, the second DCR to access the target resource; and instruct the target resource to revoke a permission for the first DCR and grant the permission to the second DCR.
Type: Grant
Filed: June 28, 2017
Date of Patent: March 16, 2021
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Susanne M. Balle, Daniel Rivas Barragan, Patrick Lu
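A minimal sketch of the failover sequence in this abstract: detect that the first disaggregated compute resource (DCR) has failed, provision a replacement, and move the target resource's permission from the failed DCR to the new one. All types and function names here are invented stand-ins, not the controller's actual interfaces.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical handle for a disaggregated compute resource (DCR). */
struct dcr {
    int  id;
    bool failed;
};

/* Hypothetical target resource that tracks which DCR may access it. */
struct target_resource {
    int permitted_dcr_id;   /* -1 means no DCR currently holds permission */
};

static void revoke_permission(struct target_resource *t) { t->permitted_dcr_id = -1; }

static void grant_permission(struct target_resource *t, const struct dcr *d)
{
    t->permitted_dcr_id = d->id;
}

/* Resource permission engine: on failure of the first DCR, move the target's
 * permission from the first DCR to its provisioned replacement. */
void handle_dcr_failure(struct dcr *first, struct dcr *second,
                        struct target_resource *target)
{
    if (!first->failed)
        return;
    revoke_permission(target);        /* instruct the target to revoke */
    grant_permission(target, second); /* and grant to the replacement  */
    printf("permission moved from DCR %d to DCR %d\n", first->id, second->id);
}
```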
-
Publication number: 20210042228
Abstract: Examples provide a system that includes at least one processor; a cache; a memory; an interface to copy data from a received packet to the memory or the at least one cache; and a controller to manage use of at least one region of the cache. In some examples, the controller is to: indicate availability of a cache region reservation feature; receive a request to reserve a region of the cache from a requester; and, based on the requested region being permitted to be reserved by the requester, solely allow the requester to write data to at least a portion of the reserved region. In some examples, the controller is to write to a register to indicate availability of a cache region reservation feature. In some examples, the request to reserve a region of the cache from a requester comprises a specification of a number of sets, a number of ways, and a class of service.
Type: Application
Filed: October 13, 2020
Publication date: February 11, 2021
Inventors: Andrew J. HERDRICH, Priya AUTEE, Abhishek KHADE, Patrick LU, Edwin VERPLANKE, Vedvyas SHANBHOGUE
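This publication adds two details to the cache reservation interface: a register that advertises whether the reservation feature is available, and a request expressed as a number of sets, a number of ways, and a class of service. The sketch below models that flow; the register bit, the class-of-service limit, and the structure names are assumptions for illustration only.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical capability register: bit 0 advertises that the cache
 * region reservation feature is available. */
#define CACHE_RSVD_CAP_AVAILABLE (1u << 0)
static uint32_t cache_capability_reg = CACHE_RSVD_CAP_AVAILABLE;

/* Reservation request as described: sets, ways, and a class of service. */
struct cache_region_request {
    uint32_t num_sets;
    uint32_t num_ways;
    uint32_t class_of_service;
};

/* Grant record: a granted region is writable solely by its owner. */
struct cache_region_grant {
    bool     granted;
    uint32_t owner_id;   /* only this requester may write the region */
};

/* Controller model: refuse if the feature is not advertised or the class of
 * service is outside the range this model permits (0..3). */
struct cache_region_grant reserve_region(uint32_t requester_id,
                                         const struct cache_region_request *req)
{
    struct cache_region_grant g = { .granted = false, .owner_id = 0 };
    if (!(cache_capability_reg & CACHE_RSVD_CAP_AVAILABLE))
        return g;                  /* feature not advertised in the register */
    if (req->class_of_service > 3)
        return g;                  /* class of service not permitted */
    g.granted = true;
    g.owner_id = requester_id;     /* solely this requester may write the region */
    return g;
}
```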
-
Publication number: 20200401538
Abstract: An integrated circuit includes technology for generating input/output (I/O) latency metrics. The integrated circuit includes a real-time clock (RTC), a read measurement register, and a read latency measurement module. The read latency measurement module includes control logic to perform operations comprising (a) in response to receipt of read responses that complete read requests associated with an I/O device, automatically calculating read latencies for the completed read requests, based at least in part on time measurements from the RTC for initiation and completion of the read requests; (b) automatically calculating an average read latency for the completed read requests, based at least in part on the calculated read latencies for the completed read requests; and (c) automatically updating the read measurement register to record the average read latency for the completed read requests. Other embodiments are described and claimed.
Type: Application
Filed: June 19, 2019
Publication date: December 24, 2020
Inventors: Garrett Matthias Drown, Patrick Lu
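A software model of the bookkeeping the abstract describes: record an RTC timestamp when a read request is initiated, compute the latency when the matching read response arrives, maintain a running average, and mirror that average into the read measurement register. The tag-indexed table, the tick-per-query clock, and all names are simplifications assumed for illustration.

```c
#include <stdint.h>

/* Stand-in for the real-time clock: advances one tick per query in this model. */
static uint64_t rtc_ticks;
static uint64_t rtc_now(void) { return ++rtc_ticks; }

/* Hypothetical read measurement register holding the average read latency. */
static uint64_t read_measurement_reg;

#define MAX_OUTSTANDING 64
static uint64_t start_time[MAX_OUTSTANDING];   /* indexed by request tag */
static uint64_t latency_sum;
static uint64_t completed_reads;

/* Record the RTC value when a read request to the I/O device is initiated. */
void on_read_request(unsigned tag)
{
    start_time[tag % MAX_OUTSTANDING] = rtc_now();
}

/* On the read response that completes the request: compute the latency,
 * update the running average, and write it to the measurement register. */
void on_read_response(unsigned tag)
{
    uint64_t latency = rtc_now() - start_time[tag % MAX_OUTSTANDING];
    latency_sum += latency;
    completed_reads += 1;
    read_measurement_reg = latency_sum / completed_reads;
}
```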
-
Patent number: 10853283
Abstract: An integrated circuit includes technology for generating input/output (I/O) latency metrics. The integrated circuit includes a real-time clock (RTC), a read measurement register, and a read latency measurement module. The read latency measurement module includes control logic to perform operations comprising (a) in response to receipt of read responses that complete read requests associated with an I/O device, automatically calculating read latencies for the completed read requests, based at least in part on time measurements from the RTC for initiation and completion of the read requests; (b) automatically calculating an average read latency for the completed read requests, based at least in part on the calculated read latencies for the completed read requests; and (c) automatically updating the read measurement register to record the average read latency for the completed read requests. Other embodiments are described and claimed.
Type: Grant
Filed: June 19, 2019
Date of Patent: December 1, 2020
Assignee: Intel Corporation
Inventors: Garrett Matthias Drown, Patrick Lu
-
Patent number: 10768349
Abstract: A reflective diffraction grating and a fabrication method are provided. The reflective diffraction grating includes a substrate, a UV-absorbing layer, a grating layer having a binary surface-relief pattern formed therein, and a conforming reflective layer. Advantageously, the UV-absorbing layer absorbs light at a UV recording wavelength to minimize reflection thereof by the substrate during holographic patterning at the UV recording wavelength.
Type: Grant
Filed: June 30, 2017
Date of Patent: September 8, 2020
Assignee: Lumentum Operations LLC
Inventors: John Michael Miller, Hery Djie, Patrick Lu, Xiaowei Guo, Qinghong Du, Eddie Chiu, Chester Murley
-
Patent number: 10621097
Abstract: Devices and systems having memory-side adaptive prefetch decision-making, including associated methods, are disclosed and described. Adaptive information can be provided to memory-side controller and prefetch components that allows such memory-side components to prefetch data in a manner that is adaptive with respect to a particular read memory request or to a thread performing read memory requests.
Type: Grant
Filed: June 30, 2017
Date of Patent: April 14, 2020
Assignee: Intel Corporation
Inventors: Karthik Kumar, Thomas Willhalm, Patrick Lu, Francesc Guim Bernat, Shrikant M. Shah
-
Patent number: 10613999
Abstract: Techniques and mechanisms for providing a shared memory which spans an interconnect fabric coupled between compute nodes. In an embodiment, a field-programmable gate array (FPGA) of a first compute node requests access to a memory resource of another compute node, where the memory resource is registered as part of the shared memory. In a response to the request, the first FPGA receives data from a fabric interface which couples the first compute node to an interconnect fabric. Circuitry of the first FPGA performs an operation, based on the data, independent of any requirement that the data first be stored to a shared memory location which is at the first compute node. In another embodiment, the fabric interface includes a cache agent to provide cache data and to provide cache coherency with one or more other compute nodes.
Type: Grant
Filed: January 12, 2018
Date of Patent: April 7, 2020
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Thomas Willhalm, Karthik Kumar, Daniel Rivas Barragan, Patrick Lu
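Loosely modeling the data path described here: a consumer on one compute node reads from a remote, fabric-registered memory region and operates on the returned bytes directly, with no requirement to first stage them in a shared-memory location on the local node. Every type and function below, including fabric_read, is a made-up stand-in for the fabric interface, not the patented hardware.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical descriptor for a memory resource registered into the
 * fabric-spanning shared memory. */
struct shared_region {
    uint16_t home_node;   /* compute node that hosts the memory */
    uint64_t base_addr;   /* address within that node's memory  */
    uint64_t length;
};

/* Stand-in for the fabric interface read: fetch bytes from the remote
 * region into a caller-supplied scratch buffer. */
static void fabric_read(const struct shared_region *r, uint64_t offset,
                        void *dst, uint64_t len)
{
    /* In this model the "remote" memory is just a local array. */
    static uint8_t remote_memory[4096];
    (void)r;
    memcpy(dst, &remote_memory[offset], len);
}

/* FPGA-style consumer: operate on the fetched data directly, independent of
 * any requirement to first store it to a shared-memory location on this node. */
uint64_t sum_remote_words(const struct shared_region *r, uint64_t count)
{
    uint64_t words[64];
    uint64_t total = 0;
    if (count > 64)
        count = 64;
    fabric_read(r, 0, words, count * sizeof(words[0]));
    for (uint64_t i = 0; i < count; i++)
        total += words[i];
    return total;
}
```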
-
Publication number: 20200081718
Abstract: In an embodiment, a processor includes a branch prediction circuit and a plurality of processing engines. The branch prediction circuit is to: detect a coherence operation associated with a first memory address; identify a first branch instruction associated with the first memory address; and predict a direction for the identified branch instruction based on the detected coherence operation. Other embodiments are described and claimed.
Type: Application
Filed: November 12, 2019
Publication date: March 12, 2020
Inventors: Christopher Wilkerson, Binh Pham, Patrick Lu, Jared Warner Stark, IV
-
Patent number: 10558574
Abstract: There is disclosed in one example a computing apparatus, including: a cache; a caching agent (CA); an integrated input/output (IIO) block to provide a cache coherent interface to a peripheral device at a first speed; a core configured to poll an address within the cache via the CA, wherein the address is to receive incoming data from the peripheral device via the IIO, and wherein the core is capable of polling the address at a second speed substantially greater than the first speed; and a hardware uncore agent configured to: identify a collision between the core and the IIO, including determining that the core is polling the address at a rate that is determined to interfere with access to the address by the IIO; and throttle the core's access to the address.
Type: Grant
Filed: May 30, 2018
Date of Patent: February 11, 2020
Assignee: Intel Corporation
Inventors: Abhishek Khade, Patrick Lu, Francesc Guim Bernat
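The uncore behavior described (detect that a core is polling an address so fast that it interferes with the slower I/O block's writes to that address, then throttle the core's access) can be approximated as below. The measurement window, the collision threshold, and the callback-based throttle are assumptions for illustration, not the hardware mechanism.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-address polling tracker kept by an uncore agent. */
struct poll_tracker {
    uint64_t polls_this_window;
    uint64_t window_start_tick;
};

#define WINDOW_TICKS    1000u
#define COLLISION_POLLS 100u   /* polling faster than this interferes with I/O */

/* Record one core poll of the monitored address and report whether the
 * polling rate has crossed the collision threshold. */
static bool poll_collides(struct poll_tracker *t, uint64_t now)
{
    if (now - t->window_start_tick >= WINDOW_TICKS) {
        t->window_start_tick = now;     /* start a new measurement window */
        t->polls_this_window = 0;
    }
    t->polls_this_window += 1;
    return t->polls_this_window > COLLISION_POLLS;
}

/* Core-side loop: poll the address for incoming data, but back off when the
 * uncore agent signals that polling is colliding with the IIO block. */
void poll_for_data(volatile uint64_t *addr, struct poll_tracker *t,
                   uint64_t (*tick)(void), void (*throttle_delay)(void))
{
    while (*addr == 0) {
        if (poll_collides(t, tick()))
            throttle_delay();   /* throttled access: slow the core's polling */
    }
}
```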
-
Patent number: 10521236
Abstract: In an embodiment, a processor includes a branch prediction circuit and a plurality of processing engines. The branch prediction circuit is to: detect a coherence operation associated with a first memory address; identify a first branch instruction associated with the first memory address; and predict a direction for the identified branch instruction based on the detected coherence operation. Other embodiments are described and claimed.
Type: Grant
Filed: March 29, 2018
Date of Patent: December 31, 2019
Assignee: Intel Corporation
Inventors: Christopher Wilkerson, Binh Pham, Patrick Lu, Jared Warner Stark, IV
-
Publication number: 20190340123
Abstract: Examples provide an application program interface, or manner of negotiating the locking (pinning) or unlocking (unpinning) of a cache region, by which an application, software, or hardware can have a cache region locked (e.g., pinned) or unlocked (e.g., unpinned). A cache region can be part of a level-1, level-2, lower or last level cache (LLC), or translation lookaside buffer (TLB). A cache lock controller can respond to a request to lock or unlock a region of cache or TLB by indicating that the request is successful or not successful. If a request is not successful, the controller can provide feedback indicating one or more aspects of the request that are not permitted. The application, software, or hardware can submit another request, a modified request, based on the feedback to attempt to lock a portion of the cache or TLB.
Type: Application
Filed: July 17, 2019
Publication date: November 7, 2019
Inventors: Andrew J. HERDRICH, Priya AUTEE, Abhishek KHADE, Patrick LU, Edwin VERPLANKE, Vivekananthan SANJEEPAN
-
Publication number: 20190303162
Abstract: In an embodiment, a processor includes a branch prediction circuit and a plurality of processing engines. The branch prediction circuit is to: detect a coherence operation associated with a first memory address; identify a first branch instruction associated with the first memory address; and predict a direction for the identified branch instruction based on the detected coherence operation. Other embodiments are described and claimed.
Type: Application
Filed: March 29, 2018
Publication date: October 3, 2019
Inventors: Christopher Wilkerson, Binh Pham, Patrick Lu, Jared Warner Stark, IV
-
Publication number: 20190250916
Abstract: An apparatus is described. The apparatus includes main memory control logic circuitry comprising prefetch intelligence logic circuitry. The prefetch intelligence circuitry is to determine, from a read result of a load instruction, an address for a dependent load that is dependent on the read result, and to direct a read request for the dependent load to a main memory to fetch the dependent load's data.
Type: Application
Filed: September 30, 2016
Publication date: August 15, 2019
Inventors: Patrick LU, Karthik KUMAR, Thomas WILLHALM, Francesc GUIM BERNAT, Martin P. DIMITROV