Patents by Inventor Chyi-Chang Miao

Chyi-Chang Miao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11816036
    Abstract: A method and system for performing data movement operations are described herein. One embodiment of a method includes: storing data for a first memory address in a cache line of a memory of a first processing unit, the cache line associated with a coherency state indicating that the memory has sole ownership of the cache line; decoding an instruction for execution by a second processing unit, the instruction comprising a source data operand specifying the first memory address and a destination operand specifying a memory location in the second processing unit; and responsive to executing the decoded instruction, copying data from the cache line of the memory of the first processing unit as identified by the first memory address, to the memory location of the second processing unit, wherein responsive to the copy, the cache line is to remain in the memory and the coherency state is to remain unchanged.
    Type: Grant
    Filed: May 6, 2022
    Date of Patent: November 14, 2023
    Assignee: Intel Corporation
    Inventors: Anil Vasudevan, Venkata Krishnan, Andrew J. Herdrich, Ren Wang, Robert G. Blankenship, Vedaraman Geetha, Shrikant M. Shah, Marshall A. Millier, Raanan Sade, Binh Q. Pham, Olivier Serres, Chyi-Chang Miao, Christopher B. Wilkerson
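
To make the copy semantics in the abstract above more concrete, the following is a minimal behavioral sketch in C++. The 64-byte line size, the simplified MESI-style state names, and identifiers such as copy_line_to_remote are illustrative assumptions and do not come from the patent.

```cpp
// Behavioral sketch only: the source cache line is read, its payload is
// copied to a buffer owned by a second processing unit, and the line's
// coherency state is left untouched (assumed simplified MESI-style states).
#include <array>
#include <cassert>
#include <cstdint>
#include <iostream>
#include <unordered_map>

enum class CoherencyState { Modified, Exclusive, Shared, Invalid };

struct CacheLine {
    CoherencyState state = CoherencyState::Invalid;
    std::array<uint8_t, 64> data{};   // one 64-byte cache line (assumed size)
};

// Cache of the first processing unit, indexed by line-aligned address.
using Cache = std::unordered_map<uint64_t, CacheLine>;

// Copy the line identified by addr into dest, a memory location owned by the
// second processing unit; the source line stays resident, state unchanged.
void copy_line_to_remote(const Cache& src, uint64_t addr,
                         std::array<uint8_t, 64>& dest) {
    dest = src.at(addr).data;         // data is copied; ownership is not moved
}

int main() {
    Cache first_unit_cache;
    first_unit_cache[0x1000].state = CoherencyState::Exclusive;  // sole ownership
    first_unit_cache[0x1000].data[0] = 0x42;

    std::array<uint8_t, 64> second_unit_buffer{};
    copy_line_to_remote(first_unit_cache, 0x1000, second_unit_buffer);

    // The line remains in the first unit's cache and its state is unchanged.
    assert(first_unit_cache[0x1000].state == CoherencyState::Exclusive);
    std::cout << "copied byte: 0x" << std::hex
              << +second_unit_buffer[0] << "\n";
}
```
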
  • Publication number: 20220261351
    Abstract: A method and system for performing data movement operations are described herein. One embodiment of a method includes: storing data for a first memory address in a cache line of a memory of a first processing unit, the cache line associated with a coherency state indicating that the memory has sole ownership of the cache line; decoding an instruction for execution by a second processing unit, the instruction comprising a source data operand specifying the first memory address and a destination operand specifying a memory location in the second processing unit; and responsive to executing the decoded instruction, copying data from the cache line of the memory of the first processing unit as identified by the first memory address, to the memory location of the second processing unit, wherein responsive to the copy, the cache line is to remain in the memory and the coherency state is to remain unchanged.
    Type: Application
    Filed: May 6, 2022
    Publication date: August 18, 2022
    Applicant: Intel Corporation
    Inventors: Anil Vasudevan, Venkata Krishnan, Andrew J. Herdrich, Ren Wang, Robert G. Blankenship, Vedaraman Geetha, Shrikant M. Shah, Marshall A. Millier, Raanan Sade, Binh Q. Pham, Olivier Serres, Chyi-Chang Miao, Christopher B. Wilkerson
  • Patent number: 11327894
    Abstract: A method and system for performing data movement operations are described herein. One embodiment of a method includes: storing data for a first memory address in a cache line of a memory of a first processing unit, the cache line associated with a coherency state indicating that the memory has sole ownership of the cache line; decoding an instruction for execution by a second processing unit, the instruction comprising a source data operand specifying the first memory address and a destination operand specifying a memory location in the second processing unit; and responsive to executing the decoded instruction, copying data from the cache line of the memory of the first processing unit as identified by the first memory address, to the memory location of the second processing unit, wherein responsive to the copy, the cache line is to remain in the memory and the coherency state is to remain unchanged.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: May 10, 2022
    Assignee: Intel Corporation
    Inventors: Anil Vasudevan, Venkata Krishnan, Andrew J. Herdrich, Ren Wang, Robert G. Blankenship, Vedaraman Geetha, Shrikant M. Shah, Marshall A. Millier, Raanan Sade, Binh Q. Pham, Olivier Serres, Chyi-Chang Miao, Christopher B. Wilkerson
  • Publication number: 20210049102
    Abstract: A method and system for performing data movement operations are described herein. One embodiment of a method includes: storing data for a first memory address in a cache line of a memory of a first processing unit, the cache line associated with a coherency state indicating that the memory has sole ownership of the cache line; decoding an instruction for execution by a second processing unit, the instruction comprising a source data operand specifying the first memory address and a destination operand specifying a memory location in the second processing unit; and responsive to executing the decoded instruction, copying data from the cache line of the memory of the first processing unit as identified by the first memory address, to the memory location of the second processing unit, wherein responsive to the copy, the cache line is to remain in the memory and the coherency state is to remain unchanged.
    Type: Application
    Filed: March 30, 2020
    Publication date: February 18, 2021
    Applicant: Intel Corporation
    Inventors: Anil Vasudevan, Venkata Krishnan, Andrew J. Herdrich, Ren Wang, Robert G. Blankenship, Vedaraman Geetha, Shrikant M. Shah, Marshall A. Millier, Raanan Sade, Binh Q. Pham, Olivier Serres, Chyi-Chang Miao, Christopher B. Wilkerson
  • Patent number: 10705962
    Abstract: Embodiments of this disclosure provide a mechanism to use a portion of an inactive processing element's private cache as extended last-level cache storage space to adaptively adjust the size of the shared cache. In one embodiment, a processing device is provided. The processing device comprises a cache controller to identify a cache line to evict from a shared cache. An inactive processing core is selected by the cache controller from a plurality of processing cores associated with the shared cache. Then, the private cache of the inactive processing core is notified of an identifier of the cache line associated with the shared cache. Thereupon, the cache line is evicted from the shared cache and installed in the private cache.
    Type: Grant
    Filed: December 21, 2017
    Date of Patent: July 7, 2020
    Assignee: Intel Corporation
    Inventors: Carl J. Beckmann, Robert G. Blankenship, Chyi-Chang Miao, Chitra Natarajan, Anthony-Trung D. Nguyen
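
The following C++ sketch illustrates the eviction path described in the abstract above: a cache controller evicts a victim line from the shared cache and, when an inactive core is available, installs the line in that core's private cache instead of dropping it. The data structures and the choice of the first inactive core found are illustrative assumptions.

```cpp
// Sketch only: evict a victim line from the shared cache and, if an inactive
// core exists, install it in that core's otherwise-idle private cache.
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>

struct Core {
    bool active = true;
    std::unordered_map<uint64_t, uint64_t> private_cache;  // addr -> payload
};

struct CacheController {
    std::unordered_map<uint64_t, uint64_t> shared_cache;   // addr -> payload
    std::vector<Core>* cores = nullptr;

    bool evict(uint64_t victim_addr) {
        auto it = shared_cache.find(victim_addr);
        if (it == shared_cache.end()) return false;
        for (Core& c : *cores) {
            if (!c.active) {                                // pick an inactive core
                c.private_cache[victim_addr] = it->second;  // install the line there
                break;
            }
        }
        shared_cache.erase(it);                             // line leaves the shared cache
        return true;
    }
};

int main() {
    std::vector<Core> cores(4);
    cores[2].active = false;                                // e.g. a power-gated core

    CacheController ctrl;
    ctrl.cores = &cores;
    ctrl.shared_cache[0x2000] = 0xdeadbeef;

    ctrl.evict(0x2000);
    std::cout << "line kept in core 2's private cache: "
              << cores[2].private_cache.count(0x2000) << "\n";
}
```
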
  • Patent number: 10606755
    Abstract: A method and system for performing data movement operations are described herein. One embodiment of a method includes: storing data for a first memory address in a cache line of a memory of a first processing unit, the cache line associated with a coherency state indicating that the memory has sole ownership of the cache line; decoding an instruction for execution by a second processing unit, the instruction comprising a source data operand specifying the first memory address and a destination operand specifying a memory location in the second processing unit; and responsive to executing the decoded instruction, copying data from the cache line of the memory of the first processing unit as identified by the first memory address, to the memory location of the second processing unit, wherein responsive to the copy, the cache line is to remain in the memory and the coherency state is to remain unchanged.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: March 31, 2020
    Assignee: Intel Corporation
    Inventors: Anil Vasudevan, Venkata Krishnan, Andrew J. Herdrich, Ren Wang, Robert G. Blankenship, Vedaraman Geetha, Shrikant M. Shah, Marshall A. Millier, Raanan Sade, Binh Q. Pham, Olivier Serres, Chyi-Chang Miao, Christopher B. Wilkerson
  • Patent number: 10606750
    Abstract: A computing system comprises one or more cores. Each core comprises a processor. In some implementations, each processor is coupled to a communication network among the cores. In some implementations, a switch in each core includes switching circuitry to forward data received over data paths from other cores to the processor and to switches of other cores, and to forward data received from the processor to switches of other cores.
    Type: Grant
    Filed: April 11, 2017
    Date of Patent: March 31, 2020
    Assignee: Mellanox Technologies Ltd.
    Inventors: Matthew Mattina, Chyi-Chang Miao
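
The per-core switch described in this family of patents can be pictured with the short C++ sketch below, which routes a packet hop by hop across a 2D mesh until the destination core's switch hands it to the local processor. The XY routing policy and all names are illustrative assumptions; the claims do not mandate a particular topology or routing scheme.

```cpp
// Sketch only: at each hop, a core's switch either forwards the packet toward
// its destination or delivers it to the local processor on arrival.
#include <iostream>

struct Packet { int dst_x, dst_y, payload; };

void route(Packet p, int x, int y) {
    while (x != p.dst_x || y != p.dst_y) {
        if (x < p.dst_x)      ++x;        // move along X first (assumed policy)
        else if (x > p.dst_x) --x;
        else if (y < p.dst_y) ++y;        // then along Y
        else                  --y;
        std::cout << "switch (" << x << "," << y << ") forwards packet\n";
    }
    std::cout << "processor (" << x << "," << y << ") receives payload "
              << p.payload << "\n";
}

int main() {
    route(Packet{2, 1, 99}, /*x=*/0, /*y=*/0);   // send from core (0,0) to core (2,1)
}
```
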
  • Patent number: 10579524
    Abstract: A computing system comprises one or more cores. Each core comprises a processor. In some implementations, each processor is coupled to a communication network among the cores. In some implementations, a switch in each core includes switching circuitry to forward data received over data paths from other cores to the processor and to switches of other cores, and to forward data received from the processor to switches of other cores.
    Type: Grant
    Filed: April 11, 2017
    Date of Patent: March 3, 2020
    Assignee: Mellanox Technologies Ltd.
    Inventors: Matthew Mattina, Chyi-Chang Miao
  • Patent number: 10387332
    Abstract: A computing system comprises one or more cores. Each core comprises a processor. In some implementations, each processor is coupled to a communication network among the cores. In some implementations, a switch in each core includes switching circuitry to forward data received over data paths from other cores to the processor and to switches of other cores, and to forward data received from the processor to switches of other cores.
    Type: Grant
    Filed: April 11, 2017
    Date of Patent: August 20, 2019
    Assignee: Mellanox Technologies Ltd.
    Inventors: Christopher D. Metcalf, Bruce Edwards, Anant Agarwal, Chyi-Chang Miao, Patrick Robert Griffin
  • Publication number: 20190196968
    Abstract: Embodiments of this disclosure provide a mechanism to use a portion of an inactive processing element's private cache as extended last-level cache storage space to adaptively adjust the size of the shared cache. In one embodiment, a processing device is provided. The processing device comprises a cache controller to identify a cache line to evict from a shared cache. An inactive processing core is selected by the cache controller from a plurality of processing cores associated with the shared cache. Then, the private cache of the inactive processing core is notified of an identifier of the cache line associated with the shared cache. Thereupon, the cache line is evicted from the shared cache and installed in the private cache.
    Type: Application
    Filed: December 21, 2017
    Publication date: June 27, 2019
    Inventors: Carl J. Beckmann, Robert G. Blankenship, Chyi-Chang Miao, Chitra Natarajan, Anthony-Trung D. Nguyen
  • Patent number: 10210092
    Abstract: Managing data in a computing system comprising one or more cores includes: providing a cache in each of one or more of the cores that includes multiple storage locations; storing data of a first type of multiple types of data in a selected storage location of a first cache of a first core that is selected according to status information associated with the first cache, and updating the status information; and storing data of a second type of the multiple types of data in a storage location within a subset of fewer than all of the storage locations of the first cache and managing the status information to ensure that subsequent data of the second type received by the first core for storage in the first cache is stored in the storage location within the subset.
    Type: Grant
    Filed: December 14, 2015
    Date of Patent: February 19, 2019
    Assignee: Mellanox Technologies, Ltd.
    Inventors: Chyi-Chang Miao, Christopher D. Metcalf, Ian Rudolf Bratt, Carl G. Ramey
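
The placement policy in the abstract above can be sketched as follows: data of the first type may be installed in any way of a set according to LRU status, while data of the second type is confined to a reserved subset of ways so that a stream of such data cannot displace the rest of the set. The way count and the LRU bookkeeping in this C++ sketch are illustrative assumptions.

```cpp
// Sketch only: an 8-way set where ordinary data uses full LRU but
// "second type" data is confined to one reserved way.
#include <array>
#include <cstdint>
#include <iostream>

constexpr int kWays = 8;
constexpr int kReservedWay = kWays - 1;   // subset for second-type data (assumed size 1)

struct Set {
    std::array<uint64_t, kWays> tag{};
    std::array<int, kWays> lru{};         // larger value = more recently used
    int clock = 0;

    int victim(int first_way, int last_way) {       // least recently used within a window
        int v = first_way;
        for (int w = first_way; w <= last_way; ++w)
            if (lru[w] < lru[v]) v = w;
        return v;
    }

    void install(uint64_t t, bool second_type) {
        int w = second_type ? victim(kReservedWay, kWays - 1)  // confined to the subset
                            : victim(0, kWays - 1);            // any way, by LRU status
        tag[w] = t;
        lru[w] = ++clock;                                      // update status information
        std::cout << "tag " << t << " -> way " << w << "\n";
    }
};

int main() {
    Set s;
    s.install(1, false);    // first-type data lands in the LRU way of the whole set
    s.install(2, false);
    s.install(10, true);    // second-type data lands in the reserved way
    s.install(11, true);    // ...and replaces the previous second-type line only
}
```
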
  • Publication number: 20190044889
    Abstract: Particular embodiments described herein provide for an electronic device that can be configured to group two or more small packets that are to be communicated to a common destination, where the two or more packets are grouped together in a queue dedicated to temporarily store small packets that are to be communicated to the common destination, coalesce the small packets into a coalesced packet, where the coalesced packet is a network packet, and communicate the coalesced packet to the common destination. In an example, a small packet is a packet that is smaller than the network packet. In another example, the size of a small packet is less than half the size of the network packet.
    Type: Application
    Filed: June 29, 2018
    Publication date: February 7, 2019
    Applicant: Intel Corporation
    Inventors: Olivier Sylvain Gerard Serres, Venkata S. Krishnan, Brian Leung, Chyi-Chang Miao, Lawrence Colm Stewart
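
A minimal C++ sketch of the coalescing scheme in the abstract above: small packets headed to the same destination are staged in a per-destination queue and merged into a single network-sized packet before transmission. The MTU-sized threshold, the flush policy, and all names are illustrative assumptions.

```cpp
// Sketch only: stage small packets per destination and send them merged
// into one network-sized packet.
#include <cstddef>
#include <iostream>
#include <map>
#include <string>
#include <vector>

constexpr std::size_t kNetworkPacketSize = 1500;                    // assumed MTU
constexpr std::size_t kSmallPacketLimit  = kNetworkPacketSize / 2;  // "small" per the abstract

struct Coalescer {
    std::map<int, std::vector<std::string>> queues;   // destination -> staged small packets

    void submit(int dest, const std::string& payload) {
        if (payload.size() >= kSmallPacketLimit) {     // not small: send as-is
            send(dest, payload);
            return;
        }
        std::size_t staged = 0;
        for (const auto& p : queues[dest]) staged += p.size();
        if (staged + payload.size() > kNetworkPacketSize)
            flush(dest);                               // next packet would not fit
        queues[dest].push_back(payload);
    }

    void flush(int dest) {
        if (queues[dest].empty()) return;
        std::string coalesced;
        for (const auto& p : queues[dest]) coalesced += p;   // merge payloads
        queues[dest].clear();
        send(dest, coalesced);
    }

    void send(int dest, const std::string& pkt) {
        std::cout << "send " << pkt.size() << " bytes to destination " << dest << "\n";
    }
};

int main() {
    Coalescer c;
    for (int i = 0; i < 30; ++i)
        c.submit(7, std::string(64, 'x'));   // thirty 64-byte packets to one destination
    c.flush(7);                              // push out whatever is still staged
}
```
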
  • Publication number: 20190004958
    Abstract: A method and system for performing data movement operations are described herein. One embodiment of a method includes: storing data for a first memory address in a cache line of a memory of a first processing unit, the cache line associated with a coherency state indicating that the memory has sole ownership of the cache line; decoding an instruction for execution by a second processing unit, the instruction comprising a source data operand specifying the first memory address and a destination operand specifying a memory location in the second processing unit; and responsive to executing the decoded instruction, copying data from the cache line of the memory of the first processing unit as identified by the first memory address, to the memory location of the second processing unit, wherein responsive to the copy, the cache line is to remain in the memory and the coherency state is to remain unchanged.
    Type: Application
    Filed: June 30, 2017
    Publication date: January 3, 2019
    Inventors: Anil Vasudevan, Venkata Krishnan, Andrew J. Herdrich, Ren Wang, Robert G. Blankenship, Vedaraman Geetha, Shrikant M. Shah, Marshall A. Millier, Raanan Sade, Binh Q. Pham, Olivier Serres, Chyi-Chang Miao, Christopher B. Wilkerson
  • Patent number: 10095543
    Abstract: A computing system comprises one or more cores. Each core comprises a processor. In some implementations, each processor is coupled to a communication network among the cores. In some implementations, a switch in each core includes switching circuitry to forward data received over data paths from other cores to the processor and to switches of other cores, and to forward data received from the processor to switches of other cores.
    Type: Grant
    Filed: May 23, 2014
    Date of Patent: October 9, 2018
    Assignee: Mellanox Technologies Ltd.
    Inventors: Patrick Robert Griffin, Mathew Hostetter, Anant Agarwal, Chyi-Chang Miao
  • Patent number: 9213652
    Abstract: Managing data in a computing system comprising one or more cores includes: providing a cache in each of one or more of the cores that includes multiple storage locations; storing data of a first type of multiple types of data in a selected storage location of a first cache of a first core that is selected according to status information associated with the first cache, and updating the status information; and storing data of a second type of the multiple types of data in a storage location within a subset of fewer than all of the storage locations of the first cache and managing the status information to ensure that subsequent data of the second type received by the first core for storage in the first cache is stored in the storage location within the subset.
    Type: Grant
    Filed: September 20, 2010
    Date of Patent: December 15, 2015
    Assignee: Tilera Corporation
    Inventors: Chyi-Chang Miao, Christopher D. Metcalf, Ian Rudolf Bratt, Carl G. Ramey
  • Patent number: 8738860
    Abstract: A computing system comprises one or more cores. Each core comprises a processor. In some implementations, each processor is coupled to a communication network among the cores. In some implementations, a switch in each core includes switching circuitry to forward data received over data paths from other cores to the processor and to switches of other cores, and to forward data received from the processor to switches of other cores.
    Type: Grant
    Filed: October 25, 2011
    Date of Patent: May 27, 2014
    Assignee: Tilera Corporation
    Inventors: Patrick Robert Griffin, Mathew Hostetter, Anant Agarwal, Chyi-Chang Miao, Christopher D. Metcalf, Bruce Edwards, Carl G. Ramey, Mark B. Rosenbluth, David M. Wentzlaff, Christopher J. Jackson, Ben Harrison, Kenneth M. Steele, John Amann, Shane Bell, Richard Conlin, Kevin Joyce, Christine Deignan, Liewei Bao, Matthew Mattina, Ian Rudolf Bratt, Richard Schooler
  • Patent number: 8539155
    Abstract: Managing cache memories in a computing system comprising multiple cores includes: assigning home cache locations for portions of data stored among caches in a group of caches of respective cores; accessing a first one of the portions of the cached data by sending an access request to a first home core of that first one of the portions of cached data; tracking a history of access for the first one of the portions of cached data; determining whether the tracked history of access for the first one of the portions of cached data exceeds or meets a predetermined condition, and re-assigning a home cache location of the first one of the portions of cached data from the first home core to a second, different home core when the predetermined condition is met or exceeded.
    Type: Grant
    Filed: September 20, 2010
    Date of Patent: September 17, 2013
    Assignee: Tilera Corporation
    Inventors: Chyi-Chang Miao, Christopher D. Metcalf, Ian Rudolf Bratt, Carl G. Ramey
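
The re-homing policy in the abstract above can be sketched as follows: each cached block has a home core, accesses are counted per requesting core, and once a remote core's count crosses a threshold the block's home is moved to that core. The threshold value and the counter layout in this C++ sketch are illustrative assumptions.

```cpp
// Sketch only: count remote accesses per block and move the block's home
// core once a requester crosses the threshold.
#include <cstdint>
#include <iostream>
#include <map>
#include <unordered_map>
#include <utility>

constexpr int kRehomeThreshold = 8;   // assumed trigger condition

struct HomeDirectory {
    std::unordered_map<uint64_t, int> home;               // block -> home core
    std::map<std::pair<uint64_t, int>, int> remote_hits;  // (block, core) -> access count

    int access(uint64_t block, int requesting_core) {
        int h = home.at(block);
        if (requesting_core != h &&
            ++remote_hits[{block, requesting_core}] >= kRehomeThreshold) {
            home[block] = requesting_core;                 // re-assign the home location
            remote_hits[{block, requesting_core}] = 0;
            std::cout << "block 0x" << std::hex << block << std::dec
                      << " re-homed to core " << requesting_core << "\n";
        }
        return home[block];   // requests go to the (possibly new) home core
    }
};

int main() {
    HomeDirectory dir;
    dir.home[0x40] = 0;                                    // initially homed on core 0
    for (int i = 0; i < 10; ++i) dir.access(0x40, 3);      // core 3 keeps touching the block
}
```
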
  • Patent number: 8521963
    Abstract: Managing data in a computing system comprising multiple cores includes: assigning a first set of data to caches within cores of a first subset of fewer than all of the cores in the computing system, and assigning a second set of data to caches within cores of a second subset of at least some remaining cores in the computing system not already assigned; and maintaining cache coherence among caches of respective cores in the first subset in response to data stored in at least one of the cores in the first subset being modified, and maintaining cache coherence among caches of respective cores in the second subset in response to data stored in at least one of the cores in the second subset being modified.
    Type: Grant
    Filed: September 20, 2010
    Date of Patent: August 27, 2013
    Assignee: Tilera Corporation
    Inventors: Chyi-Chang Miao, Christopher D. Metcalf, Ian Rudolf Bratt, Carl G. Ramey
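
The partitioned-coherence idea in the abstract above is sketched below in C++: data sets are assigned to disjoint subsets of cores, and a write invalidates sharers only within the owning subset, never cores in the other subset. The domain layout and the invalidation messages shown are illustrative assumptions.

```cpp
// Sketch only: coherence actions for a data set stay inside the subset of
// cores that set was assigned to.
#include <cstdint>
#include <iostream>
#include <set>
#include <unordered_map>

struct Domain {
    std::set<int> cores;                                  // cores in this subset
    std::unordered_map<uint64_t, std::set<int>> sharers;  // block -> cores caching it
};

// A write invalidates other sharers within this subset only.
void write_block(Domain& d, uint64_t block, int writer) {
    for (int c : d.sharers[block])
        if (c != writer)
            std::cout << "invalidate block 0x" << std::hex << block << std::dec
                      << " in core " << c << "\n";
    d.sharers[block] = {writer};
}

int main() {
    Domain first{{0, 1, 2, 3}, {}};    // first set of data lives on cores 0-3
    Domain second{{4, 5, 6, 7}, {}};   // second set of data lives on cores 4-7

    first.sharers[0x80]  = {0, 2};
    second.sharers[0x90] = {5, 7};

    write_block(first, 0x80, 0);       // invalidates core 2 only; cores 4-7 see no traffic
    std::cout << "other subset's sharers untouched: "
              << second.sharers[0x90].size() << "\n";
}
```
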
  • Publication number: 20060026371
    Abstract: In one embodiment of the present invention, a method includes generating a first order vector corresponding to a first entry in an operation order queue that corresponds to a first memory operation, and preventing a subsequent memory operation from completing until the first memory operation completes. In such a method, the operation order queue may be a load queue or a store queue, for example. Similarly, an order vector may be generated for an entry of a first operation order queue based on entries in a second operation order queue. Further, such an entry may include a field to identify an entry in the second operation order queue. A merge buffer may be coupled to the first operation order queue and produce a signal when all prior writes become visible.
    Type: Application
    Filed: July 30, 2004
    Publication date: February 2, 2006
    Inventors: George Chrysos, Ugonna Echeruo, Chyi-Chang Miao, James Vash
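
The order-vector mechanism in the abstract above can be sketched as follows: when a memory operation is allocated into the operation order queue it receives a bit vector marking the older entries it must wait for, and it may complete only once every marked entry has completed, at which point its own bit is cleared in younger entries. The queue size and names in this C++ sketch are illustrative assumptions; the merge-buffer signal is not modeled.

```cpp
// Sketch only: each queue entry carries an order vector of the older entries
// it must wait for; completion clears its bit in the other entries.
#include <array>
#include <bitset>
#include <cstddef>
#include <iostream>

constexpr std::size_t kQueueSize = 8;

struct Entry {
    bool valid = false;
    bool completed = false;
    std::bitset<kQueueSize> order;   // bit i set: must wait for entry i
};

struct OrderQueue {                  // e.g. a load queue or a store queue
    std::array<Entry, kQueueSize> q{};

    // Allocate a new operation; its order vector marks all older,
    // still outstanding entries.
    int allocate() {
        for (std::size_t i = 0; i < kQueueSize; ++i) {
            if (q[i].valid) continue;
            q[i] = Entry{};
            q[i].valid = true;
            for (std::size_t j = 0; j < kQueueSize; ++j)
                if (j != i && q[j].valid && !q[j].completed) q[i].order.set(j);
            return static_cast<int>(i);
        }
        return -1;                   // queue full
    }

    // An operation may complete only when its order vector is empty;
    // completing clears its bit in every other entry's vector.
    bool try_complete(int idx) {
        Entry& e = q[idx];
        if (!e.valid || e.order.any()) return false;
        e.completed = true;
        for (Entry& other : q) other.order.reset(idx);
        return true;
    }
};

int main() {
    OrderQueue sq;
    int older = sq.allocate();
    int younger = sq.allocate();
    std::cout << "younger completes early? " << sq.try_complete(younger) << "\n";  // 0
    sq.try_complete(older);
    std::cout << "younger completes now?   " << sq.try_complete(younger) << "\n";  // 1
}
```
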