Patents by Inventor Ruihua Peng

Ruihua Peng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11971834
    Abstract: A computing device is provided, including a plurality of memory devices, a plurality of direct memory access (DMA) controllers, and an on-chip interconnect. The on-chip interconnect may be configured to implement control logic to convey a read request from a primary DMA controller of the plurality of DMA controllers to a source memory device of the plurality of memory devices. The on-chip interconnect may be further configured to implement the control logic to convey a read response from the source memory device to the primary DMA controller and one or more secondary DMA controllers of the plurality of DMA controllers.
    Type: Grant
    Filed: March 13, 2023
    Date of Patent: April 30, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ruihua Peng, Monica Man Kay Tang, Xiaoling Xu
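    Illustrative sketch: the fan-out behavior described in the abstract above can be pictured with a minimal software model. This is an assumed simplification for illustration only, not the patented hardware; the class and method names below are invented, not taken from the patent. One read request is served by the source memory device, and the single response is delivered to the primary DMA controller and to every secondary DMA controller.
    ```python
    # Assumed behavioral model (illustration only): an interconnect that forwards
    # one read response to a primary DMA controller and to any registered
    # secondary DMA controllers, so a single memory read can feed several
    # destinations.

    class DmaController:
        def __init__(self, name):
            self.name = name
            self.received = []            # read responses delivered to this controller

        def deliver(self, data):
            self.received.append(data)

    class Interconnect:
        def __init__(self, memories):
            self.memories = memories      # name -> list of words (the "memory devices")

        def read(self, primary, secondaries, source, addr):
            # Convey the read request from the primary DMA controller to the
            # source memory device, then fan the single response out to the
            # primary and every secondary controller.
            data = self.memories[source][addr]
            for ctrl in [primary, *secondaries]:
                ctrl.deliver(data)
            return data

    if __name__ == "__main__":
        fabric = Interconnect({"sram0": [10, 20, 30]})
        primary = DmaController("dma0")
        secondaries = [DmaController("dma1"), DmaController("dma2")]
        fabric.read(primary, secondaries, "sram0", addr=1)
        print(primary.received, [c.received for c in secondaries])  # [20] [[20], [20]]
    ```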
  • Publication number: 20240045719
    Abstract: A computing device, including a processor configured to perform data transfer scheduling for a hardware accelerator including a plurality of processing areas. Performing data transfer scheduling may include receiving a plurality of data transfer instructions that encode requests to transfer data to respective processing areas. Performing data transfer scheduling may further include identifying a plurality of transfer path conflicts between the data transfer instructions. Performing data transfer scheduling may further include sorting the data transfer instructions into a plurality of transfer instruction subsets. Within each transfer instruction subset, none of the data transfer instructions have transfer path conflicts. For each transfer instruction subset, performing data transfer scheduling may further include conveying the data transfer instructions included in that transfer instruction subset to the hardware accelerator.
    Type: Application
    Filed: October 19, 2023
    Publication date: February 8, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Monica Man Kay Tang, Ruihua Peng, Zhuo Ruan
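    Illustrative sketch: the scheduling flow described in the abstract above (identify transfer path conflicts, group instructions into conflict-free subsets, issue each subset as a batch) resembles greedy conflict grouping. The sketch below is an assumed simplification, not the claimed implementation; the conflict test and field names are invented for illustration.
    ```python
    # Assumed greedy grouping (illustration only): instructions that share a
    # transfer path conflict are placed in different subsets; each conflict-free
    # subset is then issued to the accelerator as one batch.

    def conflicts(a, b):
        # Hypothetical conflict test: two transfers conflict if their paths
        # share any link.
        return bool(set(a["path"]) & set(b["path"]))

    def schedule(instructions):
        subsets = []
        for instr in instructions:
            for subset in subsets:
                if not any(conflicts(instr, other) for other in subset):
                    subset.append(instr)
                    break
            else:
                subsets.append([instr])    # start a new conflict-free subset
        return subsets

    if __name__ == "__main__":
        instrs = [
            {"id": 0, "path": ["link_a", "link_b"]},
            {"id": 1, "path": ["link_b", "link_c"]},   # conflicts with 0 on link_b
            {"id": 2, "path": ["link_d"]},
        ]
        for batch in schedule(instrs):
            print("issue batch:", [i["id"] for i in batch])
        # issue batch: [0, 2]
        # issue batch: [1]
    ```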
  • Patent number: 11816502
    Abstract: A computing device, including a processor configured to perform data transfer scheduling for a hardware accelerator including a plurality of processing areas. Performing data transfer scheduling may include receiving a plurality of data transfer instructions that encode requests to transfer data to respective processing areas. Performing data transfer scheduling may further include identifying a plurality of transfer path conflicts between the data transfer instructions. Performing data transfer scheduling may further include sorting the data transfer instructions into a plurality of transfer instruction subsets. Within each transfer instruction subset, none of the data transfer instructions have transfer path conflicts. For each transfer instruction subset, performing data transfer scheduling may further include conveying the data transfer instructions included in that transfer instruction subset to the hardware accelerator.
    Type: Grant
    Filed: February 23, 2023
    Date of Patent: November 14, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Monica Man Kay Tang, Ruihua Peng, Zhuo Ruan
  • Publication number: 20230214342
    Abstract: A computing device is provided, including a plurality of memory devices, a plurality of direct memory access (DMA) controllers, and an on-chip interconnect. The on-chip interconnect may be configured to implement control logic to convey a read request from a primary DMA controller of the plurality of DMA controllers to a source memory device of the plurality of memory devices. The on-chip interconnect may be further configured to implement the control logic to convey a read response from the source memory device to the primary DMA controller and one or more secondary DMA controllers of the plurality of DMA controllers.
    Type: Application
    Filed: March 13, 2023
    Publication date: July 6, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Ruihua Peng, Monica Man Kay Tang, Xiaoling Xu
  • Publication number: 20230205581
    Abstract: A computing device, including a processor configured to perform data transfer scheduling for a hardware accelerator including a plurality of processing areas. Performing data transfer scheduling may include receiving a plurality of data transfer instructions that encode requests to transfer data to respective processing areas. Performing data transfer scheduling may further include identifying a plurality of transfer path conflicts between the data transfer instructions. Performing data transfer scheduling may further include sorting the data transfer instructions into a plurality of transfer instruction subsets. Within each transfer instruction subset, none of the data transfer instructions have transfer path conflicts. For each transfer instruction subset, performing data transfer scheduling may further include conveying the data transfer instructions included in that transfer instruction subset to the hardware accelerator.
    Type: Application
    Filed: February 23, 2023
    Publication date: June 29, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Monica Man Kay Tang, Ruihua Peng, Zhuo Ruan
  • Patent number: 11675659
    Abstract: In one form, a memory controller includes a command queue, an arbiter, and a replay queue. The command queue receives and stores memory access requests. The arbiter is coupled to the command queue for providing a sequence of memory commands to a memory channel. The replay queue stores the sequence of memory commands to the memory channel, and continues to store memory access commands that have not yet received responses from the memory channel. When a response indicates a completion of a corresponding memory command without any error, the replay queue removes the corresponding memory command without taking further action. When a response indicates a completion of the corresponding memory command with an error, the replay queue replays at least the corresponding memory command. In another form, a data processing system includes the memory controller, a memory accessing agent, and a memory system to which the memory controller is coupled.
    Type: Grant
    Filed: December 9, 2016
    Date of Patent: June 13, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: James R. Magro, Ruihua Peng, Anthony Asaro, Kedarnath Balakrishnan, Scott P. Murphy, YuBin Yao
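    Illustrative sketch: the replay-queue behavior described in the abstract above can be modeled as a small software state machine. This is an assumed simplification for illustration, not the RTL or the claimed design; the names are invented here, and since the abstract says "at least the corresponding memory command" is replayed, a real controller may also replay subsequent commands.
    ```python
    # Assumed software model (illustration only): issued commands stay in the
    # replay queue until a response arrives; clean completions are retired,
    # errored completions cause the command to be re-issued.

    from collections import deque

    class ReplayQueue:
        def __init__(self, issue_fn):
            self.pending = deque()     # commands issued but not yet acknowledged
            self.issue = issue_fn      # callback that sends a command to the memory channel

        def send(self, cmd):
            self.pending.append(cmd)
            self.issue(cmd)

        def on_response(self, cmd, error):
            self.pending.remove(cmd)
            if error:
                self.send(cmd)         # replay at least the failing command
            # on a clean completion, nothing further is done

    if __name__ == "__main__":
        log = []
        rq = ReplayQueue(issue_fn=log.append)
        rq.send("RD 0x100")
        rq.send("WR 0x200")
        rq.on_response("RD 0x100", error=False)   # retired silently
        rq.on_response("WR 0x200", error=True)    # re-issued
        print(log)   # ['RD 0x100', 'WR 0x200', 'WR 0x200']
    ```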
  • Patent number: 11604748
    Abstract: A computing device is provided, including a plurality of memory devices, a plurality of direct memory access (DMA) controllers, and an on-chip interconnect. The on-chip interconnect may be configured to implement control logic to convey a read request from a primary DMA controller of the plurality of DMA controllers to a source memory device of the plurality of memory devices. The on-chip interconnect may be further configured to implement the control logic to convey a read response from the source memory device to the primary DMA controller and one or more secondary DMA controllers of the plurality of DMA controllers.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: March 14, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ruihua Peng, Monica Man Kay Tang, Xiaoling Xu
  • Patent number: 11593164
    Abstract: A computing device, including a processor configured to perform data transfer scheduling for a hardware accelerator including a plurality of processing areas. Performing data transfer scheduling may include receiving a plurality of data transfer instructions that encode requests to transfer data to respective processing areas. Performing data transfer scheduling may further include identifying a plurality of transfer path conflicts between the data transfer instructions. Performing data transfer scheduling may further include sorting the data transfer instructions into a plurality of transfer instruction subsets. Within each transfer instruction subset, none of the data transfer instructions have transfer path conflicts. For each transfer instruction subset, performing data transfer scheduling may further include conveying the data transfer instructions included in that transfer instruction subset to the hardware accelerator.
    Type: Grant
    Filed: March 3, 2021
    Date of Patent: February 28, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Monica Man Kay Tang, Ruihua Peng, Zhuo Ruan
  • Publication number: 20230004400
    Abstract: A system for providing system level sleep state power savings includes a plurality of memory channels and corresponding plurality of memories coupled to respective memory channels. The system includes one or more processors operative to receive information indicating that a system level sleep state is to be entered and in response to receiving the system level sleep indication, moves data stored in at least a first of the plurality of memories to at least a second of the plurality of memories. In some implementations, in response to moving the data to the second memory, the processor causes power management logic to shut off power to: at least the first memory, to a corresponding first physical layer device operatively coupled to the first memory and to a first memory controller operatively coupled to the first memory and place the second memory in a self-refresh mode of operation.
    Type: Application
    Filed: September 13, 2022
    Publication date: January 5, 2023
    Inventors: Jyoti Raheja, Hideki Kanayama, Guhan Krishnan, Ruihua Peng
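    Illustrative sketch: the sequence described in the abstract above (consolidate data into one memory, then remove power from the vacated memory, its PHY, and its memory controller, and put the retained memory into self-refresh) is shown below as an assumed control-flow model. The structure, names, and ordering are illustrative only, not the product firmware.
    ```python
    # Assumed control-flow model (illustration only) of the system-level sleep
    # entry: migrate data off one channel, power down that channel's DRAM, PHY,
    # and memory controller, and leave the retained DRAM in self-refresh.

    class PowerManager:
        def __init__(self):
            self.log = []
        def shut_off(self, block):
            self.log.append(f"off:{block}")
        def self_refresh(self, block):
            self.log.append(f"self-refresh:{block}")

    def enter_system_sleep(channels, power, move_data):
        src, dst = channels                       # consolidate the first channel into the second
        move_data(src, dst)                       # migrate contents before removing power
        for block in ("memory", "phy", "controller"):
            power.shut_off(src[block])            # DRAM, PHY, and controller of the vacated channel
        power.self_refresh(dst["memory"])         # retained DRAM keeps data at minimal power

    if __name__ == "__main__":
        ch0 = {"memory": "dram0", "phy": "phy0", "controller": "mc0"}
        ch1 = {"memory": "dram1", "phy": "phy1", "controller": "mc1"}
        pm = PowerManager()
        enter_system_sleep((ch0, ch1), pm, move_data=lambda s, d: None)
        print(pm.log)  # ['off:dram0', 'off:phy0', 'off:mc0', 'self-refresh:dram1']
    ```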
  • Publication number: 20220393975
    Abstract: The present disclosure relates to systems, methods, and computer-readable media for transferring data from a first multi-dimensional memory block to a second multi-dimensional memory block. For example, systems described herein facilitate transferring data between memory blocks having different shapes from one another. The systems described herein facilitate transferring data between different shaped memory blocks by identifying shape properties and other characteristics of the data and generating a plurality of network packets having control data based on the identified shape properties and other characteristics. This data included within the network packets enables memory controllers to determine memory addresses on a destination memory block to write data from the network packets. Features described herein facilitate efficient transfer of data without generating a linearized copy that relies on constant availability of significant memory resources.
    Type: Application
    Filed: June 7, 2021
    Publication date: December 8, 2022
    Inventors: Deepak Goel, Ruihua Peng, Xiaoling Xu
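    Illustrative sketch: the packet-based approach described in the abstract above can be pictured with a toy model in which each packet carries control data (a destination write address and a length) computed from the shape of the destination block, so the receiver never needs a linearized copy of the whole source block. The field names and addressing scheme below are assumptions for illustration, not the disclosed packet format.
    ```python
    # Assumed toy model (illustration only): pack a 2-D source block row by row;
    # each packet's control data lets the destination side compute where to
    # write into a differently shaped destination block.

    def make_packets(src, dst_shape, dst_origin):
        # src: 2-D list; dst_shape: (rows, cols) of destination; dst_origin: (row, col)
        packets = []
        for r, row in enumerate(src):
            dest_row, dest_col = dst_origin[0] + r, dst_origin[1]
            packets.append({
                "ctrl": {"dst_addr": dest_row * dst_shape[1] + dest_col,  # flat write address
                         "length": len(row)},
                "payload": row,
            })
        return packets

    def write_packets(packets, dst_flat):
        for pkt in packets:
            a = pkt["ctrl"]["dst_addr"]
            dst_flat[a:a + pkt["ctrl"]["length"]] = pkt["payload"]

    if __name__ == "__main__":
        src = [[1, 2, 3], [4, 5, 6]]              # 2x3 source block
        dst_shape = (4, 4)                        # 4x4 destination block
        dst = [0] * (dst_shape[0] * dst_shape[1])
        write_packets(make_packets(src, dst_shape, dst_origin=(1, 1)), dst)
        print([dst[i * 4:(i + 1) * 4] for i in range(4)])
        # [[0, 0, 0, 0], [0, 1, 2, 3], [0, 4, 5, 6], [0, 0, 0, 0]]
    ```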
  • Patent number: 11520722
    Abstract: Embodiments of the present disclosure include techniques for transferring non-power of two (2) bytes of data between modules of an integrated circuit over an on-chip communication fabric. In one embodiment, the present disclosure includes an on-chip communication fabric, a first module comprising an interface coupled to the fabric having a first data width, and a second module comprising an interface coupled to the fabric having a second data width smaller than the first data width. The non-power of two (2) bytes of data are sent between the first and second modules through the fabric, and the fabric maps the non-power of two (2) bytes of data between the first and second data widths.
    Type: Grant
    Filed: April 12, 2021
    Date of Patent: December 6, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Monica Man Kay Tang, Ruihua Peng, Matthew Willis Daniel
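    Illustrative sketch: the width mapping described in the abstract above is shown below as an assumed software analogy. A payload whose length is not a power of two is accepted on a wide interface and re-segmented into beats sized for a narrower interface, with a short final beat. This is not the fabric's actual protocol; the names and widths are invented for illustration.
    ```python
    # Assumed software analogy (illustration only): re-segment a non-power-of-two
    # payload from a wide interface into beats matching a narrower interface.

    def to_beats(payload, width):
        # Split a byte payload into fabric beats of at most `width` bytes.
        return [payload[i:i + width] for i in range(0, len(payload), width)]

    def fabric_transfer(payload, src_width, dst_width):
        assert len(payload) <= src_width       # one beat from the wide source interface
        return to_beats(payload, dst_width)    # re-mapped onto the narrower destination width

    if __name__ == "__main__":
        data = bytes(range(6))                           # 6 bytes: non-power-of-two
        beats = fabric_transfer(data, src_width=8, dst_width=4)
        print([list(b) for b in beats])                  # [[0, 1, 2, 3], [4, 5]]
    ```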
  • Publication number: 20220327078
    Abstract: Embodiments of the present disclosure include techniques for transferring non-power of two (2) bytes of data between modules of an integrated circuit over an on-chip communication fabric. In one embodiment, the present disclosure includes an on-chip communication fabric, a first module comprising an interface coupled to the fabric having a first data width, and a second module comprising an interface coupled to the fabric having a second data width smaller than the first data width. The non-power of two (2) bytes of data are sent between the first and second modules through the fabric, and the fabric maps the non-power of two (2) bytes of data between the first and second data widths.
    Type: Application
    Filed: April 12, 2021
    Publication date: October 13, 2022
    Inventors: Monica Man Kay Tang, Ruihua Peng, Matthew Willis Daniel
  • Patent number: 11449346
    Abstract: A system for providing system level sleep state power savings includes a plurality of memory channels and corresponding plurality of memories coupled to respective memory channels. The system includes one or more processors operative to receive information indicating that a system level sleep state is to be entered and in response to receiving the system level sleep indication, moves data stored in at least a first of the plurality of memories to at least a second of the plurality of memories. In some implementations, in response to moving the data to the second memory, the processor causes power management logic to shut off power to: at least the first memory, to a corresponding first physical layer device operatively coupled to the first memory and to a first memory controller operatively coupled to the first memory and place the second memory in a self-refresh mode of operation.
    Type: Grant
    Filed: December 18, 2019
    Date of Patent: September 20, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Jyoti Raheja, Hideki Kanayama, Guhan Krishnan, Ruihua Peng
  • Publication number: 20220283850
    Abstract: A computing device, including a processor configured to perform data transfer scheduling for a hardware accelerator including a plurality of processing areas. Performing data transfer scheduling may include receiving a plurality of data transfer instructions that encode requests to transfer data to respective processing areas. Performing data transfer scheduling may further include identifying a plurality of transfer path conflicts between the data transfer instructions. Performing data transfer scheduling may further include sorting the data transfer instructions into a plurality of transfer instruction subsets. Within each transfer instruction subset, none of the data transfer instructions have transfer path conflicts. For each transfer instruction subset, performing data transfer scheduling may further include conveying the data transfer instructions included in that transfer instruction subset to the hardware accelerator.
    Type: Application
    Filed: March 3, 2021
    Publication date: September 8, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Monica Man Kay Tang, Ruihua Peng, Zhuo Ruan
  • Publication number: 20220138130
    Abstract: A computing device is provided, including a plurality of memory devices, a plurality of direct memory access (DMA) controllers, and an on-chip interconnect. The on-chip interconnect may be configured to implement control logic to convey a read request from a primary DMA controller of the plurality of DMA controllers to a source memory device of the plurality of memory devices. The on-chip interconnect may be further configured to implement the control logic to convey a read response from the source memory device to the primary DMA controller and one or more secondary DMA controllers of the plurality of DMA controllers.
    Type: Application
    Filed: October 30, 2020
    Publication date: May 5, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Ruihua Peng, Monica Man Kay Tang, Xiaoling Xu
  • Publication number: 20210191737
    Abstract: A system for providing system level sleep state power savings includes a plurality of memory channels and corresponding plurality of memories coupled to respective memory channels. The system includes one or more processors operative to receive information indicating that a system level sleep state is to be entered and in response to receiving the system level sleep indication, moves data stored in at least a first of the plurality of memories to at least a second of the plurality of memories. In some implementations, in response to moving the data to the second memory, the processor causes power management logic to shut off power to: at least the first memory, to a corresponding first physical layer device operatively coupled to the first memory and to a first memory controller operatively coupled to the first memory and place the second memory in a self-refresh mode of operation.
    Type: Application
    Filed: December 18, 2019
    Publication date: June 24, 2021
    Inventors: Jyoti Raheja, Hideki Kanayama, Guhan Krishnan, Ruihua Peng
  • Publication number: 20180018221
    Abstract: In one form, a memory controller includes a command queue, an arbiter, and a replay queue. The command queue receives and stores memory access requests. The arbiter is coupled to the command queue for providing a sequence of memory commands to a memory channel. The replay queue stores the sequence of memory commands to the memory channel, and continues to store memory access commands that have not yet received responses from the memory channel. When a response indicates a completion of a corresponding memory command without any error, the replay queue removes the corresponding memory command without taking further action. When a response indicates a completion of the corresponding memory command with an error, the replay queue replays at least the corresponding memory command. In another form, a data processing system includes the memory controller, a memory accessing agent, and a memory system to which the memory controller is coupled.
    Type: Application
    Filed: December 9, 2016
    Publication date: January 18, 2018
    Applicant: Advanced Micro Devices, Inc.
    Inventors: James R. Magro, Ruihua Peng, Anthony Asaro, Kedarnath Balakrishnan, Scott P. Murphy, YuBin Yao