Patents by Inventor Khoa To

Khoa To has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12015563
    Abstract: Embodiments of network processing resource management in computing devices are disclosed therein. In one embodiment, a method includes receiving a request from a network interface controller to perform network processing operations at a first core of a main processor for packets assigned by the network interface controller to a queue of a virtual port of the network interface controller. The method also includes determining whether the first core has a utilization level higher than a threshold when performing the network processing operations to effect processing and transmission of the packets. If the first core has a utilization level higher than the threshold, the method includes issuing a command to the network interface to modify affinitization of the queue from the first core to a second core having a utilization level lower than the threshold.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: June 18, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dmitry Malloy, Alireza Dabagh, Gabriel Silva, Khoa To, Omar Cardona, Donald Stanwyck
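
For illustration only: the abstract above describes moving a virtual-port queue's affinity from an overloaded core to a less-loaded one. The short Python sketch below mimics that decision rule under assumed names and a made-up threshold value (Core, VirtualPortQueue, UTILIZATION_THRESHOLD are hypothetical); it is not the patented implementation, in which the change is effected by a command to the network interface controller.

```python
# Hypothetical sketch of the queue re-affinitization policy described in the
# abstract of patent 12015563; all names, types, and the threshold are assumptions.
from dataclasses import dataclass

UTILIZATION_THRESHOLD = 0.75  # assumed value; the patent leaves the threshold open


@dataclass
class Core:
    core_id: int
    utilization: float  # fraction of capacity currently in use, 0.0..1.0


@dataclass
class VirtualPortQueue:
    queue_id: int
    affinitized_core: Core


def rebalance_queue(queue: VirtualPortQueue, cores: list[Core]) -> None:
    """If the queue's current core is above the threshold, re-affinitize the
    queue to the least-utilized core that is below the threshold."""
    current = queue.affinitized_core
    if current.utilization <= UTILIZATION_THRESHOLD:
        return  # current core still has headroom; keep the affinity unchanged
    candidates = [c for c in cores if c.utilization < UTILIZATION_THRESHOLD]
    if not candidates:
        return  # no better core available; leave the affinity as-is
    target = min(candidates, key=lambda c: c.utilization)
    # In the patent this step is a command issued to the network interface
    # controller; here we only update an in-memory mapping to show the intent.
    queue.affinitized_core = target


if __name__ == "__main__":
    cores = [Core(0, 0.92), Core(1, 0.35), Core(2, 0.60)]
    q = VirtualPortQueue(queue_id=7, affinitized_core=cores[0])
    rebalance_queue(q, cores)
    print(f"queue {q.queue_id} now affinitized to core {q.affinitized_core.core_id}")
```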
  • Publication number: 20230056312
Abstract: Provided are methods for producing bimodal polyolefins comprising the steps of contacting α-olefin monomers with a catalyst in slurry polymerization conditions in the presence of zero to minimum hydrogen to produce a high molecular weight polyolefin and contacting additional α-olefin monomers in gas phase polymerization conditions and the high molecular weight polyolefin and the catalyst to produce bimodal polyolefin having high stiffness and broad molecular weight distribution. An additional step of polymerizing the bimodal polyolefin with a comonomer in a second gas phase can provide a bimodal impact copolymer having high stiffness and broad molecular weight distribution. Among the advantages of the present methods, bimodal polyolefins can be produced in a continuous process between a slurry polymerization reactor and a gas phase polymerization reactor without a venting step in between and with minimal hydrogen in the slurry polymerization reactor.
    Type: Application
    Filed: January 7, 2021
    Publication date: February 23, 2023
    Applicant: ExxonMobil Chemical Patents Inc.
    Inventors: Xiaodan ZHANG, Christopher G. BAUCH, Todd S. EDWARDS, Mark S. CHAHL, Blu E. ENGLEHORN, Khoa TO, Steven L. LAMBERT
  • Patent number: 11283718
    Abstract: Embodiments of hybrid network processing load distribution in a computing device are disclosed therein. In one embodiment, a method includes receiving, at a main processor, an indication from the network interface controller to perform network processing operations for first and second packets in a queue of a virtual port of the network interface controller, and in response to receiving the request, assigning multiple cores for performing the network processing operations for the first and second packets, respectively. The method also includes performing the network processing operations at the multiple cores to effect processing and transmission of the first and second packets to first and second applications, respectively, both the first and second applications executing in a virtual machine hosted on the computing device.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: March 22, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dmitry Malloy, Alireza Dabagh, Gabriel Silva, Khoa To, Omar Cardona, Donald Stanwyck
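
For illustration only: the abstract above describes distributing packets that arrive on a single virtual-port queue across multiple cores for processing. The Python sketch below shows one plausible round-robin spread under assumed names (process_packet, distribute, and the thread-pool stand-in are hypothetical); it is a simplification, not the patented hybrid load-distribution mechanism.

```python
# Hypothetical sketch of the hybrid load-distribution idea in patent 11283718:
# packets from one virtual-port queue are spread across several cores.
# The round-robin rule and all names are assumptions made for illustration.
from concurrent.futures import ThreadPoolExecutor


def process_packet(packet: bytes, core_hint: int) -> str:
    # Stand-in for per-packet network processing; a real system would pin the
    # work to the hinted core rather than merely record it.
    return f"packet of {len(packet)} bytes handled with core hint {core_hint}"


def distribute(packets: list[bytes], core_ids: list[int]) -> list[str]:
    """Assign packets from the same queue to several cores, round-robin,
    and process them concurrently."""
    with ThreadPoolExecutor(max_workers=len(core_ids)) as pool:
        futures = [
            pool.submit(process_packet, pkt, core_ids[i % len(core_ids)])
            for i, pkt in enumerate(packets)
        ]
        return [f.result() for f in futures]


if __name__ == "__main__":
    for line in distribute([b"first-packet", b"second-packet"], core_ids=[1, 2]):
        print(line)
```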
  • Publication number: 20210105221
    Abstract: Embodiments of network processing resource management in computing devices are disclosed therein. In one embodiment, a method includes receiving a request from a network interface controller to perform network processing operations at a first core of a main processor for packets assigned by the network interface controller to a queue of a virtual port of the network interface controller. The method also includes determining whether the first core has a utilization level higher than a threshold when performing the network processing operations to effect processing and transmission of the packets. If the first core has a utilization level higher than the threshold, the method includes issuing a command to the network interface to modify affinitization of the queue from the first core to a second core having a utilization level lower than the threshold.
    Type: Application
    Filed: September 25, 2020
    Publication date: April 8, 2021
    Inventors: Dmitry Malloy, Alireza Dabagh, Gabriel Silva, Khoa To, Omar Cardona, Donald Stanwyck
  • Patent number: 10826841
    Abstract: Embodiments of network processing resource management in computing devices are disclosed therein. An example method includes receiving a request from a network interface controller to perform network processing operations at a first core of a main processor for packets assigned by the network interface controller to a queue of a virtual port of the network interface controller. The method also includes determining whether the first core has a utilization level higher than a threshold when performing the network processing operations to effect processing and transmission of the packets. If the first core has a utilization level higher than the threshold, the method includes issuing a command to the network interface to modify affinitization of the queue from the first core to a second core having a utilization level lower than the threshold.
    Type: Grant
    Filed: March 15, 2017
    Date of Patent: November 3, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dmitry Malloy, Alireza Dabagh, Gabriel Silva, Khoa To, Omar Cardona, Donald Stanwyck
  • Patent number: 10715424
    Abstract: Techniques of network traffic management in a computing device are disclosed. One example method includes receiving, at a main processor, a request from a network interface controller to perform network processing operations for packets assigned by the network interface controller to a queue of a virtual port of the network interface controller. The method also includes, in response to receiving the request, causing one of multiple cores of the main processor with which the queue of the virtual port is affinitized to perform the network processing operations to effect processing and transmission of the packets to an application executing in a virtual machine hosted on the computing device.
    Type: Grant
    Filed: March 15, 2017
    Date of Patent: July 14, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dmitry Malloy, Alireza Dabagh, Gabriel Silva, Khoa To, Omar Cardona, Donald Stanwyck
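
For illustration only: the abstract above covers the baseline case in which the core already affinitized with a virtual-port queue performs the network processing for that queue's packets. The sketch below shows that lookup with an assumed queue-to-core table (QUEUE_TO_CORE and handle_nic_request are hypothetical names, not taken from the patent).

```python
# Hypothetical sketch of the queue-to-core affinity lookup described in
# patent 10715424: when the NIC signals work for a virtual-port queue, the
# core affinitized with that queue does the processing. The mapping table
# and function names are illustrative assumptions.

QUEUE_TO_CORE: dict[int, int] = {0: 2, 1: 5}  # assumed queue-id -> core-id table


def handle_nic_request(queue_id: int, packets: list[bytes]) -> None:
    core_id = QUEUE_TO_CORE[queue_id]  # each queue is affinitized to one core
    # Stand-in for scheduling the processing on that core (e.g., a deferred
    # procedure call pinned to core_id in a real network stack).
    for pkt in packets:
        print(f"core {core_id}: processing {len(pkt)}-byte packet from queue {queue_id}")


if __name__ == "__main__":
    handle_nic_request(queue_id=1, packets=[b"payload-a", b"payload-b"])
```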
  • Publication number: 20200127922
    Abstract: Embodiments of hybrid network processing load distribution in a computing device are disclosed therein. In one embodiment, a method includes receiving, at a main processor, an indication from the network interface controller to perform network processing operations for first and second packets in a queue of a virtual port of the network interface controller, and in response to receiving the request, assigning multiple cores for performing the network processing operations for the first and second packets, respectively. The method also includes performing the network processing operations at the multiple cores to effect processing and transmission of the first and second packets to first and second applications, respectively, both the first and second applications executing in a virtual machine hosted on the computing device.
    Type: Application
    Filed: December 17, 2019
    Publication date: April 23, 2020
    Inventors: Dmitry Malloy, Alireza Dabagh, Gabriel Silva, Khoa To, Omar Cardona, Donald Stanwyck
  • Patent number: 10630601
    Abstract: Micro-schedulers control bandwidth allocation for clients, each client subscribing to a respective predefined portion of bandwidth of an outgoing communication link. A macro-scheduler controls the micro-schedulers, by allocating the respective subscribed portion of bandwidth associated with each respective client that is active, by a predefined first deadline, with residual bandwidth that is unused by the respective clients being shared proportionately among respective active clients by a predefined second deadline, while minimizing coordination among micro-schedulers by the macro-scheduler periodically adjusting respective bandwidth allocations to each micro-scheduler.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: April 21, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Khoa To, Jitendra Padhye, George Varghese, Daniel Firestone
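
For illustration only: the abstract above describes a macro-scheduler that first grants each active client its subscribed share of an outgoing link and then shares the residual bandwidth among active clients. The Python sketch below implements one simple reading of that two-step allocation, with the residual split in proportion to subscriptions; the proportional rule, the deadlines, and all names are assumptions, not the patented scheduler.

```python
# Hypothetical sketch of the two-level allocation described in patent 10630601:
# step 1 gives each active client its subscribed share, step 2 shares leftover
# bandwidth among active clients in proportion to their subscriptions.


def allocate(link_capacity: float, subscriptions: dict[str, float],
             active: set[str]) -> dict[str, float]:
    # Step 1: every active client receives its subscribed portion.
    grants = {client: subscriptions[client] for client in active}
    # Step 2: bandwidth left unused by inactive clients is shared
    # proportionately among the active clients.
    residual = link_capacity - sum(grants.values())
    total_active_subs = sum(grants.values())
    if residual > 0 and total_active_subs > 0:
        for client in grants:
            grants[client] += residual * subscriptions[client] / total_active_subs
    return grants


if __name__ == "__main__":
    subs = {"vm-a": 40.0, "vm-b": 40.0, "vm-c": 20.0}  # subscribed Gbit/s, sums to link
    print(allocate(link_capacity=100.0, subscriptions=subs, active={"vm-a", "vm-c"}))
    # vm-a and vm-c keep their shares and split vm-b's unused 40 Gbit/s 2:1.
```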
  • Patent number: 10554554
    Abstract: Embodiments of hybrid network processing load distribution in a computing device are disclosed therein. In one embodiment, a method includes receiving, at a main processor, an indication from the network interface controller to perform network processing operations for first and second packets in a queue of a virtual port of the network interface controller, and in response to receiving the request, assigning first and second cores for performing the network processing operations for the first and second packets, respectively. The method also includes performing the network processing operations at the first and second cores to effect processing and transmission of the first and second packets to first and second applications, respectively, both the first and second applications executing in a virtual machine hosted on the computing device.
    Type: Grant
    Filed: March 15, 2017
    Date of Patent: February 4, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dmitry Malloy, Alireza Dabagh, Gabriel Silva, Khoa To, Omar Cardona, Donald Stanwyck
  • Publication number: 20190007338
    Abstract: Micro-schedulers control bandwidth allocation for clients, each client subscribing to a respective predefined portion of bandwidth of an outgoing communication link. A macro-scheduler controls the micro-schedulers, by allocating the respective subscribed portion of bandwidth associated with each respective client that is active, by a predefined first deadline, with residual bandwidth that is unused by the respective clients being shared proportionately among respective active clients by a predefined second deadline, while minimizing coordination among micro-schedulers by the macro-scheduler periodically adjusting respective bandwidth allocations to each micro-scheduler.
    Type: Application
    Filed: September 6, 2018
    Publication date: January 3, 2019
    Inventors: Khoa To, Jitendra Padhye, George Varghese, Daniel Firestone
  • Patent number: 10097478
    Abstract: Micro-schedulers control bandwidth allocation for clients, each client subscribing to a respective predefined portion of bandwidth of an outgoing communication link. A macro-scheduler controls the micro-schedulers, by allocating the respective subscribed portion of bandwidth associated with each respective client that is active, by a predefined first deadline, with residual bandwidth that is unused by the respective clients being shared proportionately among respective active clients by a predefined second deadline, while minimizing coordination among micro-schedulers by the macro-scheduler periodically adjusting respective bandwidth allocations to each micro-scheduler.
    Type: Grant
    Filed: January 20, 2015
    Date of Patent: October 9, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Khoa To, Jitendra Padhye, George Varghese, Daniel Firestone
  • Publication number: 20180157515
    Abstract: Embodiments of network processing resource management in computing devices are disclosed therein. In one embodiment, a method includes receiving a request from a network interface controller to perform network processing operations at a first core of a main processor for packets assigned by the network interface controller to a queue of a virtual port of the network interface controller. The method also includes determining whether the first core has a utilization level higher than a threshold when performing the network processing operations to effect processing and transmission of the packets. If the first core has a utilization level higher than the threshold, the method includes issuing a command to the network interface to modify affinitization of the queue from the first core to a second core having a utilization level lower than the threshold.
    Type: Application
    Filed: March 15, 2017
    Publication date: June 7, 2018
    Inventors: Dmitry Malloy, Alireza Dabagh, Gabriel Silva, Khoa To, Omar Cardona, Donald Stanwyck
  • Publication number: 20180157514
    Abstract: Embodiments of network traffic management in a computing device are disclosed therein. In one embodiment, a method includes receiving, at a main processor, a request from a network interface controller to perform network processing operations for packets assigned by the network interface controller to a queue of a virtual port of the network interface controller. The method also includes, in response to receiving the request, causing one of multiple cores of the main processor with which the queue of the virtual port is affinitized to perform the network processing operations to effect processing and transmission of the packets to an application executing in a virtual machine hosted on the computing device.
    Type: Application
    Filed: March 15, 2017
    Publication date: June 7, 2018
    Inventors: Dmitry Malloy, Alireza Dabagh, Gabriel Silva, Khoa To, Omar Cardona, Donald Stanwyck
  • Publication number: 20180159771
    Abstract: Embodiments of hybrid network processing load distribution in a computing device are disclosed therein. In one embodiment, a method includes receiving, at a main processor, an indication from the network interface controller to perform network processing operations for first and second packets in a queue of a virtual port of the network interface controller, and in response to receiving the request, assigning first and second cores for performing the network processing operations for the first and second packets, respectively. The method also includes performing the network processing operations at the first and second cores to effect processing and transmission of the first and second packets to first and second applications, respectively, both the first and second applications executing in a virtual machine hosted on the computing device.
    Type: Application
    Filed: March 15, 2017
    Publication date: June 7, 2018
    Inventors: Dmitry Malloy, Alireza Dabagh, Gabriel Silva, Khoa To, Omar Cardona, Donald Stanwyck
  • Publication number: 20160212065
    Abstract: Micro-schedulers control bandwidth allocation for clients, each client subscribing to a respective predefined portion of bandwidth of an outgoing communication link. A macro-scheduler controls the micro-schedulers, by allocating the respective subscribed portion of bandwidth associated with each respective client that is active, by a predefined first deadline, with residual bandwidth that is unused by the respective clients being shared proportionately among respective active clients by a predefined second deadline, while minimizing coordination among micro-schedulers by the macro-scheduler periodically adjusting respective bandwidth allocations to each micro-scheduler.
    Type: Application
    Filed: January 20, 2015
    Publication date: July 21, 2016
    Inventors: Khoa To, Jitendra Padhye, George Varghese, Daniel Firestone