Patents by Inventor Eric S. Chung

Eric S. Chung has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180205785
    Abstract: A server system is provided that includes a plurality of servers, each server including at least one hardware acceleration device and at least one processor communicatively coupled to the hardware acceleration device by an internal data bus and executing a host server instance, the host server instances of the plurality of servers collectively providing a software plane, and the hardware acceleration devices of the plurality of servers collectively providing a hardware acceleration plane that implements a plurality of hardware accelerated services, wherein each hardware acceleration device maintains in memory a data structure that contains load data indicating a load of each of a plurality of target hardware acceleration devices, and wherein a requesting hardware acceleration device routes a request to a target hardware acceleration device that is indicated by the load data in the data structure to have a lower load than the other target hardware acceleration devices.
    Type: Application
    Filed: January 17, 2017
    Publication date: July 19, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Adrian Michael Caulfield, Eric S. Chung, Michael Konstantinos Papamichael, Douglas C. Burger, Shlomi Alkalay
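    A minimal Python sketch of the load-based routing described in the abstract above; the table layout, device names, and load values are illustrative assumptions, not taken from the filing.
      # The requesting device keeps a load table in memory and routes a
      # request to the target that the table shows as least loaded.
      def route_request(load_table):
          """Return the target device id with the lowest recorded load."""
          return min(load_table, key=load_table.get)

      loads = {"acc-0": 0.72, "acc-1": 0.31, "acc-2": 0.55}
      print(route_request(loads))  # -> acc-1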
  • Patent number: 10027543
    Abstract: The present invention extends to methods, systems, and computer program products for reconfiguring an acceleration component among interconnected acceleration components. Aspects of the invention facilitate reconfiguring an acceleration component among interconnected acceleration components using a higher-level software service. A manager or controller isolates an acceleration component by sending a message to one or more neighbor acceleration components instructing the one or more neighbor acceleration components to stop accepting communication from the acceleration component. The manager or controller can then shut down an application layer at the acceleration component for at least partial reconfiguration and close input/output (I/O) portions. After reconfiguration completes, communication between the acceleration component and the one or more neighbor acceleration components can resume.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: July 17, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sitaram V. Lanka, Adrian M. Caulfield, Eric S. Chung, Andrew R. Putnam, Douglas C. Burger, Derek T. Chiou
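    A minimal Python sketch of the isolation-and-reconfiguration sequence described in the abstract above; the class and method names are hypothetical stand-ins.
      class NeighborStub:
          """Stand-in for a neighbor component's traffic filter."""
          def __init__(self):
              self.blocked = set()
          def stop_accepting_from(self, component):
              self.blocked.add(component)
          def resume_accepting_from(self, component):
              self.blocked.discard(component)

      class AcceleratorStub:
          """Stand-in for the component being reconfigured."""
          def shutdown_application_layer(self):
              self.app_up = False
          def close_io(self):
              self.io_open = False
          def load(self, image):
              self.image, self.app_up, self.io_open = image, True, True

      def reconfigure(component, neighbors, new_image):
          for n in neighbors:
              n.stop_accepting_from(component)    # isolate the component
          component.shutdown_application_layer()  # shut down the application layer
          component.close_io()                    # close I/O portions
          component.load(new_image)               # (at least partial) reconfiguration
          for n in neighbors:
              n.resume_accepting_from(component)  # resume communication

      reconfigure(AcceleratorStub(), [NeighborStub()], "image-v2")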
  • Publication number: 20180191617
    Abstract: Systems and methods for flow control and congestion management of messages among acceleration components (ACs) configurable to accelerate a service are provided. An example system comprises a software plane including host components configured to execute instructions corresponding to a service and an acceleration plane including ACs configurable to accelerate the service. In a first mode, a sending AC is configured to, in response to receiving a first indication from a receiving AC, send subsequent packets corresponding to a first message associated with the service using a larger inter-packet gap than the inter-packet gap used for previous packets corresponding to the first message associated with the service, and in a second mode the sending AC is configured to, in response to receiving a second indication from the receiving AC, delay a transmission of a next packet corresponding to the first message associated with the service.
    Type: Application
    Filed: February 10, 2017
    Publication date: July 5, 2018
    Inventors: Adrian M. Caulfield, Eric S. Chung, Michael Papamichael
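    A minimal Python sketch of the two congestion-response modes described in the abstract above; the indication values, gap, and delay constants are assumptions for illustration.
      import time

      def send_message(packets, send, indication,
                       gap=0.0, mode1_gap=0.01, mode2_delay=0.05):
          """Send the packets of one message, reacting to receiver feedback."""
          for pkt in packets:
              signal = indication()            # feedback from the receiving AC
              if signal == "first_indication":
                  gap = max(gap, mode1_gap)    # mode 1: larger inter-packet gap
              elif signal == "second_indication":
                  time.sleep(mode2_delay)      # mode 2: delay the next packet
              send(pkt)
              time.sleep(gap)

      send_message([b"p0", b"p1", b"p2"], print, lambda: None)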
  • Publication number: 20180191609
    Abstract: Components, methods, and systems allowing acceleration components to transmit messages are provided. An acceleration component for use among a first plurality of acceleration components, associated with a first top-of-rack (TOR) switch, to transmit messages to other acceleration components in an acceleration plane configurable to provide service acceleration for a service is provided. The acceleration component includes a transport component configured to transmit a first point-to-point message to a second acceleration component, associated with a second TOR switch different from the first TOR switch, and to a third acceleration component, associated with a third TOR switch different from the first TOR switch and the second TOR switch. The transport component may be configured to broadcast a second point-to-point message to all of a second plurality of acceleration components associated with the second TOR switch and to all of a third plurality of acceleration components associated with the third TOR switch.
    Type: Application
    Filed: January 2, 2017
    Publication date: July 5, 2018
    Inventors: Adrian M. Caulfield, Eric S. Chung, Michael Papamichael
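    A minimal Python sketch of the transport behavior described in the abstract above; the TOR grouping, component ids, and message contents are hypothetical.
      class Transport:
          """Sends point-to-point messages and per-TOR-group broadcasts."""
          def __init__(self, tor_groups):
              self.tor_groups = tor_groups     # {tor_id: [acceleration component ids]}

          def send_point_to_point(self, ac_id, message):
              print(f"to {ac_id}: {message}")  # stand-in for the network send

          def broadcast(self, tor_ids, message):
              for tor in tor_ids:
                  for ac in self.tor_groups[tor]:
                      self.send_point_to_point(ac, message)

      t = Transport({"tor-2": ["ac-4", "ac-5"], "tor-3": ["ac-6"]})
      t.send_point_to_point("ac-6", "hello")
      t.broadcast(["tor-2", "tor-3"], "update")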
  • Patent number: 9983938
    Abstract: Aspects extend to methods, systems, and computer program products for locally restoring functionality at acceleration components. A role can be locally restored at an acceleration component when an error is self-detected at the acceleration component (e.g., by local monitoring logic). Locally restoring a role can include resetting internal state (application logic) of the acceleration component providing the role. Self-detection of errors and local restoration of a role is less resource intensive and more efficient than using external components (e.g., high-level services) to restore functionality at an acceleration component and/or to reset an entire graph. Monitoring logic at multiple acceleration components can locally reset roles in parallel to restore legitimate behavior of a graph.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: May 29, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephen F. Heil, Sitaram V. Lanka, Adrian M. Caulfield, Eric S. Chung, Andrew R. Putnam, Douglas C. Burger, Yi Xiao
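    A minimal Python sketch of the local self-detection and role reset described in the abstract above; the names and the error-detection hook are hypothetical.
      class Role:
          """Stand-in for a component's application logic (its role)."""
          def __init__(self):
              self.healthy = True
          def reset(self):
              self.healthy = True              # reset internal state locally

      def monitor_step(role, error_detected):
          """Locally restore the role when an error is self-detected."""
          if error_detected():
              role.reset()                     # no external service involved
          return role.healthy

      print(monitor_step(Role(), lambda: True))  # -> True after the local reset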
  • Patent number: 9847980
    Abstract: To protect customer data and provide increased workflow security for processing requested by a customer, a secure communicational channel can be established between a customer and one or more hardware accelerators such that even processes executing on a host computing device hosting such hardware accelerators are excluded from the secure communicational channel. An encrypted bitstream is provided to hardware accelerators and the hardware accelerators obtain therefrom cryptographic information supporting the secure communicational channel with the customer. Such cryptographic information is stored and used exclusively from within the hardware accelerator, rendering it inaccessible to processes executing on a host computing device. The cryptographic information can be a shared secret, an appropriate one of a pair of cryptographic keys, or other like cryptographic information.
    Type: Grant
    Filed: June 17, 2015
    Date of Patent: December 19, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Douglas Christopher Burger, Eric S. Chung, Kenneth Eguro
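    A minimal Python sketch of the idea that channel keys live only inside the accelerator; HMAC over SHA-256 stands in for the actual channel cryptography, and all names are hypothetical.
      import hashlib, hmac, os

      class AcceleratorChannel:
          def __init__(self, bitstream_secret):
              self.__key = bitstream_secret          # held only inside the device

          def seal(self, payload):
              tag = hmac.new(self.__key, payload, hashlib.sha256).digest()
              return tag + payload                   # the host relays opaque bytes

          def verify(self, sealed):
              tag, payload = sealed[:32], sealed[32:]
              expected = hmac.new(self.__key, payload, hashlib.sha256).digest()
              if not hmac.compare_digest(tag, expected):
                  raise ValueError("message not from the secured channel")
              return payload

      channel = AcceleratorChannel(os.urandom(32))
      print(channel.verify(channel.seal(b"customer query")))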
  • Publication number: 20170351321
    Abstract: Dynamic power routing is utilized to route power from other components, which are transitioned to lower power consuming states, in order to accommodate more efficient processing of computational tasks by hardware accelerators, thereby staying within electrical power thresholds that would otherwise not have accommodated simultaneous full-power operation of the other components and such hardware accelerators. Once a portion of a workflow is being processed by hardware accelerators, the workflow, or the hardware accelerators, can be self-throttling to stay within power thresholds, or they can be throttled by independent coordinators, including device-centric and system-wide coordinators.
    Type: Application
    Filed: August 25, 2017
    Publication date: December 7, 2017
    Inventors: Andrew R. Putnam, Douglas Christopher Burger, Stephen F. Heil, Eric S. Chung, Adrian M. Caulfield
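    A minimal Python sketch of the power-routing decision described in the abstract above; the wattages and component names are illustrative assumptions.
      def plan_power(threshold_w, accel_demand_w, components):
          """components: {name: (current_watts, low_power_watts)}."""
          budget = {name: cur for name, (cur, low) in components.items()}
          total = sum(budget.values()) + accel_demand_w
          for name, (cur, low) in components.items():
              if total <= threshold_w:
                  break
              budget[name] = low               # route power away from this component
              total -= cur - low
          throttle_accelerator = total > threshold_w
          return budget, throttle_accelerator

      print(plan_power(150, 60, {"cpu": (95, 40), "dram": (30, 20)}))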
  • Patent number: 9760159
    Abstract: Dynamic power routing is utilized to route power from other components, which are transitioned to lower power consuming states, in order to accommodate more efficient processing of computational tasks by hardware accelerators, thereby staying within electrical power thresholds that would otherwise not have accommodated simultaneous full-power operation of the other components and such hardware accelerators. Once a portion of a workflow is being processed by hardware accelerators, the workflow, or the hardware accelerators, can be self-throttling to stay within power thresholds, or they can be throttled by independent coordinators, including device-centric and system-wide coordinators.
    Type: Grant
    Filed: April 8, 2015
    Date of Patent: September 12, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Andrew R. Putnam, Douglas Christopher Burger, Stephen F. Heil, Eric S. Chung, Adrian M. Caulfield
  • Patent number: 9652327
    Abstract: Aspects extend to methods, systems, and computer program products for reassigning service functionality between acceleration components. Reassigning service functionality can be used to recover service acceleration for a service. Service acceleration can operate improperly due to performance degradation at an acceleration component. A role at the acceleration component having degraded performance can be assigned to another acceleration component to restore service acceleration for the service.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: May 16, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephen F. Heil, Sitaram V. Lanka, Adrian M. Caulfield, Eric S. Chung, Andrew R. Putnam, Douglas C. Burger, Yi Xiao
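    A minimal Python sketch of the role reassignment described in the abstract above; the health and spare-device bookkeeping is an illustrative assumption.
      def reassign_roles(assignments, healthy, spares):
          """Move each role off a degraded device onto a spare, if one exists."""
          for role, device in assignments.items():
              if not healthy.get(device, False) and spares:
                  assignments[role] = spares.pop(0)
          return assignments

      print(reassign_roles({"rank": "ac-1"}, {"ac-1": False, "ac-7": True}, ["ac-7"]))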
  • Patent number: 9606836
    Abstract: Specialized processing devices comprise both processing circuitry that is pre-configured to perform a discrete set of computing operations more quickly than generalized central processing units and network transport circuitry that communicationally couples each individual specialized processing device to a network as its own unique network client. Requests for hardware acceleration from workflows being executed by generalized central processing units of server computing devices are directed to hardware accelerators in accordance with a table associating available hardware accelerators with the computing operations they are optimized to perform. Load balancing, as well as dynamic modifications in available hardware accelerators, is accomplished through updates to such a table.
    Type: Grant
    Filed: June 9, 2015
    Date of Patent: March 28, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Douglas Christopher Burger, Adrian M. Caulfield, Eric S. Chung, Andrew R. Putnam
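    A minimal Python sketch of the accelerator table described in the abstract above; the operations, addresses, and queue depths are hypothetical.
      # Each operation maps to (network address, queue depth) entries; requests
      # go to the least-busy entry, and adding or removing an accelerator is
      # just an update to this table.
      accelerator_table = {
          "compress": [("10.0.0.11", 2), ("10.0.0.12", 5)],
          "rank":     [("10.0.0.13", 1)],
      }

      def pick_accelerator(operation):
          return min(accelerator_table[operation], key=lambda entry: entry[1])[0]

      print(pick_accelerator("compress"))  # -> 10.0.0.11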
  • Publication number: 20160373416
    Abstract: To protect customer data and provide increased workflow security for processing requested by a customer, a secure communicational channel can be established between a customer and one or more hardware accelerators such that even processes executing on a host computing device hosting such hardware accelerators are excluded from the secure communicational channel. An encrypted bitstream is provided to hardware accelerators and the hardware accelerators obtain therefrom cryptographic information supporting the secure communicational channel with the customer. Such cryptographic information is stored and used exclusively from within the hardware accelerator, rendering it inaccessible to processes executing on a host computing device. The cryptographic information can be a shared secret, an appropriate one of a pair of cryptographic keys, or other like cryptographic information.
    Type: Application
    Filed: June 17, 2015
    Publication date: December 22, 2016
    Inventors: Douglas Christopher Burger, Eric S. Chung, Kenneth Eguro
  • Publication number: 20160364271
    Abstract: Specialized processing devices comprise both processing circuitry that is pre-configured to perform a discrete set of computing operations more quickly than generalized central processing units and network transport circuitry that communicationally couples each individual specialized processing device to a network as its own unique network client. Requests for hardware acceleration from workflows being executed by generalized central processing units of server computing devices are directed to hardware accelerators in accordance with a table associating available hardware accelerators with the computing operations they are optimized to perform. Load balancing, as well as dynamic modifications in available hardware accelerators, is accomplished through updates to such a table.
    Type: Application
    Filed: June 9, 2015
    Publication date: December 15, 2016
    Inventors: Douglas Christopher Burger, Adrian M. Caulfield, Eric S. Chung, Andrew R. Putnam
  • Publication number: 20160306701
    Abstract: Aspects extend to methods, systems, and computer program products for locally restoring functionality at acceleration components. A role can be locally restored at an acceleration component when an error is self-detected at the acceleration component (e.g., by local monitoring logic). Locally restoring a role can include resetting internal state (application logic) of the acceleration component providing the role. Self-detection of errors and local restoration of a role is less resource intensive and more efficient than using external components (e.g., high-level services) to restore functionality at an acceleration component and/or to reset an entire graph. Monitoring logic at multiple acceleration components can locally reset roles in parallel to restore legitimate behavior of a graph.
    Type: Application
    Filed: June 26, 2015
    Publication date: October 20, 2016
    Inventors: Stephen F. Heil, Sitaram V. Lanka, Adrian M. Caulfield, Eric S. Chung, Andrew R. Putnam, Douglas C. Burger, Yi Xiao
  • Publication number: 20160306668
    Abstract: A data processing system is described herein that includes two or more software-driven host components that collectively provide a software plane. The data processing system further includes two or more hardware acceleration components that collectively provide a hardware acceleration plane. The hardware acceleration plane implements one or more services, including at least one multi-component service. The multi-component service has plural parts, and is implemented on a collection of two or more hardware acceleration components, where each hardware acceleration component in the collection implements a corresponding part of the multi-component service. Each hardware acceleration component in the collection is configured to interact with other hardware acceleration components in the collection without involvement from any host component. A function parsing component is also described herein that determines a manner of parsing a function into the plural parts of the multi-component service.
    Type: Application
    Filed: May 20, 2015
    Publication date: October 20, 2016
    Inventors: Stephen F. Heil, Adrian M. Caulfield, Douglas C. Burger, Andrew R. Putnam, Eric S. Chung
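    A minimal Python sketch of parsing a function into the parts of a multi-component service; the stage names and device ids are illustrative assumptions.
      def parse_function(stages, accelerators):
          """Assign each part of the multi-component service to one device."""
          return dict(zip(stages, accelerators))

      plan = parse_function(["feature-extract", "score", "rank"],
                            ["ac-0", "ac-1", "ac-2"])
      print(plan)  # the assigned devices then interact without host involvement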
  • Publication number: 20160306700
    Abstract: Aspects extend to methods, systems, and computer program products for reassigning service functionality between acceleration components. Reassigning service functionality can be used to recover service acceleration for a service. Service acceleration can operate improperly due to performance degradation at an acceleration component. A role at the acceleration component having degraded performance can be assigned to another acceleration component to restore service acceleration for the service.
    Type: Application
    Filed: June 26, 2015
    Publication date: October 20, 2016
    Inventors: Stephen F. Heil, Sitaram V. Lanka, Adrian M. Caulfield, Eric S. Chung, Andrew R. Putnam, Douglas C. Burger, Yi Xiao
  • Publication number: 20160308718
    Abstract: The present invention extends to methods, systems, and computer program products for reconfiguring an acceleration component among interconnected acceleration components. Aspects of the invention facilitate reconfiguring an acceleration component among interconnected acceleration components using a higher-level software service. A manager or controller isolates an acceleration component by sending a message to one or more neighbor acceleration components instructing the one or more neighbor acceleration components to stop accepting communication from the acceleration component. The manager or controller can then shut down an application layer at the acceleration component for at least partial reconfiguration and close input/output (I/O) portions.
    Type: Application
    Filed: June 26, 2015
    Publication date: October 20, 2016
    Inventors: Sitaram V. Lanka, Adrian M. Caulfield, Eric S. Chung, Andrew R. Putnam, Douglas C. Burger, Derek T. Chiou
  • Publication number: 20160308649
    Abstract: A service mapping component (SMC) is described herein for allocating services to hardware acceleration components in a data processing system based on different kinds of triggering events. The data processing system is characterized by a hardware acceleration plane that is made up of the hardware acceleration components, together with a software plane that is made up of a plurality of software-driven host components. The SMC is configured to select, in response to a triggering event, at least one hardware acceleration component in the hardware plane to perform a service, based on at least one mapping consideration and based on availability information. Each host component in the software plane is then configured to access the service on one or more of the selected hardware acceleration component(s) via an associated local hardware acceleration component, or via some other route.
    Type: Application
    Filed: May 20, 2015
    Publication date: October 20, 2016
    Inventors: Douglas C. Burger, Eric S. Chung, James R. Larus, Jan S. Gray, Andrew R. Putnam, Stephen F. Heil
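    A minimal Python sketch of a triggered service-to-accelerator mapping driven by availability and a single mapping consideration; all inputs are hypothetical.
      def map_service(service, availability, preference):
          """Pick an available device for the service, highest preference first."""
          free = [d for d, ok in availability.items() if ok]
          free.sort(key=lambda d: preference.get(d, 0), reverse=True)
          return {service: free[0]} if free else {}

      print(map_service("ranking",
                        {"ac-0": False, "ac-1": True, "ac-2": True},
                        {"ac-2": 10, "ac-1": 3}))  # -> {'ranking': 'ac-2'}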
  • Publication number: 20160299553
    Abstract: Dynamic power routing is utilized to route power from other components, which are transitioned to lower power consuming states, in order to accommodate more efficient processing of computational tasks by hardware accelerators, thereby staying within electrical power thresholds that would otherwise not have accommodated simultaneous full-power operation of the other components and such hardware accelerators. Once a portion of a workflow is being processed by hardware accelerators, the workflow, or the hardware accelerators, can be self-throttling to stay within power thresholds, or they can be throttled by independent coordinators, including device-centric and system-wide coordinators.
    Type: Application
    Filed: April 8, 2015
    Publication date: October 13, 2016
    Inventors: Andrew R. Putnam, Douglas Christopher Burger, Stephen F. Heil, Eric S. Chung, Adrian M. Caulfield
  • Publication number: 20140351239
    Abstract: A hardware device is used to accelerate query operators including Where, Select, SelectMany, Aggregate, Join, GroupBy and GroupByAggregate. A program that includes query operators is processed to create a query plan. A hardware template associated with the query operators in the query plan is used to configure the hardware device to implement each query operator. The hardware device can be configured to operate in one or more of a partition mode, hash table mode, filter and map mode, and aggregate mode according to the hardware template. During the various modes, configurable cores are used to implement aspects of the query operators including user-defined lambda functions. The memory structures in the hardware device are also configurable and used to implement aspects of the query operators. The hardware device can be implemented using a Field Programmable Gate Array or an Application Specific Integrated Circuit.
    Type: Application
    Filed: May 23, 2013
    Publication date: November 27, 2014
    Applicant: Microsoft Corporation
    Inventors: John D. Davis, Eric S. Chung
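    A minimal Python sketch of turning query operators into a plan of hardware modes; which operator maps to which mode is an illustrative assumption, not taken from the filing.
      OPERATOR_TO_MODE = {
          "Where": "filter and map",
          "Select": "filter and map",
          "Join": "hash table",
          "GroupBy": "partition",
          "Aggregate": "aggregate",
      }

      def build_query_plan(operators):
          """Pair each operator with the hardware mode it would configure."""
          return [(op, OPERATOR_TO_MODE[op]) for op in operators]

      print(build_query_plan(["Where", "GroupBy", "Aggregate"]))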