Patents by Inventor Douglas C. Burger

Douglas C. Burger has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10257342
    Abstract: Techniques are described for validating stateful app links. Validation can be performed when stateful app links are created, activated, shared, or at other times. Validation can be performed to determine whether a stateful app link has a dependency on a resource external to the mobile application. Validation can also be performed to detect other issues, such as security or privacy issues.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: April 9, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Oriana Riva, Suman Kumar Nath, Md Tanzirul Azim, Douglas C. Burger
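A hypothetical sketch of the kind of check described in patent 10257342 above: validating a stateful app link by flagging dependencies on resources external to the app. The StatefulAppLink fields and the "non-local URL" heuristic are assumptions made for illustration, not the patented method.

```python
# Hypothetical validation of a stateful app link: flag dependencies on
# resources outside the app. Field names and the URL heuristic are assumptions.

from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass
class StatefulAppLink:
    app_package: str
    target_page: str
    captured_resources: list[str]   # URIs the captured state refers to

def validation_issues(link: StatefulAppLink) -> list[str]:
    issues = []
    for uri in link.captured_resources:
        scheme = urlparse(uri).scheme
        if scheme in ("http", "https"):
            # The state references a resource external to the app itself,
            # which may break or leak data when the link is replayed.
            issues.append(f"external dependency: {uri}")
    return issues

link = StatefulAppLink("com.example.notes", "note/42",
                       ["content://com.example.notes/42",
                        "https://example.com/shared/42"])
print(validation_issues(link))
```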
  • Patent number: 10216555
    Abstract: Aspects extend to methods, systems, and computer program products for partially reconfiguring acceleration components. Partial reconfiguration can be implemented for any of a variety of reasons, including to address an error in functionality at the acceleration component or to update functionality at the acceleration component. During partial reconfiguration, connectivity can be maintained for any other functionality at the acceleration component untouched by the partial reconfiguration. Partial reconfiguration is more efficient to deploy than full reconfiguration of an acceleration component.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: February 26, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Derek T. Chiou, Sitaram V. Lanka, Adrian M. Caulfield, Andrew R. Putnam, Douglas C. Burger
  • Publication number: 20190057303
    Abstract: Processors and methods for neural network processing are provided. A method in a processor including a matrix vector unit is provided. The method includes receiving vector data and actuation vector data corresponding to at least one layer of a neural network model for processing using the matrix vector unit, where each of the digital values corresponding to the vector data and the actuation vector data is represented in a sign-magnitude format. The method further includes converting each of the digital values corresponding to at least one of the vector data or the actuation vector data to corresponding analog values, multiplying the vector data and the actuation vector data in an analog domain, and providing corresponding multiplication results in a digital domain.
    Type: Application
    Filed: August 18, 2017
    Publication date: February 21, 2019
    Inventor: Douglas C. Burger
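Publication 20190057303 above centers on representing values in sign-magnitude form before the analog-domain multiply. The sketch below illustrates only the sign-magnitude encode/decode step under assumed parameters (an 8-bit word with 1 sign bit and 7 magnitude bits); the analog multiplication is hardware-specific and is modeled here as ordinary arithmetic.

```python
# Minimal sketch of sign-magnitude encoding for a fixed word width.
# Assumption: 8-bit words (1 sign bit + 7 magnitude bits); the real hardware
# width and the analog multiply are not specified in the abstract.

MAG_BITS = 7

def to_sign_magnitude(value: int) -> int:
    """Pack a signed integer into a sign-magnitude word."""
    if not -(2**MAG_BITS - 1) <= value <= 2**MAG_BITS - 1:
        raise ValueError("value out of range for 7 magnitude bits")
    sign = 1 if value < 0 else 0
    return (sign << MAG_BITS) | abs(value)

def from_sign_magnitude(word: int) -> int:
    """Unpack a sign-magnitude word back to a signed integer."""
    sign = (word >> MAG_BITS) & 1
    magnitude = word & (2**MAG_BITS - 1)
    return -magnitude if sign else magnitude

def multiply_elementwise(vector, actuation_vector):
    """Model of the multiply step: decode, multiply, return digital results."""
    return [from_sign_magnitude(a) * from_sign_magnitude(b)
            for a, b in zip(vector, actuation_vector)]

if __name__ == "__main__":
    v = [to_sign_magnitude(x) for x in (3, -5, 12)]
    a = [to_sign_magnitude(x) for x in (-2, 4, 1)]
    print(multiply_elementwise(v, a))  # [-6, -20, 12]
```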
  • Patent number: 10198294
    Abstract: A service mapping component (SMC) is described herein for processing requests by instances of tenant functionality that execute on software-driven host components (or some other components) in a data processing system. The SMC is configured to apply at least one rule to determine whether a service requested by an instance of tenant functionality is to be satisfied by at least one of: a local host component, a local hardware acceleration component which is locally coupled to the local host component, and/or at least one remote hardware acceleration component that is indirectly accessible to the local host component via the local hardware acceleration component. In performing its analysis, the SMC can take into account various factors, such as whether or not the service corresponds to a line-rate service, latency-related considerations, security-related considerations, and so on.
    Type: Grant
    Filed: May 20, 2015
    Date of Patent: February 5, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Derek T. Chiou, Sitaram V. Lanka, Douglas C. Burger
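The SMC in patent 10198294 above applies rules to decide where a requested service runs. The sketch below is a hypothetical illustration of such a rule: the three placement targets and the factor categories (line rate, latency, security) come from the abstract, but the field names, thresholds, and decision logic are invented for the example.

```python
# Hypothetical rule-based placement decision inspired by the SMC abstract.
# The factors and thresholds below are illustrative assumptions, not the
# patented rule set.

from dataclasses import dataclass

@dataclass
class ServiceRequest:
    name: str
    line_rate: bool          # must the service keep up with the NIC line rate?
    max_latency_us: float    # latency budget for the request
    sensitive_data: bool     # security/privacy consideration

def choose_target(req: ServiceRequest) -> str:
    """Return one of the three placement options named in the abstract."""
    if req.line_rate:
        # Line-rate services are kept on the locally coupled accelerator.
        return "local hardware acceleration component"
    if req.sensitive_data or req.max_latency_us < 50:
        # Tight latency budgets or sensitive data stay on the local host.
        return "local host component"
    # Otherwise the request may be forwarded through the local accelerator
    # to a remote acceleration component.
    return "remote hardware acceleration component"

print(choose_target(ServiceRequest("compression", True, 10.0, False)))
```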
  • Patent number: 10198263
    Abstract: Apparatus and methods are disclosed for nullifying one or more registers identified in a target field of a nullification instruction. In some examples of the disclosed technology, an apparatus can include memory and one or more block-based processor cores configured to fetch and execute a plurality of instruction blocks. One of the cores can include a control unit configured, based at least in part on receiving a nullification instruction, to obtain a register identification of at least one of a plurality of registers, based on a target field of the nullification instruction. A write to the at least one register associated with the register identification is nullified. The nullification instruction is in a first instruction block of the plurality of instruction blocks. Based on the nullified write to the at least one register, a subsequent instruction is executed from a second, different instruction block.
    Type: Grant
    Filed: March 3, 2016
    Date of Patent: February 5, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Douglas C. Burger, Aaron L. Smith
  • Patent number: 10180840
    Abstract: Apparatus and methods are disclosed for dynamic nullification of memory access instructions, such as memory store instructions. In some examples of the disclosed technology, an apparatus can include memory and one or more block-based processor cores. One of the cores can include an execution unit configured to execute memory access instructions comprising a plurality of memory load and/or memory store instructions contained in an instruction block. The core can also include a hardware structure storing data for at least one predicate instruction in the instruction block, the data identifying whether one or more of the memory store instructions will issue if a condition of the predicate instruction is satisfied. The core may further include a control unit configured to control issuing of the memory access instructions to the execution unit based at least in part on the hardware structure data.
    Type: Grant
    Filed: December 23, 2015
    Date of Patent: January 15, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Douglas C. Burger, Aaron L. Smith
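Patent 10180840 above describes a hardware structure that records, per predicate instruction, which store instructions will issue when the predicate resolves. A small software model of that bookkeeping, using assumed data structures rather than the patented hardware, might look like this.

```python
# Toy model of predicate-gated store issue in an instruction block.
# The table layout is an assumption for illustration; the patent describes
# a hardware structure, not this Python dictionary.

# For each predicate instruction, map each outcome (True/False) to the set
# of store-instruction IDs that are allowed to issue on that outcome.
predicate_table = {
    "p0": {True: {"st1", "st2"}, False: {"st2"}},  # st1 is nullified if p0 is False
}

def stores_to_issue(predicate_id: str, outcome: bool,
                    all_stores: set[str]) -> tuple[set[str], set[str]]:
    """Return (stores that issue, stores that are dynamically nullified)."""
    issuing = predicate_table[predicate_id][outcome]
    nullified = all_stores - issuing
    return issuing, nullified

issue, nullify = stores_to_issue("p0", False, {"st1", "st2"})
print("issue:", issue, "nullify:", nullify)   # issue: {'st2'} nullify: {'st1'}
```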
  • Publication number: 20190012209
    Abstract: A service mapping component (SMC) is described herein for processing requests by instances of tenant functionality that execute on software-driven host components (or some other components) in a data processing system. The SMC is configured to apply at least one rule to determine whether a service requested by an instance of tenant functionality is to be satisfied by at least one of: a local host component, a local hardware acceleration component which is locally coupled to the local host component, and/or at least one remote hardware acceleration component that is indirectly accessible to the local host component via the local hardware acceleration component. In performing its analysis, the SMC can take into account various factors, such as whether or not the service corresponds to a line-rate service, latency-related considerations, security-related considerations, and so on.
    Type: Application
    Filed: September 11, 2018
    Publication date: January 10, 2019
    Inventors: Derek T. Chiou, Sitaram V. Lanka, Douglas C. Burger
  • Patent number: 10167800
    Abstract: Processors and methods for neural network processing are provided. A method includes receiving vector data corresponding to a layer of a neural network model, where each of the vector data has a value comprising at least one exponent. The method further includes first processing a first subset of the vector data to determine a first shared exponent for representing values in the first subset of the vector data in a block-floating point format and second processing a second subset of the vector data to determine a second shared exponent for representing values in the second subset of the vector data in a block-floating point format, such that no vector data from the second subset of the vector data influences a determination of the first shared exponent and no vector data from the first subset of the vector data influences a determination of the second shared exponent.
    Type: Grant
    Filed: August 18, 2017
    Date of Patent: January 1, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Eric S. Chung, Douglas C. Burger, Daniel Lo, Kalin Ovtcharov
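Patent 10167800 above computes a shared exponent independently for each subset of the vector data, so the subsets can be converted to block-floating point form without influencing each other. A minimal NumPy sketch of that idea follows, assuming a simple max-magnitude exponent policy and an 8-bit mantissa; the actual exponent-selection policy and widths are not specified in the abstract.

```python
# Minimal block-floating point sketch: each subset of the vector gets its own
# shared exponent, computed only from that subset (no cross-subset influence).
# The exponent policy and 8-bit mantissa width are assumptions.

import numpy as np

MANTISSA_BITS = 8

def shared_exponent(block: np.ndarray) -> int:
    """Pick one exponent for the block so every mantissa fits in range."""
    max_abs = np.max(np.abs(block))
    return int(np.floor(np.log2(max_abs))) + 1 if max_abs > 0 else 0

def to_bfp(block: np.ndarray) -> tuple[np.ndarray, int]:
    """Quantize a block to signed integer mantissas sharing one exponent."""
    exp = shared_exponent(block)
    scale = 2.0 ** exp / 2 ** (MANTISSA_BITS - 1)   # LSB weight for this block
    mantissas = np.round(block / scale).astype(np.int32)
    return mantissas, exp

vector = np.array([0.02, -0.5, 3.7, 0.25, 8.0, -1.5])
first, second = vector[:3], vector[3:]          # two independent subsets
print(to_bfp(first))    # exponent computed only from the first subset
print(to_bfp(second))   # exponent computed only from the second subset
```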
  • Publication number: 20180349196
    Abstract: A data processing system is described herein that includes two or more software-driven host components that collectively provide a software plane. The data processing system further includes two or more hardware acceleration components that collectively provide a hardware acceleration plane. The hardware acceleration plane implements one or more services, including at least one multi-component service. The multi-component service has plural parts, and is implemented on a collection of two or more hardware acceleration components, where each hardware acceleration component in the collection implements a corresponding part of the multi-component service. Each hardware acceleration component in the collection is configured to interact with other hardware acceleration components in the collection without involvement from any host component. A function parsing component is also described herein that determines a manner of parsing a function into the plural parts of the multi-component service.
    Type: Application
    Filed: August 9, 2018
    Publication date: December 6, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Stephen F. Heil, Adrian M. Caulfield, Douglas C. Burger, Andrew R. Putnam, Eric S. Chung
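Publication 20180349196 above introduces a function parsing component that splits a function into the parts of a multi-component service, each part implemented by one hardware acceleration component. The sketch below shows one hypothetical way to express such a partition as data; the stage names, the three-way split, and the fpga-N identifiers are invented for illustration.

```python
# Hypothetical description of a multi-component service: a function is parsed
# into parts, and each part is assigned to one acceleration component.
# Stage names and assignments are illustrative only.

from dataclasses import dataclass

@dataclass
class ServicePart:
    name: str                    # one part of the multi-component service
    acceleration_component: str  # which accelerator implements this part

def parse_function(function_name: str) -> list[ServicePart]:
    """Toy 'function parsing component': split a function into three parts."""
    stages = ["feature-extraction", "scoring", "post-processing"]
    return [ServicePart(f"{function_name}/{stage}", f"fpga-{i}")
            for i, stage in enumerate(stages)]

# The parts interact accelerator-to-accelerator, without host involvement,
# which in this model is simply the ordered hand-off between list entries.
for part in parse_function("document-ranking"):
    print(part)
```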
  • Publication number: 20180341488
    Abstract: Systems and methods are disclosed for block-based or Explicit Data Graph Execution (EDGE) processors that can predispatch instructions for a next instruction block before a current instruction block has committed. Instruction state, including instruction scheduler instruction state and other decoded control state, can be stored in one or more memories. As individual instructions of a current instruction block issue, instructions for a next instruction block can be fetched, decoded, and the generated instruction state stored in the memory at the now-unused instruction slot locations. The next instruction block can be determined speculatively or non-speculatively. Prior to committing the current instruction block, the instruction state for the next block is stored in one or more of the now-unused instruction slot locations.
    Type: Application
    Filed: May 26, 2017
    Publication date: November 29, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventor: Douglas C. Burger
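Publication 20180341488 above reuses instruction-window slots freed by already-issued instructions of the current block to hold decoded state for the next block before the current block commits. A toy model of that slot reuse follows; the window size and slot contents are invented for the example.

```python
# Toy model of predispatching the next block's decoded state into window
# slots freed by the current block. Window size and slot contents are assumed.

WINDOW_SIZE = 4
window = [f"cur_instr_{i}" for i in range(WINDOW_SIZE)]   # current block state
free_slots: list[int] = []

def issue(slot: int) -> None:
    """Issuing a current-block instruction frees its slot for reuse."""
    window[slot] = None
    free_slots.append(slot)

def predispatch(next_block: list[str]) -> None:
    """Store decoded state for the next block in the now-unused slots."""
    while free_slots and next_block:
        window[free_slots.pop(0)] = next_block.pop(0)

issue(0)
issue(2)
predispatch(["next_instr_0", "next_instr_1", "next_instr_2"])
print(window)   # next-block state sits in slots 0 and 2 before commit
```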
  • Publication number: 20180329708
    Abstract: Apparatus and methods are disclosed for nullifying memory store instructions and one or more registers identified in a target field of a nullification instruction. In some examples of the disclosed technology, an apparatus can include memory and one or more block-based processor cores configured to fetch and execute a plurality of instruction blocks. One of the cores can include a control unit configured, based at least in part on receiving a nullification instruction, to obtain an instruction identification for a memory access instruction of a plurality of memory access instructions and a register identification of at least one of a plurality of registers, based on first and second target fields of the nullification instruction. The at least one register and the memory access instruction associated with the instruction identification are nullified. Based on the nullified memory access instruction, a subsequent memory access instruction is executed.
    Type: Application
    Filed: July 23, 2018
    Publication date: November 15, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Douglas C. Burger, Aaron L. Smith
  • Publication number: 20180321996
    Abstract: Computer systems and methods for generating and interacting with a micro-service framework are provided. A micro-service corresponds to one or more deep link/API calls that carry out some particular function. A static analysis of an app is conducted from one or more starting sources of the app to identify one or more valid and feasible execution paths, as well as corresponding input parameters within the app. Each valid execution path with corresponding input parameters represents a “deep link” or “API” for that app. The information regarding the deep link is collected and stored as a micro-service in a micro-service catalog. A micro-service framework is implemented that receives a micro-service request (i.e., a request that the micro-service be carried out on behalf of a computer user) from a UX client and executes that micro-service request via execution of the deep link.
    Type: Application
    Filed: May 4, 2017
    Publication date: November 8, 2018
    Inventors: Oriana Riva, Suman K. Nath, Douglas C. Burger, Yongjian Hu
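Publication 20180321996 above stores each discovered deep link as a micro-service in a catalog and executes requests against it. Below is a hypothetical, much-simplified catalog and dispatch routine; the entry fields, the example app, and the way the deep link is "invoked" are assumptions, not the framework's actual API.

```python
# Hypothetical micro-service catalog built from discovered deep links.
# Field names and the dispatch mechanism are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class MicroService:
    app: str
    deep_link: str                 # execution path discovered by static analysis
    parameters: list[str] = field(default_factory=list)

catalog: dict[str, MicroService] = {
    "order_coffee": MicroService(
        app="coffee-app",
        deep_link="coffeeapp://order",      # placeholder deep link
        parameters=["size", "drink"],
    ),
}

def handle_request(name: str, **kwargs) -> str:
    """Toy framework entry point: look up the micro-service and 'invoke' it."""
    service = catalog[name]
    missing = [p for p in service.parameters if p not in kwargs]
    if missing:
        raise ValueError(f"missing parameters: {missing}")
    # A real framework would drive the app along the deep link's execution
    # path; here we just format the call.
    args = "&".join(f"{k}={v}" for k, v in kwargs.items())
    return f"{service.deep_link}?{args}"

print(handle_request("order_coffee", size="tall", drink="latte"))
```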
  • Patent number: 10095519
    Abstract: Apparatus and methods are disclosed for controlling instruction flow in block-based processor architectures. In one example of the disclosed technology, a processor includes an instruction block address register that stores an indexed address to a memory storing a plurality of instructions for an instruction block, the indexed address being inaccessible when the processor is in one or more unprivileged operational modes; one or more execution units configured to execute instructions for the instruction block; and a control unit configured to fetch and decode two or more of the plurality of instructions from the memory based on the indexed address.
    Type: Grant
    Filed: March 3, 2016
    Date of Patent: October 9, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Douglas C. Burger, Aaron L. Smith
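Patent 10095519 above gates access to the instruction block address register by privilege: the indexed address is readable only in privileged operational modes. A tiny software model of that access check follows; the mode names and the fault raised on unprivileged access are assumptions for the example.

```python
# Minimal model of a privileged instruction-block address register.
# Mode names and the fault raised on unprivileged access are assumptions.

class PrivilegeFault(Exception):
    pass

class InstructionBlockAddressRegister:
    def __init__(self, indexed_address: int):
        self._indexed_address = indexed_address   # index into instruction memory

    def read(self, mode: str) -> int:
        if mode != "privileged":
            # Inaccessible in unprivileged operational modes.
            raise PrivilegeFault("block address register is privileged")
        return self._indexed_address

reg = InstructionBlockAddressRegister(0x2000)
print(hex(reg.read("privileged")))     # 0x2000
try:
    reg.read("user")
except PrivilegeFault as e:
    print("fault:", e)
```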
  • Patent number: 10080495
    Abstract: Methods and systems for determining a physiological parameter of a subject through interrogation of an eye of the subject with an optical signal are described. Interrogation is performed unobtrusively. The physiological parameter is determined from a signal sensed from the eye of a subject when the eye of the subject is properly aligned with regard to an interrogation signal source and/or response signal sensor.
    Type: Grant
    Filed: September 26, 2014
    Date of Patent: September 25, 2018
    Assignee: Elwha LLC
    Inventors: Allen L. Brown, Jr., Douglas C. Burger, Eric Horvitz, Roderick A. Hyde, Edward K. Y. Jung, Jordin T. Kare, Chris Demetrios Karkanias, Eric C. Leuthardt, John L. Manferdelli, Craig J. Mundie, Nathan P. Myhrvold, Barney Pell, Clarence T. Tegreene, Willard H. Wattenburg, Charles Whitmer, Lowell L. Wood, Jr., Richard N. Zare
  • Publication number: 20180267807
    Abstract: Systems and methods are disclosed for supporting debugging of programs in block-based processor architectures. In one example of the disclosed technology, a processor includes an exception event handler, a memory interface, and at least one block-based processor core coupled to the memory interface. The core is configured, responsive to receiving an exception event signal while executing a first instruction block, to store state data for the core generated by executing the instruction block, transfer control of the core to a second instruction block, and resume execution of the first instruction block by restoring state for the processor core from the stored state data.
    Type: Application
    Filed: May 15, 2017
    Publication date: September 20, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Douglas C. Burger, Gagan Gupta
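Publication 20180267807 above saves a core's state when an exception event arrives mid-block, transfers control to a second (handler) instruction block, then restores the saved state and resumes the original block. A high-level software analogue of that save/transfer/resume sequence, with invented state fields, is sketched below; the patent targets hardware state in a block-based processor core, not a Python dictionary.

```python
# Software analogue of the save / transfer / resume flow in the abstract.
# The state fields (pc, operand_buffer) are invented for illustration.

saved_state = None

def on_exception_event(core_state: dict, handler_block) -> dict:
    """Save state, run the handler block, then restore and resume."""
    global saved_state
    saved_state = dict(core_state)        # store state data for the core
    handler_block()                       # transfer control to the second block
    restored = dict(saved_state)          # restore state from the stored data
    saved_state = None
    return restored                       # execution of the first block resumes

def debug_handler():
    print("handler block: inspecting the faulting instruction block")

state = {"pc": 0x40, "operand_buffer": [7, 9]}
print(on_exception_event(state, debug_handler))
```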
  • Patent number: 10079929
    Abstract: Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance a user's ability to operate or function in a transportation-related context as a pedestrian or a vehicle operator. In one embodiment, the AEFS is configured to perform vehicular threat detection based on information received at a road-based device, such as a sensor or processor that is deployed at the side of a road. An example AEFS receives, at a road-based device, information about a first vehicle that is proximate to the road-based device. The AEFS analyzes the received information to determine threat information, such as that the vehicle may collide with the user. The AEFS then informs the user of the determined threat information, such as by transmitting a warning to a wearable device configured to present the warning to the user.
    Type: Grant
    Filed: June 9, 2016
    Date of Patent: September 18, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Richard T. Lord, Robert W. Lord, Nathan P. Myhrvold, Clarence T. Tegreene, Roderick A. Hyde, Lowell L. Wood, Jr., Muriel Y. Ishikawa, Victoria Y. H. Wood, Charles Whitmer, Paramvir Bahl, Douglas C. Burger, Ranveer Chandra, William H. Gates, III, Paul Holman, Jordin T. Kare, Craig J. Mundie, Tim Paek, Desney S. Tan, Lin Zhong, Matthew G. Dyor
  • Publication number: 20180247186
    Abstract: Hardware and methods for neural network processing are provided. A method in a hardware node including a pipeline having a matrix vector unit (MVU), a first multifunction unit connected to receive an input from the matrix vector unit, a second multifunction unit connected to receive an output from the first multifunction unit, and a third multifunction unit connected to receive an output from the second multifunction unit is provided. The method includes performing, using the MVU, a first type of instruction that can only be performed by the MVU to generate a first result. The method further includes performing a second type of instruction that can only be performed by one of the multifunction units to generate a second result and, without storing either of the two results in a global register, passing the second result to the second multifunction unit and the third multifunction unit.
    Type: Application
    Filed: June 29, 2017
    Publication date: August 30, 2018
    Inventors: Jeremy Fowers, Eric S. Chung, Douglas C. Burger
  • Publication number: 20180247185
    Abstract: Processors and methods for neural network processing are provided. A method in a processor including a pipeline having a matrix vector unit (MVU), a first multifunction unit connected to receive an input from the matrix vector unit, a second multifunction unit connected to receive an output from the first multifunction unit, and a third multifunction unit connected to receive an output from the second multifunction unit is provided. The method includes decoding a chain of instructions received via an input queue, where the chain of instructions comprises a first instruction that can only be processed by the matrix vector unit and a sequence of instructions that can only be processed by a multifunction unit. The method includes processing the first instruction using the MVU and processing each of the instructions in the sequence depending upon its position in the sequence of instructions.
    Type: Application
    Filed: June 29, 2017
    Publication date: August 30, 2018
    Inventors: Eric S. Chung, Douglas C. Burger, Jeremy Fowers
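Publication 20180247185 above decodes a chain whose first instruction can only run on the matrix vector unit and whose remaining instructions run on multifunction units chosen by position in the chain. The sketch below models that positional dispatch in software; the pipeline order follows the abstract, while the unit names and the instruction strings are assumptions.

```python
# Positional dispatch of an instruction chain, modeled in software.
# The pipeline order (MVU -> MFU1 -> MFU2 -> MFU3) follows the abstract;
# the instruction names themselves are invented for illustration.

PIPELINE = ["MVU", "MFU1", "MFU2", "MFU3"]

def dispatch_chain(chain: list[str]) -> list[tuple[str, str]]:
    """Map the first instruction to the MVU and the rest to MFUs by position."""
    assignments = []
    for position, instruction in enumerate(chain):
        if position >= len(PIPELINE):
            raise ValueError("chain longer than the pipeline modeled here")
        assignments.append((instruction, PIPELINE[position]))
    return assignments

chain = ["matrix_vector_multiply", "add_bias", "sigmoid"]
for instr, unit in dispatch_chain(chain):
    print(f"{instr} -> {unit}")
```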
  • Publication number: 20180247190
    Abstract: Systems and methods for neural network processing are provided. A method in a system comprising a plurality of nodes interconnected via a network, where each node includes a plurality of on-chip memory blocks and a plurality of compute units, is provided. The method includes, upon service activation, receiving an N by M matrix of coefficients corresponding to a neural network model. The method includes loading the coefficients corresponding to the neural network model into the plurality of the on-chip memory blocks for processing by the plurality of compute units. The method includes, regardless of a utilization of the plurality of the on-chip memory blocks as part of an evaluation of the neural network model, maintaining the coefficients corresponding to the neural network model in the plurality of the on-chip memory blocks until the service is interrupted or the neural network model is modified or replaced.
    Type: Application
    Filed: June 29, 2017
    Publication date: August 30, 2018
    Inventors: Eric S. Chung, Douglas C. Burger, Jeremy Fowers, Kalin Ovtcharov
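Publication 20180247190 above loads the model's coefficient matrix into on-chip memory blocks at service activation and keeps it resident until the service stops or the model changes. The sketch below captures that lifecycle in a small software model; the row-block size, the NodeMemory class, and the matrix-vector evaluation are assumptions, not the described hardware.

```python
# Software model of pinning an N x M coefficient matrix in on-chip memory
# blocks for the lifetime of the service. Block size is an assumed parameter.

import numpy as np

class NodeMemory:
    """Toy stand-in for one node's on-chip memory blocks."""

    def __init__(self, block_rows: int = 128):
        self.block_rows = block_rows
        self.blocks = None

    def activate_service(self, coefficients: np.ndarray) -> None:
        """Split the N x M matrix into row blocks and keep them resident."""
        n = coefficients.shape[0]
        self.blocks = [coefficients[i:i + self.block_rows]
                       for i in range(0, n, self.block_rows)]

    def evaluate(self, x: np.ndarray) -> np.ndarray:
        """Use the resident blocks; they are never reloaded between requests."""
        return np.concatenate([block @ x for block in self.blocks])

    def interrupt_service(self) -> None:
        """Only now are the coefficients released."""
        self.blocks = None

node = NodeMemory()
node.activate_service(np.random.rand(256, 64))
print(node.evaluate(np.random.rand(64)).shape)   # (256,)
```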
  • Publication number: 20180247187
    Abstract: Processors and methods for neural network processing are provided. A method in a processor including a pipeline having a matrix vector unit (MVU), a first multifunction unit connected to receive an input from the MVU, a second multifunction unit connected to receive an output from the first multifunction unit, and a third multifunction unit connected to receive an output from the second multifunction unit is provided. The method includes decoding instructions including a first type of instruction for processing by only the MVU and a second type of instruction for processing by only one of the multifunction units. The method includes mapping a first instruction for processing either to the MVU or to any one of the first multifunction unit, the second multifunction unit, or the third multifunction unit, depending on whether the first instruction is the first type of instruction or the second type of instruction.
    Type: Application
    Filed: June 29, 2017
    Publication date: August 30, 2018
    Inventors: Eric S. Chung, Douglas C. Burger, Jeremy Fowers