Patents by Inventor Henry Mitchel

Henry Mitchel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190065290
    Abstract: Technologies for providing efficient reprovisioning in an accelerator device include an accelerator sled. The accelerator sled includes a memory and an accelerator device coupled to the memory. The accelerator device is to configure itself with a first bit stream to establish a first kernel, execute the first kernel to produce output data, write the output data to the memory, configure itself with a second bit stream to establish a second kernel, and execute the second kernel with the output data in the memory used as input data to the second kernel. Other embodiments are also described and claimed.
    Type: Application
    Filed: December 30, 2017
    Publication date: February 28, 2019
    Inventors: Evan Custodio, Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel
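The reprovisioning flow in the abstract above can be sketched as a short pipeline: configure a first kernel, run it, leave its output in shared memory, then reconfigure and run a second kernel over that output. This is a minimal illustrative model, not the patented implementation; the `Accelerator` class and dict-backed memory are assumptions.

```python
class Accelerator:
    """Toy model of an accelerator device that reconfigures itself
    with "bit streams" (modeled here as plain Python callables)."""

    def __init__(self, memory):
        self.memory = memory  # shared memory region, modeled as a dict
        self.kernel = None

    def configure(self, bit_stream):
        # Configuring the device installs the kernel the bit stream carries.
        self.kernel = bit_stream

    def execute(self, in_key, out_key):
        # Run the current kernel on data in memory; write output back.
        self.memory[out_key] = self.kernel(self.memory[in_key])


def reprovision_pipeline(acc, first_kernel, second_kernel):
    # First kernel writes its output into the sled's memory...
    acc.configure(first_kernel)
    acc.execute("input", "intermediate")
    # ...then the device reconfigures, and the second kernel consumes
    # that output in place as its input.
    acc.configure(second_kernel)
    acc.execute("intermediate", "output")
    return acc.memory["output"]
```

Keeping the intermediate result in sled memory is what lets the second kernel start without copying data off the sled.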
  • Publication number: 20180150334
    Abstract: Technologies for providing accelerated functions as a service in a disaggregated architecture include a compute device that is to receive a request for an accelerated task. The task is associated with a kernel usable by an accelerator sled communicatively coupled to the compute device to execute the task. The compute device is further to determine, in response to the request and with a database indicative of kernels and associated accelerator sleds, an accelerator sled that includes an accelerator device configured with the kernel associated with the request. Additionally, the compute device is to assign the task to the determined accelerator sled for execution.
    Type: Application
    Filed: September 29, 2017
    Publication date: May 31, 2018
    Inventors: Francesc Guim Bernat, Evan Custodio, Susanne M. Balle, Joe Grecco, Henry Mitchel, Rahul Khanna, Slawomir Putyrski, Sujoy Sen, Paul Dormitzer
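The database lookup described above amounts to finding a sled whose accelerator is already configured with the requested kernel, then assigning the task there. A minimal sketch, with an assumed dict-of-sets database (the real system's schema is not specified in the abstract):

```python
def find_sled(kernel_db, requested_kernel):
    """Return the first accelerator sled already configured with the
    requested kernel, or None if no sled has it."""
    for sled, kernels in kernel_db.items():
        if requested_kernel in kernels:
            return sled
    return None


def assign_task(kernel_db, assignments, task, kernel):
    # Determine the sled via the kernel database, then record the assignment.
    sled = find_sled(kernel_db, kernel)
    if sled is None:
        raise LookupError(f"no sled configured with kernel {kernel!r}")
    assignments.setdefault(sled, []).append(task)
    return sled
```

Assigning to an already-configured sled avoids paying the reconfiguration cost on the request path.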
  • Publication number: 20180150298
    Abstract: Technologies for offloading acceleration task scheduling operations to accelerator sleds include a compute device to receive a request from a compute sled to accelerate the execution of a job, which includes a set of tasks. The compute device is also to analyze the request to generate metadata indicative of the tasks within the job, a type of acceleration associated with each task, and a data dependency between the tasks. Additionally, the compute device is to send an availability request, including the metadata, to one or more micro-orchestrators of one or more accelerator sleds communicatively coupled to the compute device. The compute device is further to receive availability data from the one or more micro-orchestrators, indicative of which of the tasks the micro-orchestrator has accepted for acceleration on the associated accelerator sled. Additionally, the compute device is to assign the tasks to the one or more micro-orchestrators as a function of the availability data.
    Type: Application
    Filed: September 30, 2017
    Publication date: May 31, 2018
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Rahul Khanna, Evan Custodio
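The availability handshake above can be sketched as a two-step exchange: the compute device broadcasts task metadata, each micro-orchestrator answers with the tasks it accepts, and tasks are assigned accordingly. The acceptance policy below (match by acceleration type) is an assumption for illustration only.

```python
class MicroOrchestrator:
    """Per-sled orchestrator that decides which tasks it can accelerate."""

    def __init__(self, sled_id, supported_types):
        self.sled_id = sled_id
        self.supported = set(supported_types)

    def availability(self, metadata):
        # Accept any task whose acceleration type this sled supports.
        return {task_id for task_id, accel_type in metadata.items()
                if accel_type in self.supported}


def schedule(job_metadata, orchestrators):
    """Assign each task to the first micro-orchestrator that accepted it."""
    assignments = {}
    for orch in orchestrators:
        for task_id in orch.availability(job_metadata):
            assignments.setdefault(task_id, orch.sled_id)
    return assignments
```

Pushing the accept/reject decision down to the sleds is what offloads scheduling work from the central compute device.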
  • Publication number: 20180150330
    Abstract: A compute device to manage workflow to disaggregated computing resources is provided. The compute device comprises a compute engine to receive a workload processing request, the workload processing request defined by at least one request parameter, determine at least one accelerator device capable of processing a workload in accordance with the at least one request parameter, transmit a workload to the at least one accelerator device, receive a work product produced by the at least one accelerator device from the workload, and provide the work product to an application.
    Type: Application
    Filed: September 30, 2017
    Publication date: May 31, 2018
    Inventors: Francesc Guim Bernat, Evan Custodio, Susanne M. Balle, Joe Grecco, Henry Mitchel, Slawomir Putyrski
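The workflow in the abstract above is, at its core, a capability match followed by a dispatch and a result hand-back. A minimal sketch, assuming a simple key/value capability model for accelerators (the patent does not specify one):

```python
def select_accelerators(request_params, accelerators):
    """Return accelerators whose capabilities satisfy every request parameter."""
    return [name for name, caps in accelerators.items()
            if all(caps.get(k) == v for k, v in request_params.items())]


def process_workload(request_params, accelerators, workload, run):
    # Determine a capable accelerator, transmit the workload to it,
    # and hand the work product back to the calling application.
    matches = select_accelerators(request_params, accelerators)
    if not matches:
        raise LookupError("no accelerator satisfies the request parameters")
    return run(matches[0], workload)
```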
  • Publication number: 20180150299
    Abstract: Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute engine is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
    Type: Application
    Filed: September 30, 2017
    Publication date: May 31, 2018
    Inventors: Susanne M. Balle, Francesc Guim Bernat, Slawomir Putyrski, Joe Grecco, Henry Mitchel, Evan Custodio, Rahul Khanna, Sujoy Sen
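The job-division step above can be illustrated with a weighted round-robin split: each device's configuration contributes a "slots" figure, and tasks are dealt out in proportion. Both the slots model and the merge step are assumptions for the sketch, not the claimed analysis.

```python
def divide_job(job_items, device_slots):
    """Split job_items into per-device task lists, round-robin
    weighted by each device's configured slot count."""
    order = [dev for dev, slots in device_slots.items() for _ in range(slots)]
    tasks = {dev: [] for dev in device_slots}
    for i, item in enumerate(job_items):
        tasks[order[i % len(order)]].append(item)
    return tasks


def execute_parallel(tasks, kernel):
    # Each device "executes" its task list; per-device outputs together
    # form the output of the job.
    return {dev: [kernel(x) for x in items] for dev, items in tasks.items()}
```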
  • Publication number: 20180150391
    Abstract: Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request is to identify the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
    Type: Application
    Filed: September 30, 2017
    Publication date: May 31, 2018
    Inventors: Henry Mitchel, Joe Grecco, Sujoy Sen, Francesc Guim Bernat, Susanne M. Balle, Evan Custodio, Paul Dormitzer
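The translate-then-route step above can be sketched with a page-granular logical-to-physical map and a set of device address ranges. The 4 KiB page size and the range-keyed device table are assumptions; the abstract specifies neither.

```python
PAGE = 4096  # assumed translation granularity


def translate(addr_map, logical_addr):
    """Look up the physical base for the page containing logical_addr
    and add the in-page offset."""
    page = logical_addr // PAGE
    if page not in addr_map:
        raise KeyError(f"unmapped logical page {page}")
    return addr_map[page] + logical_addr % PAGE


def route_access(addr_map, devices, logical_addr):
    # Route the memory access request to the device whose physical
    # address range contains the translated address.
    phys = translate(addr_map, logical_addr)
    for (lo, hi), device in devices.items():
        if lo <= phys < hi:
            return device, phys
    raise KeyError(f"no device backs physical address {phys:#x}")
```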
  • Patent number: 7668139
    Abstract: Methods and devices are disclosed for communicating in a wireless network using multi-protocol label switching (MPLS). A network service node is configured to send identical packets substantially simultaneously to each of a serving network access station and one or more target network access stations via two or more respective MPLS tunnels in response to a handoff trigger message. Additional embodiments and variations are also disclosed.
    Type: Grant
    Filed: March 23, 2005
    Date of Patent: February 23, 2010
    Assignee: Intel Corporation
    Inventors: Henry Mitchel, James (JR-Shian) Tsai, Gerald Lebizay, Prakash Iyer, Asher Altman, Farid Adrangi, Alan Stone
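The handoff bicasting described above can be modeled as: after a handoff trigger names the target stations, every packet is copied down the serving tunnel and each target tunnel. The `Tunnel` and `ServiceNode` classes are illustrative stand-ins for MPLS label-switched paths and the network service node, not the patented design.

```python
class Tunnel:
    """Stand-in for an MPLS tunnel to one network access station."""

    def __init__(self, station):
        self.station = station
        self.delivered = []

    def send(self, packet):
        self.delivered.append(packet)


class ServiceNode:
    def __init__(self, serving_tunnel):
        self.serving = serving_tunnel
        self.targets = []

    def on_handoff_trigger(self, target_tunnels):
        # The trigger message names the candidate target stations.
        self.targets = list(target_tunnels)

    def forward(self, packet):
        # Identical copies go to the serving station and every target,
        # so no packets are lost while the mobile switches stations.
        for tunnel in [self.serving, *self.targets]:
            tunnel.send(packet)
```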
  • Patent number: 7412536
    Abstract: A method and system for a network node for attachment to switch fabrics is described. The system includes an access unit to provide access to communications from an external network, a classification element to label received packets with information identifying an associated flow and queue, a mapping element to place the packets into one of a plurality of queues based on the label identifiers, a scheduler to schedule packets in the queues for transmission, and an encapsulation element to encapsulate the scheduled packets into uniform size frames. The uniform size frames may then be transmitted to a next destination through a switch fabric.
    Type: Grant
    Filed: June 27, 2003
    Date of Patent: August 12, 2008
    Assignee: Intel Corporation
    Inventors: Neal C. Oliver, David Gish, Gerald Lebizay, Henry Mitchel, Brian Peebles, Alan Stone
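The classify, queue, schedule, and encapsulate stages above can be sketched end to end. The destination-based flow rules, the sorted-queue drain order, and the 64-byte frame size are all assumptions; the abstract only requires that scheduled packets end up in uniform-size frames.

```python
FRAME_SIZE = 64  # assumed uniform frame payload size, in bytes


def classify(packet, flow_rules):
    # Label the packet with a flow/queue id based on its destination.
    return flow_rules.get(packet["dst"], 0)


def enqueue(queues, packet, flow_rules):
    queues.setdefault(classify(packet, flow_rules), []).append(packet)


def schedule_and_encapsulate(queues):
    """Drain queues in id order and pad each payload to a uniform frame."""
    frames = []
    for qid in sorted(queues):
        for pkt in queues[qid]:
            payload = pkt["data"][:FRAME_SIZE]
            frames.append(payload + b"\x00" * (FRAME_SIZE - len(payload)))
    return frames
```

Uniform frames simplify the switch fabric: every element downstream of encapsulation handles one fixed unit size.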
  • Publication number: 20070070905
    Abstract: Methods and systems for communicating in a wireless network include determining an average size (M) of packets allocated for a particular subscriber station in an air frame, and adjusting a size of packets for that particular subscriber station to be packed into one or more subsequent air frames to be substantially equal to size M. The method may further include arranging incoming data segments and fragmenting one or more of the incoming data segments into the packets of approximately size M. In one implementation, the incoming data segments may be media access controller (MAC) service data units (SDUs) and the packets may be MAC protocol data units (PDUs). Various specific embodiments and variations are also disclosed.
    Type: Application
    Filed: September 26, 2005
    Publication date: March 29, 2007
    Inventors: Neal Oliver, Henry Mitchel
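The two steps in the abstract above, compute the average allocated size M and fragment incoming SDUs into PDUs of roughly size M, can be sketched directly. The greedy fixed-size fragmentation shown is an assumed policy, not necessarily the claimed method.

```python
def average_size(allocated_packets):
    """Integer mean (M) of the packet sizes allocated to a subscriber
    station in an air frame."""
    return sum(allocated_packets) // len(allocated_packets)


def fragment(sdus, m):
    """Split each incoming SDU (bytes) into PDUs of size at most M,
    so subsequent air frames carry packets of approximately size M."""
    pdus = []
    for sdu in sdus:
        for i in range(0, len(sdu), m):
            pdus.append(sdu[i:i + m])
    return pdus
```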
  • Publication number: 20060215607
    Abstract: Methods and devices are disclosed for communicating in a wireless network using multi-protocol label switching (MPLS). A network service node is configured to send identical packets substantially simultaneously to each of a serving network access station and one or more target network access stations via two or more respective MPLS tunnels in response to a handoff trigger message. Additional embodiments and variations are also disclosed.
    Type: Application
    Filed: March 23, 2005
    Publication date: September 28, 2006
    Inventors: Henry Mitchel, James Tsai, Gerald Lebizay, Prakash Iyer, Asher Altman, Farid Adrangi, Alan Stone
  • Publication number: 20060215708
    Abstract: A method of operation in a communications node is disclosed. The method of operation includes combining a first of a plurality of latency sensitive signaling/control traffic with a first of a plurality of latency sensitive data, and transmitting the first latency sensitive signaling/control traffic in combination with the first latency sensitive data. Embodiments of the present invention include but are not limited to communications nodes and devices, subsystems, and systems equipped to operate in the above described manner.
    Type: Application
    Filed: March 24, 2005
    Publication date: September 28, 2006
    Inventors: Gerald Lebizay, Henry Mitchel
  • Publication number: 20040264472
    Abstract: A method and system for open-loop congestion control in a system fabric is described. The method includes determining which traffic class each received network packet belongs, determining a path to be taken by each packet through a switch fabric, classifying each packet into one of a plurality of flow bundles based on the packet's destination and path through the switch fabric, mapping each packet into one of a plurality of queues to await transmission based on the flow bundle to which the packet has been classified, and scheduling the packets in the queues for transmission to a next destination through the switch fabric.
    Type: Application
    Filed: June 27, 2003
    Publication date: December 30, 2004
    Inventors: Neal C. Oliver, David W. Gish, Gerald Lebizay, Henry Mitchel, Brian Peebles, Alan Stone
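The flow-bundle idea above groups packets that share both a destination and a path through the fabric, then maps each bundle to its own queue. A minimal sketch, where the route lookup table is an assumed stand-in for the fabric's path computation:

```python
def path_for(packet, routes):
    # Look up the switch-fabric path this packet's destination will take.
    return routes[packet["dst"]]


def classify_to_bundle(packet, routes):
    """A flow bundle is identified by (destination, fabric path)."""
    return (packet["dst"], path_for(packet, routes))


def enqueue_by_bundle(queues, packets, routes):
    # One queue per flow bundle; packets await scheduling in their queue.
    for pkt in packets:
        queues.setdefault(classify_to_bundle(pkt, routes), []).append(pkt)
    return queues
```

Because every packet in a bundle takes the same path, throttling one queue relieves exactly one congested route, which is what makes the open-loop control tractable.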
  • Publication number: 20040267948
    Abstract: A method and system for a network node for attachment to switch fabrics is described. The system includes an access unit to provide access to communications from an external network, a classification element to label received packets with information identifying an associated flow and queue, a mapping element to place the packets into one of a plurality of queues based on the label identifiers, a scheduler to schedule packets in the queues for transmission, and an encapsulation element to encapsulate the scheduled packets into uniform size frames. The uniform size frames may then be transmitted to a next destination through a switch fabric.
    Type: Application
    Filed: June 27, 2003
    Publication date: December 30, 2004
    Inventors: Neal C. Oliver, David Gish, Gerald Lebizay, Henry Mitchel, Brian Peebles, Alan Stone