Patents by Inventor Charles Edward Gray

Charles Edward Gray has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO). Several of the entries are related filings that share the same abstract; brief code sketches illustrating the two techniques those abstracts describe appear after the listing.

  • Patent number: 11829295
    Abstract: Techniques are described in which a system having multiple processing units processes a series of work units in a processing pipeline, where some or all of the work units access or manipulate data stored in non-coherent memory. In one example, this disclosure describes a method that includes identifying, prior to completing processing of a first work unit with a processing unit of a processor having multiple processing units, a second work unit that is expected to be processed by the processing unit after the first work unit. The method also includes processing the first work unit, and prefetching, from non-coherent memory, data associated with the second work unit into a second cache segment of the buffer cache, wherein prefetching the data associated with the second work unit occurs concurrently with at least a portion of the processing of the first work unit by the processing unit.
    Type: Grant
    Filed: February 27, 2023
    Date of Patent: November 28, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Wael Noureddine, Jean-Marc Frailong, Felix A. Marti, Charles Edward Gray, Paul Kim
  • Patent number: 11734179
    Abstract: Techniques are described in which a system having multiple processing units processes a series of work units in a processing pipeline, where some or all of the work units access or manipulate data stored in non-coherent memory. In one example, this disclosure describes a method that includes identifying, prior to completing processing of a first work unit with a processing unit of a processor having multiple processing units, a second work unit that is expected to be processed by the processing unit after the first work unit. The method also includes processing the first work unit, and prefetching, from non-coherent memory, data associated with the second work unit into a second cache segment of the buffer cache, wherein prefetching the data associated with the second work unit occurs concurrently with at least a portion of the processing of the first work unit by the processing unit.
    Type: Grant
    Filed: June 28, 2021
    Date of Patent: August 22, 2023
    Assignee: Fungible, Inc.
    Inventors: Wael Noureddine, Jean-Marc Frailong, Felix A. Marti, Charles Edward Gray, Paul Kim
  • Publication number: 20230205702
    Abstract: Techniques are described in which a system having multiple processing units processes a series of work units in a processing pipeline, where some or all of the work units access or manipulate data stored in non-coherent memory. In one example, this disclosure describes a method that includes identifying, prior to completing processing of a first work unit with a processing unit of a processor having multiple processing units, a second work unit that is expected to be processed by the processing unit after the first work unit. The method also includes processing the first work unit, and prefetching, from non-coherent memory, data associated with the second work unit into a second cache segment of the buffer cache, wherein prefetching the data associated with the second work unit occurs concurrently with at least a portion of the processing of the first work unit by the processing unit.
    Type: Application
    Filed: February 27, 2023
    Publication date: June 29, 2023
    Inventors: Wael Noureddine, Jean-Marc Frailong, Felix A. Marti, Charles Edward Gray, Paul Kim
  • Publication number: 20210349824
    Abstract: Techniques are described in which a system having multiple processing units processes a series of work units in a processing pipeline, where some or all of the work units access or manipulate data stored in non-coherent memory. In one example, this disclosure describes a method that includes identifying, prior to completing processing of a first work unit with a processing unit of a processor having multiple processing units, a second work unit that is expected to be processed by the processing unit after the first work unit. The method also includes processing the first work unit, and prefetching, from non-coherent memory, data associated with the second work unit into a second cache segment of the buffer cache, wherein prefetching the data associated with the second work unit occurs concurrently with at least a portion of the processing of the first work unit by the processing unit.
    Type: Application
    Filed: June 28, 2021
    Publication date: November 11, 2021
    Inventors: Wael Noureddine, Jean-Marc Frailong, Felix A. Marti, Charles Edward Gray, Paul Kim
  • Patent number: 11048634
    Abstract: Techniques are described in which a system having multiple processing units processes a series of work units in a processing pipeline, where some or all of the work units access or manipulate data stored in non-coherent memory. In one example, this disclosure describes a method that includes identifying, prior to completing processing of a first work unit with a processing unit of a processor having multiple processing units, a second work unit that is expected to be processed by the processing unit after the first work unit. The method also includes processing the first work unit, and prefetching, from non-coherent memory, data associated with the second work unit into a second cache segment of the buffer cache, wherein prefetching the data associated with the second work unit occurs concurrently with at least a portion of the processing of the first work unit by the processing unit.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: June 29, 2021
    Assignee: Fungible, Inc.
    Inventors: Wael Noureddine, Jean-Marc Frailong, Felix A. Marti, Charles Edward Gray, Paul Kim
  • Patent number: 10841245
    Abstract: Techniques are described in which a device, such as a network device, compute node, or storage device, is configured to utilize a work unit (WU) stack data structure in a multiple core processor system to help manage an event-driven, run-to-completion programming model of an operating system executed by the multiple core processor system. The techniques may be particularly useful when processing streams of data at high rates. The WU stack may be viewed as a stack of continuation work units used to supplement a typical program stack as an efficient means of moving the program stack between cores. The work unit data structure itself is a building block in the WU stack used to compose a processing pipeline and service execution. The WU stack structure carries state, memory, and other information in auxiliary variables.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: November 17, 2020
    Assignee: Fungible, Inc.
    Inventors: Charles Edward Gray, Bertrand Serlet, Felix A. Marti, Wael Noureddine, Pratapa Reddy Vaka
  • Publication number: 20200151101
    Abstract: Techniques are described in which a system having multiple processing units processes a series of work units in a processing pipeline, where some or all of the work units access or manipulate data stored in non-coherent memory. In one example, this disclosure describes a method that includes identifying, prior to completing processing of a first work unit with a processing unit of a processor having multiple processing units, a second work unit that is expected to be processed by the processing unit after the first work unit. The method also includes processing the first work unit, and prefetching, from non-coherent memory, data associated with the second work unit into a second cache segment of the buffer cache, wherein prefetching the data associated with the second work unit occurs concurrently with at least a portion of the processing of the first work unit by the processing unit.
    Type: Application
    Filed: January 17, 2020
    Publication date: May 14, 2020
    Inventors: Wael Noureddine, Jean-Marc Frailong, Felix A. Marti, Charles Edward Gray, Paul Kim
  • Patent number: 10540288
    Abstract: Techniques are described in which a system having multiple processing units processes a series of work units in a processing pipeline, where some or all of the work units access or manipulate data stored in non-coherent memory. In one example, this disclosure describes a method that includes identifying, prior to completing processing of a first work unit with a processing unit of a processor having multiple processing units, a second work unit that is expected to be processed by the processing unit after the first work unit. The method also includes processing the first work unit, and prefetching, from non-coherent memory, data associated with the second work unit into a second cache segment of the buffer cache, wherein prefetching the data associated with the second work unit occurs concurrently with at least a portion of the processing of the first work unit by the processing unit.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: January 21, 2020
    Assignee: Fungible, Inc.
    Inventors: Wael Noureddine, Jean-Marc Frailong, Felix A. Marti, Charles Edward Gray, Paul Kim
  • Publication number: 20190243765
    Abstract: Techniques are described in which a system having multiple processing units processes a series of work units in a processing pipeline, where some or all of the work units access or manipulate data stored in non-coherent memory. In one example, this disclosure describes a method that includes identifying, prior to completing processing of a first work unit with a processing unit of a processor having multiple processing units, a second work unit that is expected to be processed by the processing unit after the first work unit. The method also includes processing the first work unit, and prefetching, from non-coherent memory, data associated with the second work unit into a second cache segment of the buffer cache, wherein prefetching the data associated with the second work unit occurs concurrently with at least a portion of the processing of the first work unit by the processing unit.
    Type: Application
    Filed: April 10, 2018
    Publication date: August 8, 2019
    Inventors: Wael Noureddine, Jean-Marc Frailong, Felix A. Marti, Charles Edward Gray, Paul Kim
  • Publication number: 20190158428
    Abstract: Techniques are described in which a device, such as a network device, compute node, or storage device, is configured to utilize a work unit (WU) stack data structure in a multiple core processor system to help manage an event-driven, run-to-completion programming model of an operating system executed by the multiple core processor system. The techniques may be particularly useful when processing streams of data at high rates. The WU stack may be viewed as a stack of continuation work units used to supplement a typical program stack as an efficient means of moving the program stack between cores. The work unit data structure itself is a building block in the WU stack used to compose a processing pipeline and service execution. The WU stack structure carries state, memory, and other information in auxiliary variables.
    Type: Application
    Filed: November 20, 2018
    Publication date: May 23, 2019
    Inventors: Charles Edward Gray, Bertrand Serlet, Felix A. Marti, Wael Noureddine, Pratapa Reddy Vaka
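
The abstract shared by patents 11829295, 11734179, 11048634, and 10540288 (and the related applications) describes identifying the work unit expected next and prefetching its data while the current work unit is still being processed. The sketch below is a loose, hypothetical illustration of that idea on a commodity CPU using GCC/Clang's __builtin_prefetch; the WorkUnit type, the checksum stand-in for real processing, and run_pipeline are invented for the example and do not reflect the patented processor's buffer-cache segments or non-coherent memory system.

```cpp
// Hypothetical sketch only: overlap a software prefetch of the next work
// unit's data with processing of the current one. It approximates the idea on
// an ordinary CPU; it is not the patented buffer-cache/non-coherent-memory design.
#include <cstddef>
#include <cstdint>
#include <vector>

struct WorkUnit {
    const std::uint8_t* data;  // payload, conceptually held in non-coherent memory
    std::size_t         len;
};

// Stand-in for real packet or storage processing: sum the payload bytes.
static std::uint64_t process(const WorkUnit& wu) {
    std::uint64_t sum = 0;
    for (std::size_t i = 0; i < wu.len; ++i) sum += wu.data[i];
    return sum;
}

std::uint64_t run_pipeline(const std::vector<WorkUnit>& queue) {
    std::uint64_t total = 0;
    for (std::size_t i = 0; i < queue.size(); ++i) {
        // Identify the work unit expected to be processed next and issue a
        // prefetch for its data, so the memory access overlaps with the
        // processing of the current work unit.
        if (i + 1 < queue.size()) {
            __builtin_prefetch(queue[i + 1].data, /*rw=*/0, /*locality=*/1);
        }
        total += process(queue[i]);
    }
    return total;
}
```

On the patented processor the prefetched data lands in a dedicated segment of a buffer cache rather than the general cache hierarchy; the builtin above is only the closest commodity analogue.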
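Patent 10841245 and application 20190158428 describe the work-unit (WU) stack: a stack of continuation work units that supplements the ordinary program stack so a run-to-completion pipeline can be composed and resumed across cores. The sketch below is a hypothetical, single-threaded illustration of that shape; the Wu and WuStack types and the parse/lookup/forward stage names are invented for the example and are not taken from the patent.

```cpp
// Hypothetical sketch only: a stack of continuation work units driving a
// run-to-completion dispatch loop. Types and stage names are illustrative.
#include <cstdio>
#include <vector>

struct Wu;
struct WuStack;
using WuHandler = void (*)(Wu& wu, WuStack& stack);

// A work unit: the handler to run plus auxiliary state carried between stages.
struct Wu {
    WuHandler handler;
    void*     frame;
};

// The WU stack: continuations pushed here supplement the ordinary program
// stack, so the remainder of a pipeline can be picked up by whichever core
// pops the next work unit.
struct WuStack {
    std::vector<Wu> wus;
    void push(const Wu& wu) { wus.push_back(wu); }
    bool pop(Wu& out) {
        if (wus.empty()) return false;
        out = wus.back();
        wus.pop_back();
        return true;
    }
};

// Illustrative pipeline stages; each runs to completion without blocking.
static void forward_stage(Wu&, WuStack&) { std::puts("forward"); }
static void lookup_stage(Wu&, WuStack&)  { std::puts("lookup"); }
static void parse_stage(Wu&, WuStack&)   { std::puts("parse"); }

// Event-driven dispatcher: pop the next continuation and run it to completion.
static void dispatch(WuStack& stack) {
    Wu wu;
    while (stack.pop(wu)) {
        wu.handler(wu, stack);
    }
}

int main() {
    WuStack stack;
    // Push continuations in reverse so they execute parse -> lookup -> forward.
    stack.push({forward_stage, nullptr});
    stack.push({lookup_stage, nullptr});
    stack.push({parse_stage, nullptr});
    dispatch(stack);
    return 0;
}
```

In the patented design the work units also carry the state needed to resume on another core and combine with the prefetching technique above; this sketch keeps only the stack-of-continuations shape.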