Patents by Inventor Charles Edward Gray
Charles Edward Gray has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11829295
Abstract: Techniques are described in which a system having multiple processing units processes a series of work units in a processing pipeline, where some or all of the work units access or manipulate data stored in non-coherent memory. In one example, this disclosure describes a method that includes identifying, prior to completing processing of a first work unit with a processing unit of a processor having multiple processing units, a second work unit that is expected to be processed by the processing unit after the first work unit. The method also includes processing the first work unit, and prefetching, from non-coherent memory, data associated with the second work unit into a second cache segment of the buffer cache, wherein prefetching the data associated with the second work unit occurs concurrently with at least a portion of the processing of the first work unit by the processing unit.
Type: Grant
Filed: February 27, 2023
Date of Patent: November 28, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Wael Noureddine, Jean-Marc Frailong, Felix A. Marti, Charles Edward Gray, Paul Kim
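The technique this patent family describes (identifying the next expected work unit and pulling its data into a separate buffer-cache segment while the current work unit is still being processed) can be illustrated with a short sketch. The C example below is a minimal, hypothetical illustration, not code from the patent: the toy work-unit queue, the buffer_cache_prefetch helper, and the use of __builtin_prefetch as a stand-in for a hardware prefetch into non-coherent buffer memory are all assumptions made for readability.

```c
/* Hypothetical sketch of the prefetch pattern described in the abstract:
 * before the current work unit finishes, peek at the next expected work
 * unit and start loading its data into a second buffer-cache segment. */
#include <stddef.h>
#include <stdio.h>

typedef struct work_unit {
    void (*handler)(struct work_unit *wu);  /* routine that processes this WU  */
    const char *data;                       /* payload in (non-coherent) memory */
    size_t      data_len;
} work_unit;

/* Toy FIFO of pending work units for one processing unit. */
static work_unit *queue[16];
static int q_head, q_tail;

static void       wu_enqueue(work_unit *wu) { queue[q_tail++ % 16] = wu; }
static work_unit *wu_peek(void) { return q_head == q_tail ? NULL : queue[q_head % 16]; }
static work_unit *wu_pop(void)  { return q_head == q_tail ? NULL : queue[q_head++ % 16]; }

/* Stand-in for loading a region into a second buffer-cache segment; a real
 * device would use a dedicated prefetch engine rather than CPU hints. */
static void buffer_cache_prefetch(const char *addr, size_t len)
{
    for (size_t off = 0; off < len; off += 64)      /* 64-byte cache lines */
        __builtin_prefetch(addr + off, 0, 1);
}

static void handle(work_unit *wu)
{
    printf("processing %zu bytes: %.8s...\n", wu->data_len, wu->data);
}

int main(void)
{
    static const char payload_a[] = "packet-A payload";
    static const char payload_b[] = "packet-B payload";
    work_unit a = { handle, payload_a, sizeof payload_a };
    work_unit b = { handle, payload_b, sizeof payload_b };
    wu_enqueue(&a);
    wu_enqueue(&b);

    for (work_unit *cur = wu_pop(); cur != NULL; cur = wu_pop()) {
        work_unit *next = wu_peek();                /* identify the expected next WU */
        if (next)                                   /* start its prefetch early      */
            buffer_cache_prefetch(next->data, next->data_len);
        cur->handler(cur);                          /* prefetch overlaps this work   */
    }
    return 0;
}
```

The point of the overlap is that by the time the processing unit dequeues the second work unit, its data is already resident in the buffer cache, hiding the latency of the non-coherent memory access.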
-
Patent number: 11734179
Abstract: Techniques are described in which a system having multiple processing units processes a series of work units in a processing pipeline, where some or all of the work units access or manipulate data stored in non-coherent memory. In one example, this disclosure describes a method that includes identifying, prior to completing processing of a first work unit with a processing unit of a processor having multiple processing units, a second work unit that is expected to be processed by the processing unit after the first work unit. The method also includes processing the first work unit, and prefetching, from non-coherent memory, data associated with the second work unit into a second cache segment of the buffer cache, wherein prefetching the data associated with the second work unit occurs concurrently with at least a portion of the processing of the first work unit by the processing unit.
Type: Grant
Filed: June 28, 2021
Date of Patent: August 22, 2023
Assignee: Fungible, Inc.
Inventors: Wael Noureddine, Jean-Marc Frailong, Felix A. Marti, Charles Edward Gray, Paul Kim
-
Publication number: 20230205702
Abstract: Techniques are described in which a system having multiple processing units processes a series of work units in a processing pipeline, where some or all of the work units access or manipulate data stored in non-coherent memory. In one example, this disclosure describes a method that includes identifying, prior to completing processing of a first work unit with a processing unit of a processor having multiple processing units, a second work unit that is expected to be processed by the processing unit after the first work unit. The method also includes processing the first work unit, and prefetching, from non-coherent memory, data associated with the second work unit into a second cache segment of the buffer cache, wherein prefetching the data associated with the second work unit occurs concurrently with at least a portion of the processing of the first work unit by the processing unit.
Type: Application
Filed: February 27, 2023
Publication date: June 29, 2023
Inventors: Wael Noureddine, Jean-Marc Frailong, Felix A. Marti, Charles Edward Gray, Paul Kim
-
Publication number: 20210349824
Abstract: Techniques are described in which a system having multiple processing units processes a series of work units in a processing pipeline, where some or all of the work units access or manipulate data stored in non-coherent memory. In one example, this disclosure describes a method that includes identifying, prior to completing processing of a first work unit with a processing unit of a processor having multiple processing units, a second work unit that is expected to be processed by the processing unit after the first work unit. The method also includes processing the first work unit, and prefetching, from non-coherent memory, data associated with the second work unit into a second cache segment of the buffer cache, wherein prefetching the data associated with the second work unit occurs concurrently with at least a portion of the processing of the first work unit by the processing unit.
Type: Application
Filed: June 28, 2021
Publication date: November 11, 2021
Inventors: Wael Noureddine, Jean-Marc Frailong, Felix A. Marti, Charles Edward Gray, Paul Kim
-
Patent number: 11048634
Abstract: Techniques are described in which a system having multiple processing units processes a series of work units in a processing pipeline, where some or all of the work units access or manipulate data stored in non-coherent memory. In one example, this disclosure describes a method that includes identifying, prior to completing processing of a first work unit with a processing unit of a processor having multiple processing units, a second work unit that is expected to be processed by the processing unit after the first work unit. The method also includes processing the first work unit, and prefetching, from non-coherent memory, data associated with the second work unit into a second cache segment of the buffer cache, wherein prefetching the data associated with the second work unit occurs concurrently with at least a portion of the processing of the first work unit by the processing unit.
Type: Grant
Filed: January 17, 2020
Date of Patent: June 29, 2021
Assignee: Fungible, Inc.
Inventors: Wael Noureddine, Jean-Marc Frailong, Felix A. Marti, Charles Edward Gray, Paul Kim
-
Patent number: 10841245
Abstract: Techniques are described in which a device, such as a network device, compute node or storage device, is configured to utilize a work unit (WU) stack data structure in a multiple core processor system to help manage an event driven, run-to-completion programming model of an operating system executed by the multiple core processor system. The techniques may be particularly useful when processing streams of data at high rates. The WU stack may be viewed as a stack of continuation work units used to supplement a typical program stack as an efficient means of moving the program stack between cores. The work unit data structure itself is a building block in the WU stack to compose a processing pipeline and services execution. The WU stack structure carries state, memory, and other information in auxiliary variables.
Type: Grant
Filed: November 20, 2018
Date of Patent: November 17, 2020
Assignee: Fungible, Inc.
Inventors: Charles Edward Gray, Bertrand Serlet, Felix A. Marti, Wael Noureddine, Pratapa Reddy Vaka
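The work-unit (WU) stack described in this abstract, a stack of continuation work units that carries handler, destination core, and auxiliary state so a pipeline can be composed and then unwound across cores, can be sketched roughly as follows. This is a hypothetical C illustration only; the frame layout, field names, and the direct-call stand-in for cross-core dispatch are assumptions for the example, not the patented implementation.

```c
/* Hypothetical sketch of a WU stack: each frame is a continuation work unit
 * naming the handler to run next, the core it should run on, and a few
 * auxiliary variables. Pushing frames composes a pipeline; popping them as
 * each stage completes unwinds it. */
#include <stdint.h>
#include <stdio.h>

typedef struct wu_frame {
    void   (*handler)(struct wu_frame *frame);  /* continuation to invoke      */
    uint32_t  dest_core;                        /* core that should run it     */
    uintptr_t aux[4];                           /* auxiliary state / pointers  */
} wu_frame;

typedef struct wu_stack {
    wu_frame frames[8];
    int      top;                               /* index of the next free slot */
} wu_stack;

static wu_stack stack;

/* Push a continuation frame: "when the current stage finishes, run handler
 * on dest_core". */
static wu_frame *wu_push(wu_stack *s, void (*handler)(wu_frame *), uint32_t core)
{
    wu_frame *f  = &s->frames[s->top++];
    f->handler   = handler;
    f->dest_core = core;
    return f;
}

/* Pop the top continuation and run it; a real system would enqueue the work
 * unit to the destination core instead of calling it directly. */
static void wu_continue(wu_stack *s)
{
    if (s->top > 0) {
        wu_frame *f = &s->frames[--s->top];
        f->handler(f);
    }
}

static void stage2(wu_frame *f)
{
    printf("stage2 on core %u, aux[0]=%lu\n", (unsigned)f->dest_core,
           (unsigned long)f->aux[0]);
    wu_continue(&stack);                        /* unwind to the next frame    */
}

static void stage1(wu_frame *f)
{
    printf("stage1 on core %u\n", (unsigned)f->dest_core);
    wu_continue(&stack);
}

int main(void)
{
    /* Compose a two-stage pipeline by pushing continuations in reverse order. */
    wu_frame *f2 = wu_push(&stack, stage2, 3);
    f2->aux[0] = 42;                            /* auxiliary variable for stage2 */
    wu_push(&stack, stage1, 1);
    wu_continue(&stack);                        /* kick off the pipeline         */
    return 0;
}
```

In this shape, moving execution between cores amounts to handing off a small continuation frame rather than migrating a full program stack, which is the efficiency the abstract attributes to the WU stack.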
-
Publication number: 20200151101
Abstract: Techniques are described in which a system having multiple processing units processes a series of work units in a processing pipeline, where some or all of the work units access or manipulate data stored in non-coherent memory. In one example, this disclosure describes a method that includes identifying, prior to completing processing of a first work unit with a processing unit of a processor having multiple processing units, a second work unit that is expected to be processed by the processing unit after the first work unit. The method also includes processing the first work unit, and prefetching, from non-coherent memory, data associated with the second work unit into a second cache segment of the buffer cache, wherein prefetching the data associated with the second work unit occurs concurrently with at least a portion of the processing of the first work unit by the processing unit.
Type: Application
Filed: January 17, 2020
Publication date: May 14, 2020
Inventors: Wael Noureddine, Jean-Marc Frailong, Felix A. Marti, Charles Edward Gray, Paul Kim
-
Patent number: 10540288
Abstract: Techniques are described in which a system having multiple processing units processes a series of work units in a processing pipeline, where some or all of the work units access or manipulate data stored in non-coherent memory. In one example, this disclosure describes a method that includes identifying, prior to completing processing of a first work unit with a processing unit of a processor having multiple processing units, a second work unit that is expected to be processed by the processing unit after the first work unit. The method also includes processing the first work unit, and prefetching, from non-coherent memory, data associated with the second work unit into a second cache segment of the buffer cache, wherein prefetching the data associated with the second work unit occurs concurrently with at least a portion of the processing of the first work unit by the processing unit.
Type: Grant
Filed: April 10, 2018
Date of Patent: January 21, 2020
Assignee: Fungible, Inc.
Inventors: Wael Noureddine, Jean-Marc Frailong, Felix A. Marti, Charles Edward Gray, Paul Kim
-
Publication number: 20190243765
Abstract: Techniques are described in which a system having multiple processing units processes a series of work units in a processing pipeline, where some or all of the work units access or manipulate data stored in non-coherent memory. In one example, this disclosure describes a method that includes identifying, prior to completing processing of a first work unit with a processing unit of a processor having multiple processing units, a second work unit that is expected to be processed by the processing unit after the first work unit. The method also includes processing the first work unit, and prefetching, from non-coherent memory, data associated with the second work unit into a second cache segment of the buffer cache, wherein prefetching the data associated with the second work unit occurs concurrently with at least a portion of the processing of the first work unit by the processing unit.
Type: Application
Filed: April 10, 2018
Publication date: August 8, 2019
Inventors: Wael Noureddine, Jean-Marc Frailong, Felix A. Marti, Charles Edward Gray, Paul Kim
-
Publication number: 20190158428
Abstract: Techniques are described in which a device, such as a network device, compute node or storage device, is configured to utilize a work unit (WU) stack data structure in a multiple core processor system to help manage an event driven, run-to-completion programming model of an operating system executed by the multiple core processor system. The techniques may be particularly useful when processing streams of data at high rates. The WU stack may be viewed as a stack of continuation work units used to supplement a typical program stack as an efficient means of moving the program stack between cores. The work unit data structure itself is a building block in the WU stack to compose a processing pipeline and services execution. The WU stack structure carries state, memory, and other information in auxiliary variables.
Type: Application
Filed: November 20, 2018
Publication date: May 23, 2019
Inventors: Charles Edward Gray, Bertrand Serlet, Felix A. Marti, Wael Noureddine, Pratapa Reddy Vaka