Patents by Inventor Ian Buck
Ian Buck has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12257763
Abstract: An apparatus for manufacturing an annular or semi-annular composite component having a circumferentially-extending base and a flange, the apparatus including: a loom for weaving a woven preform of fibre reinforcement material for the composite component; a rotatable mandrel configured to receive and draw the woven preform through the loom for weaving; and a guide disposed between the loom and the rotatable mandrel, configured so that a woven preform drawn by the rotatable mandrel along a guide path under tension engages the guide to transition to a flanged profile at a lip of the guide before being received on the mandrel.
Type: Grant
Filed: June 28, 2023
Date of Patent: March 25, 2025
Assignee: Rolls-Royce plc
Inventors: Christopher D. Jones, Robert C. Backhouse, Ian Buck, Sarvesh Dhiman
-
Publication number: 20240117533
Abstract: A woven structure formed by warp and weft tows A, E of fiber reinforcement material includes: a plurality of multi-warp stacks SA, each including a plurality of warp tows A which are in superposition along a longitudinal extent of the warp stack SA; and a plurality of multi-weft stacks SE, each including a plurality of weft tows E which are in superposition along the stack SE; wherein one or more multi-warp stacks SA and/or one or more multi-weft stacks SE has an embedded taper structure, including an embedded tow A1, A2, E1, E2 which has a terminal portion M1, M2 disposed between two locally outermost tows A0, E0 of the respective stack SA, SE, the terminal portion M1, M2 terminating at a taper position D1, D2 along the respective path of the stack SA, SE. Also disclosed herein is a method for manufacturing a composite component.
Type: Application
Filed: September 11, 2023
Publication date: April 11, 2024
Applicant: Rolls-Royce plc
Inventors: Christopher D. Jones, Adam J. Bishop, Sarvesh Dhiman, Ian Buck
-
Publication number: 20240035210
Abstract: A method for manufacturing a composite component, including: weaving a multi-layer woven preform from warp and weft tows of fibre-reinforcement material, the woven preform including: a multi-layer weave including: a plurality of weft tow layers; a plurality of laterally-adjacent stacks extending along the longitudinal direction; and a primary portion having a longitudinal extent along the woven preform, the primary portion having one or more edge regions each defining a respective lateral side of the primary portion; wherein for the or each edge region: the multi-layer weave defines at least the edge region; a plurality of stacks in the edge region are binding stacks in which one or more warp tows are interlaced to bind a respective plurality of weft tow layers; and a weave property differs between binding stacks in the edge region to reduce a thickness of the edge region towards the respective lateral side. Also disclosed herein is a woven structure formed by warp and weft tows of fibre reinforcement material.
Type: Application
Filed: July 12, 2023
Publication date: February 1, 2024
Applicant: Rolls-Royce plc
Inventors: Christopher D. Jones, Adam J. Bishop, Richard Hall, Ian Buck, Sarvesh Dhiman
-
Publication number: 20240025131
Abstract: A woven composite component for an aerospace structure or machine, comprising: a compound member extending from a junction with a feeder member, and a noodle element extending in a weft direction through the junction. The feeder member comprises first and second feeder portions on either side of the junction, each feeder portion comprising warp tows extending towards the junction. There is a compound set of warp tows extending from the first and second feeder portions, each turning at the junction to define warp tows for the compound member. There is a crossing set of warp tows belonging to the compound set, the crossing set comprising warp tows from the first feeder portion and warp tows from the second feeder portion which cross each other at the junction to pass around the noodle element. There is also disclosed a method of manufacturing a preform for a woven composite component.
Type: Application
Filed: July 12, 2023
Publication date: January 25, 2024
Applicant: Rolls-Royce plc
Inventors: Christopher D. Jones, Robert C. Backhouse, Adam J. Bishop, Ian Buck, Sarvesh Dhiman
-
Publication number: 20240017475
Abstract: An apparatus for manufacturing an annular or semi-annular composite component having a circumferentially-extending base and a flange, the apparatus including: a loom for weaving a woven preform of fibre reinforcement material for the composite component; a rotatable mandrel configured to receive and draw the woven preform through the loom for weaving; and a guide disposed between the loom and the rotatable mandrel, configured so that a woven preform drawn by the rotatable mandrel along a guide path under tension engages the guide to transition to a flanged profile at a lip of the guide before being received on the mandrel.
Type: Application
Filed: June 28, 2023
Publication date: January 18, 2024
Applicant: Rolls-Royce plc
Inventors: Christopher D. Jones, Robert C. Backhouse, Ian Buck, Sarvesh Dhiman
-
Publication number: 20240018702
Abstract: A manufacturing method includes: in a loom, weaving a woven reinforcing fibre fabric including a plurality of reinforcing fibre tows and polymeric material; and heating the woven reinforcing fibre fabric as it exits the loom to cause the polymeric material to melt and/or cure.
Type: Application
Filed: June 28, 2023
Publication date: January 18, 2024
Applicant: Rolls-Royce plc
Inventors: Christopher D. Jones, Ian Buck, Sarvesh Dhiman
-
Publication number: 20240017503
Abstract: A manufacturing method comprises: providing a woven fabric comprising a plurality of reinforcing fibre tows and a plurality of thermoplastic polymer yarns woven together; and moulding the woven fabric in a heated mould to form a preform for a composite component.
Type: Application
Filed: June 28, 2023
Publication date: January 18, 2024
Applicant: Rolls-Royce plc
Inventors: Christopher D. Jones, Ian Buck, Sarvesh Dhiman
-
Patent number: 9542192
Abstract: A method for executing an application program using streams. A device driver receives a first command within an application program and parses the first command to identify a first stream token that is associated with a first stream. The device driver checks a memory location associated with the first stream for a first semaphore, and determines whether the first semaphore has been released. Once the first semaphore has been released, a second command within the application program is executed. Advantageously, embodiments of the invention provide a technique for developers to take advantage of the parallel execution capabilities of a GPU.
Type: Grant
Filed: August 15, 2008
Date of Patent: January 10, 2017
Assignee: NVIDIA Corporation
Inventors: Nicholas Patrick Wilt, Ian Buck, Philip Cuadra
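The semaphore-gated command ordering described in this abstract is conceptually close to the stream-synchronization primitives the public CUDA runtime exposes today. The minimal sketch below uses a CUDA event as the "semaphore": work queued in a second stream is held back until the first stream records the event. It is an illustration of the idea using documented runtime calls (cudaEventRecord, cudaStreamWaitEvent), not the driver-internal mechanism the patent claims, and the kernel and variable names are mine.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

__global__ void produce(int *data) { data[0] = 42; }                          // "first command"
__global__ void consume(const int *data, int *out) { out[0] = data[0] + 1; }  // "second command"

int main() {
    int *d_data = nullptr, *d_out = nullptr;
    cudaMalloc(&d_data, sizeof(int));
    cudaMalloc(&d_out, sizeof(int));

    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    cudaEvent_t released;                        // stands in for the per-stream semaphore
    cudaEventCreateWithFlags(&released, cudaEventDisableTiming);

    produce<<<1, 1, 0, s0>>>(d_data);            // enqueue the first command on stream s0
    cudaEventRecord(released, s0);               // "release" the semaphore once s0 reaches here
    cudaStreamWaitEvent(s1, released, 0);        // s1 holds the second command until then
    consume<<<1, 1, 0, s1>>>(d_data, d_out);     // second command runs only after the release

    cudaDeviceSynchronize();
    int out = 0;
    cudaMemcpy(&out, d_out, sizeof(int), cudaMemcpyDeviceToHost);
    printf("out = %d\n", out);                   // prints 43

    cudaEventDestroy(released);
    cudaStreamDestroy(s0);
    cudaStreamDestroy(s1);
    cudaFree(d_data);
    cudaFree(d_out);
    return 0;
}
```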
-
Patent number: 8656394
Abstract: A method for executing an application program using streams. A device driver receives a first command within an application program and parses the first command to identify a first stream token that is associated with a first stream. The device driver checks a memory location associated with the first stream for a first semaphore, and determines whether the first semaphore has been released. Once the first semaphore has been released, a second command within the application program is executed. Advantageously, embodiments of the invention provide a technique for developers to take advantage of the parallel execution capabilities of a GPU.
Type: Grant
Filed: August 15, 2008
Date of Patent: February 18, 2014
Assignee: NVIDIA Corporation
Inventors: Nicholas Patrick Wilt, Ian Buck, Philip Cuadra
-
Publication number: 20120066668
Abstract: A general-purpose programming environment allows users to program a GPU as a general-purpose computation engine using familiar C/C++ programming constructs. Users may use declaration specifiers to identify which portions of a program are to be compiled for a CPU or a GPU. Specifically, functions, objects and variables may be specified for GPU binary compilation using declaration specifiers. A compiler separates the GPU binary code and the CPU binary code in a source file using the declaration specifiers. The location of objects and variables in different memory locations in the system may be identified using the declaration specifiers. CTA threading information is also provided for the GPU to support parallel processing.
Type: Application
Filed: July 11, 2011
Publication date: March 15, 2012
Applicant: NVIDIA Corporation
Inventors: Ian Buck, Bastiaan Aarts
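For illustration, the sketch below uses the publicly documented CUDA declaration specifiers (__global__, __device__, __constant__) of the kind this abstract describes: they let one source file mix GPU and CPU code, tell the compiler which parts to compile for which processor, and place variables in particular memories, with the launch configuration carrying the CTA threading information. It is an example of the general programming model, not a statement of what the claims cover.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

__constant__ float scale = 2.0f;             // variable placed in GPU constant memory

__device__ float square(float x) {           // function compiled for the GPU only
    return x * x;
}

__global__ void kernel(const float *in, float *out, int n) {  // GPU entry point
    int i = blockIdx.x * blockDim.x + threadIdx.x;             // CTA threading information
    if (i < n) out[i] = scale * square(in[i]);
}

int main() {                                  // host code, compiled for the CPU
    const int n = 256;
    float h_in[n], h_out[n];
    for (int i = 0; i < n; ++i) h_in[i] = (float)i;

    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

    kernel<<<(n + 127) / 128, 128>>>(d_in, d_out, n);   // grid/block (CTA) configuration

    cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("out[3] = %f\n", h_out[3]);        // 2 * 3^2 = 18

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```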
-
Publication number: 20080109795
Abstract: A general-purpose programming environment allows users to program a GPU as a general-purpose computation engine using familiar C/C++ programming constructs. Users may use declaration specifiers to identify which portions of a program are to be compiled for a CPU or a GPU. Specifically, functions, objects and variables may be specified for GPU binary compilation using declaration specifiers. A compiler separates the GPU binary code and the CPU binary code in a source file using the declaration specifiers. The location of objects and variables in different memory locations in the system may be identified using the declaration specifiers. CTA threading information is also provided for the GPU to support parallel processing.
Type: Application
Filed: November 2, 2006
Publication date: May 8, 2008
Applicant: NVIDIA Corporation
Inventors: Ian Buck, Bastiaan Aarts
-
Publication number: 20070211064
Abstract: A system and method for processing machine learning techniques (such as neural networks) and other non-graphics applications using a graphics processing unit (GPU) to accelerate and optimize the processing. The system and method transfers an architecture that can be used for a wide variety of machine learning techniques from the CPU to the GPU. The transfer of processing to the GPU is accomplished using several novel techniques that overcome the GPU's limitations and work well within the framework of the GPU architecture. With these limitations overcome, machine learning techniques are particularly well suited for processing on the GPU because the GPU is typically much more powerful than the typical CPU. Moreover, similar to graphics processing, the processing of machine learning techniques involves problems with non-trivial solutions and large amounts of data.
Type: Application
Filed: May 14, 2007
Publication date: September 13, 2007
Applicant: Microsoft Corporation
Inventors: Ian Buck, Patrice Simard, David Steinkraus
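As a rough illustration of the kind of non-graphics workload this abstract is concerned with, the sketch below evaluates a single fully connected neural-network layer on the GPU, one thread per output neuron. It is written in modern CUDA purely for readability; the patent itself predates CUDA and targets the graphics pipeline of its era, so this is an analogue of the workload rather than the patented technique, and the function and parameter names are mine.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// One thread per output neuron: y[j] = sigmoid(sum_i W[j][i] * x[i] + b[j]).
__global__ void dense_layer(const float *W, const float *x, const float *b,
                            float *y, int in_dim, int out_dim) {
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (j >= out_dim) return;
    float acc = b[j];
    for (int i = 0; i < in_dim; ++i)
        acc += W[j * in_dim + i] * x[i];
    y[j] = 1.0f / (1.0f + expf(-acc));         // sigmoid activation
}

int main() {
    const int in_dim = 128, out_dim = 64;
    float *h_W = new float[out_dim * in_dim];
    float *h_x = new float[in_dim];
    float *h_b = new float[out_dim];
    float *h_y = new float[out_dim];
    for (int i = 0; i < out_dim * in_dim; ++i) h_W[i] = 0.01f;
    for (int i = 0; i < in_dim; ++i) h_x[i] = 1.0f;
    for (int j = 0; j < out_dim; ++j) h_b[j] = 0.0f;

    float *d_W, *d_x, *d_b, *d_y;
    cudaMalloc(&d_W, out_dim * in_dim * sizeof(float));
    cudaMalloc(&d_x, in_dim * sizeof(float));
    cudaMalloc(&d_b, out_dim * sizeof(float));
    cudaMalloc(&d_y, out_dim * sizeof(float));
    cudaMemcpy(d_W, h_W, out_dim * in_dim * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_x, h_x, in_dim * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, out_dim * sizeof(float), cudaMemcpyHostToDevice);

    dense_layer<<<(out_dim + 63) / 64, 64>>>(d_W, d_x, d_b, d_y, in_dim, out_dim);

    cudaMemcpy(h_y, d_y, out_dim * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", h_y[0]);             // sigmoid(0.01 * 128) ~= 0.78

    cudaFree(d_W); cudaFree(d_x); cudaFree(d_b); cudaFree(d_y);
    delete[] h_W; delete[] h_x; delete[] h_b; delete[] h_y;
    return 0;
}
```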
-
Publication number: 20050197977
Abstract: A system and method for optimizing the performance of a graphics processing unit (GPU) for processing and execution of general matrix operations such that the operations are accelerated and optimized. The system and method describes the layouts of operands and results in graphics memory, as well as partitioning the processes into a sequence of passes through a macro step. Specifically, operands are placed in memory in a pattern, results are written into memory in a pattern appropriate for use as operands in a later pass, data sets are partitioned to ensure that each pass fits into fixed-size memory, and the execution model incorporates generally reusable macro steps for use in multiple passes. These features enable greater efficiency and speed in processing and executing general matrix operations.
Type: Application
Filed: June 25, 2004
Publication date: September 8, 2005
Applicant: Microsoft Corporation
Inventors: Ian Buck, David Steinkraus, Richard Szeliski
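The core idea in this abstract, partitioning a large matrix operation into passes that each fit a fixed-size memory, has a familiar modern analogue: a tiled matrix multiply, where each pass stages one fixed-size tile in on-chip shared memory. The sketch below shows that analogue in modern CUDA; it is not the patented graphics-memory layout or macro-step execution model, and it assumes the matrix dimension is a multiple of the tile size for brevity.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

#define TILE 16  // each pass stages a TILE x TILE block in fixed-size shared memory

// Assumes n is a multiple of TILE.
__global__ void matmul(const float *A, const float *B, float *C, int n) {
    __shared__ float tA[TILE][TILE];
    __shared__ float tB[TILE][TILE];
    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;
    // Sequence of passes: each iteration stages one tile of A and one of B,
    // accumulates its contribution, then moves on to the next tile.
    for (int t = 0; t < n / TILE; ++t) {
        tA[threadIdx.y][threadIdx.x] = A[row * n + t * TILE + threadIdx.x];
        tB[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * n + col];
        __syncthreads();
        for (int k = 0; k < TILE; ++k)
            acc += tA[threadIdx.y][k] * tB[k][threadIdx.x];
        __syncthreads();
    }
    C[row * n + col] = acc;
}

int main() {
    const int n = 64;
    const size_t bytes = n * n * sizeof(float);
    float *h = new float[n * n];
    for (int i = 0; i < n * n; ++i) h[i] = 1.0f;   // A = B = all-ones

    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, h, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, h, bytes, cudaMemcpyHostToDevice);

    dim3 block(TILE, TILE), grid(n / TILE, n / TILE);
    matmul<<<grid, block>>>(dA, dB, dC, n);

    cudaMemcpy(h, dC, bytes, cudaMemcpyDeviceToHost);
    printf("C[0] = %f (expected %d)\n", h[0], n);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    delete[] h;
    return 0;
}
```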
-
Publication number: 20050125369
Abstract: A system and method for processing machine learning techniques (such as neural networks) and other non-graphics applications using a graphics processing unit (GPU) to accelerate and optimize the processing. The system and method transfers an architecture that can be used for a wide variety of machine learning techniques from the CPU to the GPU. The transfer of processing to the GPU is accomplished using several novel techniques that overcome the GPU's limitations and work well within the framework of the GPU architecture. With these limitations overcome, machine learning techniques are particularly well suited for processing on the GPU because the GPU is typically much more powerful than the typical CPU. Moreover, similar to graphics processing, the processing of machine learning techniques involves problems with non-trivial solutions and large amounts of data.
Type: Application
Filed: April 30, 2004
Publication date: June 9, 2005
Applicant: Microsoft Corporation
Inventors: Ian Buck, Patrice Simard, David Steinkraus