Patents by Inventor Ian A. Buck
Ian A. Buck has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240117533
Abstract: A woven structure formed by warp and weft tows A, E of fiber reinforcement material includes: a plurality of multi-warp stacks SA, each including a plurality of warp tows A which are in superposition along a longitudinal extent of the warp stack SA; a plurality of multi-weft stacks SE, each including a plurality of weft tows E which are in superposition along the stack SE; wherein one or more multi-warp stacks SA and/or one or more multi-weft stacks SE has an embedded taper structure, including: an embedded tow A1, A2, E1, E2 which has a terminal portion M1, M2 disposed between two locally outermost tows A0, E0 of the respective stack SA, SE, the terminal portion M1, M2 terminating at a taper position D1, D2 along the respective path of the stack SA, SE. A method for manufacturing a composite component is also disclosed.
Type: Application
Filed: September 11, 2023
Publication date: April 11, 2024
Applicant: ROLLS-ROYCE PLC
Inventors: Christopher D. JONES, Adam J. BISHOP, Sarvesh DHIMAN, Ian BUCK
-
Publication number: 20240035210
Abstract: A method for manufacturing a composite component, including: weaving a multi-layer woven preform from warp and weft tows of fibre-reinforcement material, the woven preform including: a multi-layer weave including: a plurality of weft tow layers; a plurality of laterally-adjacent stacks extending along the longitudinal direction; a primary portion having a longitudinal extent along the woven preform, the primary portion having one or more edge regions each defining a respective lateral side of the primary portion; wherein for the or each edge region: the multi-layer weave defines at least the edge region; a plurality of stacks in the edge region are binding stacks in which one or more warp tows are interlaced to bind a respective plurality of weft tow layers; and a weave property differs between binding stacks in the edge region to reduce a thickness of the edge region towards the respective lateral side. Also disclosed herein is a woven structure formed by warp and weft tows of fibre reinforcement material.
Type: Application
Filed: July 12, 2023
Publication date: February 1, 2024
Applicant: ROLLS-ROYCE plc
Inventors: Christopher D. JONES, Adam J. BISHOP, Richard HALL, Ian BUCK, Sarvesh DHIMAN
-
Publication number: 20240025131
Abstract: A woven composite component for an aerospace structure or machine, comprising: a compound member extending from a junction with a feeder member, a noodle element extending in a weft direction through the junction. The feeder member comprises first and second feeder portions either side of the junction, each feeder portion comprising warp tows extending towards the junction. There is a compound set of warp tows extending from the first and second feeder portions, each turning at the junction to define warp tows for the compound member. There is a crossing set of warp tows belonging to the compound set, the crossing set comprising warp tows from the first feeder portion and warp tows from the second feeder portion which cross each other at the junction to pass around the noodle element. There is also disclosed a method of manufacturing a preform for a woven composite component.
Type: Application
Filed: July 12, 2023
Publication date: January 25, 2024
Applicant: ROLLS-ROYCE PLC
Inventors: Christopher D. JONES, Robert C. BACKHOUSE, Adam J. BISHOP, Ian BUCK, Sarvesh DHIMAN
-
Publication number: 20240017503
Abstract: A manufacturing method comprises: providing a woven fabric comprising a plurality of reinforcing fibre tows and a plurality of thermoplastic polymer yarns woven together; and moulding the woven fabric in a heated mould to form a preform for a composite component.
Type: Application
Filed: June 28, 2023
Publication date: January 18, 2024
Applicant: ROLLS-ROYCE PLC
Inventors: Christopher D. JONES, Ian BUCK, Sarvesh DHIMAN
-
Publication number: 20240017475
Abstract: An apparatus for manufacturing an annular or semi-annular composite component having a circumferentially-extending base and a flange, the apparatus including: a loom for weaving a woven preform of fibre reinforcement material for the composite component; a rotatable mandrel configured to receive and draw the woven preform through the loom for weaving; and a guide disposed between the loom and the rotatable mandrel, configured so that a woven preform drawn by the rotatable mandrel along a guide path under tension engages the guide to transition to a flanged profile at a lip of the guide before being received on the mandrel.
Type: Application
Filed: June 28, 2023
Publication date: January 18, 2024
Applicant: ROLLS-ROYCE PLC
Inventors: Christopher D. JONES, Robert C. BACKHOUSE, Ian BUCK, Sarvesh DHIMAN
-
Publication number: 20240018702
Abstract: A manufacturing method includes: in a loom, weaving a woven reinforcing fibre fabric including a plurality of reinforcing fibre tows and polymeric material; and heating the woven reinforcing fibre fabric as it exits the loom to cause the polymeric material to melt and/or cure.
Type: Application
Filed: June 28, 2023
Publication date: January 18, 2024
Applicant: ROLLS-ROYCE PLC
Inventors: Christopher D. JONES, Ian BUCK, Sarvesh DHIMAN
-
Patent number: 9542192
Abstract: A method for executing an application program using streams. A device driver receives a first command within an application program and parses the first command to identify a first stream token that is associated with a first stream. The device driver checks a memory location associated with the first stream for a first semaphore, and determines whether the first semaphore has been released. Once the first semaphore has been released, a second command within the application program is executed. Advantageously, embodiments of the invention provide a technique for developers to take advantage of the parallel execution capabilities of a GPU.
Type: Grant
Filed: August 15, 2008
Date of Patent: January 10, 2017
Assignee: NVIDIA Corporation
Inventors: Nicholas Patrick Wilt, Ian Buck, Philip Cuadra
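The stream-and-semaphore mechanism described in this abstract maps naturally onto the stream and event primitives exposed by the public CUDA runtime API. Below is a minimal sketch of the idea using documented runtime calls, not the patented driver internals; the kernel names produce and consume are illustrative:

```cuda
#include <cuda_runtime.h>

__global__ void produce(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = 2.0f * i;               // first command: fill the buffer
}

__global__ void consume(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;                  // second command: depends on produce()
}

int main() {
    const int n = 1 << 20;
    float* d_data;
    cudaMalloc(&d_data, n * sizeof(float));

    cudaStream_t streamA, streamB;
    cudaEvent_t released;                        // plays the role of the semaphore
    cudaStreamCreate(&streamA);
    cudaStreamCreate(&streamB);
    cudaEventCreate(&released);

    produce<<<(n + 255) / 256, 256, 0, streamA>>>(d_data, n);
    cudaEventRecord(released, streamA);          // "released" once produce() completes
    cudaStreamWaitEvent(streamB, released, 0);   // streamB blocks until the release
    consume<<<(n + 255) / 256, 256, 0, streamB>>>(d_data, n);

    cudaStreamSynchronize(streamB);
    cudaFree(d_data);
    return 0;
}
```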
-
Patent number: 9317290
Abstract: Circuits, methods, and apparatus that provide parallel execution relationships to be included in a function call or other appropriate portion of a command or instruction in a sequential programming language. One example provides a token-based method of expressing parallel execution relationships. Each process that can be executed in parallel is given a separate token. Later processes that depend on earlier processes wait to receive the appropriate token before being executed. In another example, counters are used in place of tokens to determine when a process is completed. Each function consists of a number of individual functions or threads, where each thread performs the same operation on a different piece of data. A counter is used to track the number of threads that have been executed. When each thread in the function has been executed, a later function that relies on data generated by the earlier function may be executed.
Type: Grant
Filed: January 7, 2013
Date of Patent: April 19, 2016
Assignee: NVIDIA Corporation
Inventors: Ian A. Buck, Bastiaan Aarts
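A minimal sketch of the counter variant follows, assuming a global counter that each thread increments as it completes. This is a user-level analogue for illustration only; the patent describes driver- and hardware-level counters rather than this exact code:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

__global__ void work(float* out, unsigned int* counter, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = i * 0.5f;                  // the per-thread operation
        atomicAdd(counter, 1u);             // record that this thread has executed
    }
}

int main() {
    const int n = 4096;
    float* d_out;
    unsigned int* d_counter;
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMalloc(&d_counter, sizeof(unsigned int));
    cudaMemset(d_counter, 0, sizeof(unsigned int));

    work<<<(n + 255) / 256, 256>>>(d_out, d_counter, n);

    unsigned int done = 0;
    cudaMemcpy(&done, d_counter, sizeof(done), cudaMemcpyDeviceToHost);
    // A dependent function would be launched once done == n.
    printf("threads completed: %u of %d\n", done, n);

    cudaFree(d_out);
    cudaFree(d_counter);
    return 0;
}
```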
-
Patent number: 8656394
Abstract: A method for executing an application program using streams. A device driver receives a first command within an application program and parses the first command to identify a first stream token that is associated with a first stream. The device driver checks a memory location associated with the first stream for a first semaphore, and determines whether the first semaphore has been released. Once the first semaphore has been released, a second command within the application program is executed. Advantageously, embodiments of the invention provide a technique for developers to take advantage of the parallel execution capabilities of a GPU.
Type: Grant
Filed: August 15, 2008
Date of Patent: February 18, 2014
Assignee: Nvidia Corporation
Inventors: Nicholas Patrick Wilt, Ian Buck, Philip Cuadra
-
Publication number: 20130283015
Abstract: Circuits, methods, and apparatus that provide parallel execution relationships to be included in a function call or other appropriate portion of a command or instruction in a sequential programming language. One example provides a token-based method of expressing parallel execution relationships. Each process that can be executed in parallel is given a separate token. Later processes that depend on earlier processes wait to receive the appropriate token before being executed. In another example, counters are used in place of tokens to determine when a process is completed. Each function consists of a number of individual functions or threads, where each thread performs the same operation on a different piece of data. A counter is used to track the number of threads that have been executed. When each thread in the function has been executed, a later function that relies on data generated by the earlier function may be executed.
Type: Application
Filed: January 7, 2013
Publication date: October 24, 2013
Inventors: Ian A. Buck, Bastiaan Aarts
-
Patent number: 8539516
Abstract: One embodiment of the present invention sets forth a method for sharing graphics objects between a compute unified device architecture (CUDA) application programming interface (API) and a graphics API. The CUDA API includes calls used to alias graphics objects allocated by the graphics API and, subsequently, synchronize accesses to the graphics objects. When an application program emits a "register" call that targets a particular graphics object, the CUDA API ensures that the graphics object is in the device memory, and maps the graphics object into the CUDA address space. Subsequently, when the application program emits "map" and "unmap" calls, the CUDA API respectively enables and disables accesses to the graphics object through the CUDA API. Further, the CUDA API uses semaphores to synchronize accesses to the shared graphics object. Finally, when the application program emits an "unregister" call, the CUDA API configures the computing system to disregard interoperability constraints.
Type: Grant
Filed: February 14, 2008
Date of Patent: September 17, 2013
Assignee: NVIDIA Corporation
Inventors: Nicholas Patrick Wilt, Ian A. Buck, Nolan David Goodnight
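The register/map/unmap/unregister lifecycle described here survives in the documented CUDA graphics-interop runtime API. A sketch, assuming an OpenGL context is current and vbo names an existing GL buffer object (error handling omitted for brevity):

```cuda
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>   // CUDA/OpenGL interoperability API

__global__ void scale(float* v, size_t n) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= 2.0f;
}

// Assumes an OpenGL context is current and `vbo` is an existing GL buffer.
void process_gl_buffer(unsigned int vbo) {
    cudaGraphicsResource* res;

    // "register": alias the GL buffer into the CUDA address space
    cudaGraphicsGLRegisterBuffer(&res, vbo, cudaGraphicsRegisterFlagsNone);

    // "map": enable CUDA access (GL access is suspended while mapped)
    cudaGraphicsMapResources(1, &res, 0);

    float* d_ptr;
    size_t bytes;
    cudaGraphicsResourceGetMappedPointer((void**)&d_ptr, &bytes, res);

    size_t n = bytes / sizeof(float);
    scale<<<(unsigned)((n + 255) / 256), 256>>>(d_ptr, n);

    // "unmap": hand the buffer back to OpenGL
    cudaGraphicsUnmapResources(1, &res, 0);

    // "unregister": drop the interoperability constraints
    cudaGraphicsUnregisterResource(res);
}
```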
-
Patent number: 8402229
Abstract: One embodiment of the present invention sets forth a method for sharing graphics objects between a compute unified device architecture (CUDA) application programming interface (API) and a graphics API. The CUDA API includes calls used to alias graphics objects allocated by the graphics API and, subsequently, synchronize accesses to the graphics objects. When an application program emits a "register" call that targets a particular graphics object, the CUDA API ensures that the graphics object is in the device memory, and maps the graphics object into the CUDA address space. Subsequently, when the application program emits "map" and "unmap" calls, the CUDA API respectively enables and disables accesses to the graphics object through the CUDA API. Further, the CUDA API uses semaphores to synchronize accesses to the shared graphics object. Finally, when the application program emits an "unregister" call, the CUDA API configures the computing system to disregard interoperability constraints.
Type: Grant
Filed: February 14, 2008
Date of Patent: March 19, 2013
Assignee: NVIDIA Corporation
Inventors: Nicholas Patrick Wilt, Ian A. Buck, Nolan David Goodnight
-
System and method for representing and managing a multi-architecture co-processor application program
Patent number: 8347310
Abstract: One embodiment of the present invention sets forth a technique for representing and managing a multi-architecture co-processor application program. Source code for co-processor functions is compiled in two stages. The first stage incorporates a majority of the computationally intensive processing steps associated with co-processor code compilation. The first stage generates virtual assembly code from the source code. The second stage generates co-processor machine code from the virtual assembly. Both the virtual assembly and co-processor machine code may be included within the co-processor enabled application program. A co-processor driver uses a description of the currently available co-processor to select between virtual assembly and co-processor machine code. If the virtual assembly code is selected, then the co-processor driver compiles the virtual assembly into machine code for the current co-processor.
Type: Grant
Filed: November 12, 2007
Date of Patent: January 1, 2013
Assignee: NVIDIA Corporation
Inventors: Julius Vanderspek, Nicholas Patrick Wilt, Jayant Kolhe, Ian A. Buck, Bastiaan Aarts
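In shipping CUDA toolchains, this two-stage scheme corresponds to PTX (the virtual assembly) plus just-in-time compilation by the driver. A sketch using the documented CUDA driver API; the file name kernel.ptx and kernel name my_kernel are hypothetical:

```cuda
#include <cuda.h>     // CUDA driver API
#include <cstdio>

// Loads "kernel.ptx" (virtual assembly emitted by, e.g., `nvcc -ptx kernel.cu`)
// and lets the driver JIT-compile it for whichever GPU is actually present.
int main() {
    cuInit(0);

    CUdevice dev;
    CUcontext ctx;
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);

    CUmodule mod;
    // The second compilation stage happens here: PTX -> machine code for `dev`.
    if (cuModuleLoad(&mod, "kernel.ptx") != CUDA_SUCCESS) {
        fprintf(stderr, "failed to JIT-compile kernel.ptx\n");
        return 1;
    }

    CUfunction fn;
    cuModuleGetFunction(&fn, mod, "my_kernel");

    // Launch one block of 32 threads; this sketch's kernel takes no parameters.
    cuLaunchKernel(fn, 1, 1, 1, 32, 1, 1, 0, nullptr, nullptr, nullptr);
    cuCtxSynchronize();

    cuModuleUnload(mod);
    cuCtxDestroy(ctx);
    return 0;
}
```
-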
Patent number: 8321849
Abstract: A virtual architecture and instruction set support explicit parallel-thread computing. The virtual architecture defines a virtual processor that supports concurrent execution of multiple virtual threads with multiple levels of data sharing and coordination (e.g., synchronization) between different virtual threads, as well as a virtual execution driver that controls the virtual processor. A virtual instruction set architecture for the virtual processor is used to define behavior of a virtual thread and includes instructions related to parallel thread behavior, e.g., data sharing and synchronization. Using the virtual platform, programmers can develop application programs in which virtual threads execute concurrently to process data; virtual translators and drivers adapt the application code to particular hardware on which it is to execute, transparently to the programmer.
Type: Grant
Filed: January 26, 2007
Date of Patent: November 27, 2012
Assignee: NVIDIA Corporation
Inventors: John R. Nickolls, Henry P. Moreton, Lars S. Nyland, Ian A. Buck, Richard C. Johnson, Robert S. Glanville, Jayant B. Kolhe
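The virtual-thread model with data sharing and synchronization is what CUDA C++ exposes as blocks of threads with shared memory and barriers. A minimal sketch of both mechanisms, assuming 256-thread blocks:

```cuda
#include <cuda_runtime.h>

// Cooperative block-level sum: virtual threads share data through on-chip
// shared memory and coordinate through a barrier. Assumes blockDim.x == 256.
__global__ void block_sum(const float* in, float* out, int n) {
    __shared__ float tile[256];              // data shared among the block's threads

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                         // synchronization between virtual threads

    // Tree reduction within the block.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0) out[blockIdx.x] = tile[0];
}
```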
-
Patent number: 8281294
Abstract: One embodiment of the present invention sets forth a technique for representing and managing a multi-architecture co-processor application program. Source code for co-processor functions is compiled in two stages. The first stage incorporates a majority of the computationally intensive processing steps associated with co-processor code compilation. The first stage generates virtual assembly code from the source code. The second stage generates co-processor machine code from the virtual assembly. Both the virtual assembly and co-processor machine code may be included within the co-processor enabled application program. A co-processor driver uses a description of the currently available co-processor to select between virtual assembly and co-processor machine code. If the virtual assembly code is selected, then the co-processor driver compiles the virtual assembly into machine code for the current co-processor.
Type: Grant
Filed: November 12, 2007
Date of Patent: October 2, 2012
Assignee: NVIDIA Corporation
Inventors: Julius Vanderspek, Nicholas Patrick Wilt, Jayant Kolhe, Ian A. Buck, Bastiaan Aarts
-
Patent number: 8276132
Abstract: One embodiment of the present invention sets forth a technique for representing and managing a multi-architecture co-processor application program. Source code for co-processor functions is compiled in two stages. The first stage incorporates a majority of the computationally intensive processing steps associated with co-processor code compilation. The first stage generates virtual assembly code from the source code. The second stage generates co-processor machine code from the virtual assembly. Both the virtual assembly and co-processor machine code may be included within the co-processor enabled application program. A co-processor driver uses a description of the currently available co-processor to select between virtual assembly and co-processor machine code. If the virtual assembly code is selected, then the co-processor driver compiles the virtual assembly into machine code for the current co-processor.
Type: Grant
Filed: November 12, 2007
Date of Patent: September 25, 2012
Assignee: NVIDIA Corporation
Inventors: Julius Vanderspek, Nicholas Patrick Wilt, Jayant Kolhe, Ian A. Buck, Bastiaan Aarts
-
Patent number: 8271763
Abstract: One embodiment of the present invention sets forth a technique for unifying the addressing of multiple distinct parallel memory spaces into a single address space for a thread. A unified memory space address is converted into an address that accesses one of the parallel memory spaces for that thread. A single type of load or store instruction may be used that specifies the unified memory space address for a thread instead of using a different type of load or store instruction to access each of the distinct parallel memory spaces.
Type: Grant
Filed: September 25, 2009
Date of Patent: September 18, 2012
Assignee: NVIDIA Corporation
Inventors: John R. Nickolls, Brett W. Coon, Ian A. Buck, Robert Steven Glanville
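In CUDA C++ terms this is the generic (unified) address space: the same pointer-typed load works regardless of which memory space the pointer targets. A minimal sketch, assuming demo is launched with 32 threads per block:

```cuda
#include <cuda_runtime.h>

// A single generic load works whether `p` points into global or shared
// memory; the hardware resolves the memory space at run time.
__device__ float first_element(const float* p) {
    return p[0];    // one load, unified address space
}

__global__ void demo(const float* g_in, float* g_out) {
    __shared__ float s_buf[32];
    s_buf[threadIdx.x] = g_in[threadIdx.x];
    __syncthreads();

    // Same function, two distinct memory spaces behind the same pointer type.
    float a = first_element(g_in);    // generic pointer into global memory
    float b = first_element(s_buf);   // generic pointer into shared memory
    g_out[threadIdx.x] = a + b;
}
```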
-
Patent number: 8261234
Abstract: A system, method, and computer program product are provided for compiling code adapted to execute utilizing a first processor, for executing the code utilizing a second processor. In operation, code adapted to execute utilizing a first processor is identified. Additionally, the code is compiled for executing the code utilizing a second processor that is different from the first processor and includes a central processing unit. Further, the code is executed utilizing the second processor.
Type: Grant
Filed: February 15, 2008
Date of Patent: September 4, 2012
Assignee: NVIDIA Corporation
Inventors: Bastiaan J. M. Aarts, Ian A. Buck
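One plausible way to lower a GPU kernel onto a CPU is to turn the implicit thread grid into explicit loops. The sketch below is illustrative only and is not the patented compiler's actual transformation:

```cuda
// Device form of the computation.
__global__ void saxpy(float a, const float* x, float* y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

// One plausible CPU lowering: the implicit grid becomes explicit loops,
// with blockIdx/threadIdx replaced by loop indices.
void saxpy_cpu(float a, const float* x, float* y, int n,
               int gridDim, int blockDim) {
    for (int block = 0; block < gridDim; ++block)
        for (int thread = 0; thread < blockDim; ++thread) {
            int i = block * blockDim + thread;
            if (i < n) y[i] = a * x[i] + y[i];
        }
}
```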
-
Publication number: 20120066668
Abstract: A general-purpose programming environment allows users to program a GPU as a general-purpose computation engine using familiar C/C++ programming constructs. Users may use declaration specifiers to identify which portions of a program are to be compiled for a CPU or a GPU. Specifically, functions, objects and variables may be specified for GPU binary compilation using declaration specifiers. A compiler separates the GPU binary code and the CPU binary code in a source file using the declaration specifiers. The location of objects and variables in different memory locations in the system may be identified using the declaration specifiers. CTA threading information is also provided for the GPU to support parallel processing.
Type: Application
Filed: July 11, 2011
Publication date: March 15, 2012
Applicant: NVIDIA Corporation
Inventors: Ian Buck, Bastiaan Aarts
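The declaration specifiers in question correspond to the familiar CUDA C++ ones (__global__, __device__, __host__, __constant__). A minimal sketch of how they partition a single source file between CPU and GPU compilation:

```cuda
#include <cuda_runtime.h>

__constant__ float coeff = 0.5f;             // placed in GPU constant memory

__device__ float shape(float x) {            // compiled for the GPU only
    return coeff * x * x;
}

__host__ __device__ float clamp01(float x) { // compiled for both CPU and GPU
    return x < 0.0f ? 0.0f : (x > 1.0f ? 1.0f : x);
}

__global__ void apply(float* data, int n) {  // GPU entry point, launched from the CPU
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = clamp01(shape(data[i]));
}
```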
-
Publication number: 20110078406
Abstract: One embodiment of the present invention sets forth a technique for unifying the addressing of multiple distinct parallel memory spaces into a single address space for a thread. A unified memory space address is converted into an address that accesses one of the parallel memory spaces for that thread. A single type of load or store instruction may be used that specifies the unified memory space address for a thread instead of using a different type of load or store instruction to access each of the distinct parallel memory spaces.
Type: Application
Filed: September 25, 2009
Publication date: March 31, 2011
Inventors: John R. Nickolls, Brett W. Coon, Ian A. Buck, Robert Steven Glanville