Patents by Inventor Craig Ross
Craig Ross has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12278042
Abstract: A transformer apparatus for an electrical power transformation system is provided. The transformer apparatus comprises three outer transformer limbs, an inner transformer limb, a transfer star, and first and second connection portions. The transfer star comprises an electromagnetic transfer core and three transfer coils. The electromagnetic transfer core extends from the inner transformer limb to each of the three outer transformer limbs at a point on each outer transformer limb between the first coil assembly and the second coil assembly. The transfer coils are wound around the electromagnetic transfer core such that each transfer coil is arranged between the inner transformer limb and a respective outer transformer limb. The transfer star is configured to allow transfer of magnetomotive force between the outer transformer limbs and the inner transformer limb of the transformer apparatus. The first and second connection portions allow magnetic flux to flow between the inner and outer transformer limbs.
Type: Grant
Filed: September 11, 2020
Date of Patent: April 15, 2025
Assignee: IONATE LIMITED
Inventors: Craig Ross, Emilia Apostol, Matthew Williams
-
Publication number: 20220301767
Abstract: A transformer apparatus for an electrical power transformation system is provided. The transformer apparatus comprises three outer transformer limbs, an inner transformer limb, a transfer star, and first and second connection portions. The transfer star comprises an electromagnetic transfer core and three transfer coils. The electromagnetic transfer core extends from the inner transformer limb to each of the three outer transformer limbs at a point on each outer transformer limb between the first coil assembly and the second coil assembly. The transfer coils are wound around the electromagnetic transfer core such that each transfer coil is arranged between the inner transformer limb and a respective outer transformer limb. The transfer star is configured to allow transfer of magnetomotive force between the outer transformer limbs and the inner transformer limb of the transformer apparatus. The first and second connection portions allow magnetic flux to flow between the inner and outer transformer limbs.
Type: Application
Filed: September 11, 2020
Publication date: September 22, 2022
Inventors: Craig Ross, Emilia Apostol, Matthew Williams
-
Patent number: 9612994
Abstract: Systems and devices configured to implement techniques for ensuring the completion of transactions while minimizing latency and power consumption are described. A device may be operably coupled to a bidirectional communications bus. A bidirectional communications bus may include a clock line and a data line. The device may be configured to determine if an initiated transaction corresponds to a device in a low power state. The device may pause the transaction. The device may replay portions of the transaction when the device is in an appropriate power state. The device may replay portions of the transaction using an override interface.
Type: Grant
Filed: September 18, 2013
Date of Patent: April 4, 2017
Assignee: NVIDIA Corporation
Inventors: Kevin Wong, Craig Ross, Thomas Dewey
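As a rough illustration of the pause-and-replay behavior the abstract describes, the C sketch below shows one way a bus controller might hold a transaction whose target is in a low-power state and later replay the already-issued bytes through an override path before finishing normally. None of this is taken from the patent: the types and hook names (`bus_transaction`, `target_is_low_power`, `override_replay_byte`, and so on) are invented for the example.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative transaction record: the bytes already driven on the data
 * line are kept so they can be replayed once the target wakes up. */
typedef struct {
    uint8_t target_addr;
    uint8_t bytes[32];
    size_t  total_len;
    size_t  sent_len;   /* how far the transaction got before pausing */
    bool    paused;
} bus_transaction;

/* Hypothetical hardware hooks, stand-ins for whatever the real
 * controller exposes. */
extern bool target_is_low_power(uint8_t addr);
extern void bus_send_byte(uint8_t byte);
extern void override_replay_byte(uint8_t byte);   /* replay path */

/* Drive the transaction, pausing instead of aborting if the target is
 * still in a low-power state. */
void bus_run(bus_transaction *t)
{
    if (target_is_low_power(t->target_addr)) {
        t->paused = true;          /* hold state, do not abort */
        return;
    }
    while (t->sent_len < t->total_len)
        bus_send_byte(t->bytes[t->sent_len++]);
    t->paused = false;
}

/* Once the target reaches an appropriate power state, replay the bytes
 * already issued through the override interface, then finish normally. */
void bus_resume(bus_transaction *t)
{
    if (!t->paused || target_is_low_power(t->target_addr))
        return;
    for (size_t i = 0; i < t->sent_len; i++)
        override_replay_byte(t->bytes[i]);
    bus_run(t);                    /* continue with the remaining bytes */
}
```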
-
Patent number: 9489245
Abstract: One embodiment of the present invention enables threads executing on a processor to locally generate and execute work within that processor by way of work queues and command blocks. A device driver, as an initialization procedure for establishing memory objects that enable the threads to locally generate and execute work, generates a work queue, and sets a GP_GET pointer of the work queue to the first entry in the work queue. The device driver also, during the initialization procedure, sets a GP_PUT pointer of the work queue to the last free entry included in the work queue, thereby establishing a range of entries in the work queue into which new work generated by the threads can be loaded and subsequently executed by the processor. The threads then populate command blocks with generated work and point entries in the work queue to the command blocks to effect processor execution of the work stored in the command blocks.
Type: Grant
Filed: October 26, 2012
Date of Patent: November 8, 2016
Assignee: NVIDIA Corporation
Inventors: Ignacio Llamas, Craig Ross Duttweiler, Jeffrey A. Bolz, Daniel Elliot Wexler
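To make the scheme in the abstract concrete, here is a minimal C sketch of a driver-initialized work queue whose entries point at thread-populated command blocks, with GP_GET marking the read position and GP_PUT the end of the free range. Beyond those two pointer names, everything (the struct layout, `thread_submit`, `execute_command`, the queue size) is an assumption made for illustration, and synchronization is omitted.

```c
#include <stddef.h>
#include <stdint.h>

#define QUEUE_ENTRIES 64
#define CMD_WORDS     16

/* A command block holds the work a thread generated locally. */
typedef struct {
    uint32_t commands[CMD_WORDS];
    uint32_t num_commands;
} command_block;

/* The work queue: each entry points at a command block. */
typedef struct {
    command_block *entries[QUEUE_ENTRIES];
    uint32_t gp_get;    /* processor's read index (driver: first entry)   */
    uint32_t gp_put;    /* last free entry (driver: end of the free range) */
    uint32_t next_free; /* where the next thread-generated block goes      */
} work_queue;

extern void execute_command(uint32_t cmd);   /* hypothetical hook */

/* Driver-side initialization, as described in the abstract. */
void driver_init_queue(work_queue *q)
{
    q->gp_get    = 0;
    q->gp_put    = QUEUE_ENTRIES - 1;
    q->next_free = 0;
    for (size_t i = 0; i < QUEUE_ENTRIES; i++)
        q->entries[i] = NULL;
}

/* Thread-side: point a free queue entry at a populated command block so
 * the processor will pick it up.  Returns 0 on success, -1 if full.
 * (Real code would need atomics or a lock; omitted for brevity.) */
int thread_submit(work_queue *q, command_block *blk)
{
    if (q->next_free == q->gp_put)   /* free range exhausted */
        return -1;
    q->entries[q->next_free] = blk;
    q->next_free = (q->next_free + 1) % QUEUE_ENTRIES;
    return 0;
}

/* Processor side: consume entries from GP_GET up to the last one the
 * threads have filled, executing each command block in turn. */
void processor_drain(work_queue *q)
{
    while (q->gp_get != q->next_free) {
        command_block *blk = q->entries[q->gp_get];
        for (uint32_t i = 0; i < blk->num_commands; i++)
            execute_command(blk->commands[i]);
        q->gp_get = (q->gp_get + 1) % QUEUE_ENTRIES;
    }
}
```

With this arrangement the processor only ever reads entries that the threads have already pointed at command blocks, which is the property the driver's GP_GET/GP_PUT initialization is meant to establish.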
-
Patent number: 9268601
Abstract: One embodiment of the present invention sets forth a technique for launching work on a processor. The method includes the steps of initializing a first state object within a memory region accessible to a program executing on the processor, populating the first state object with data associated with a first workload that is generated by the program, and triggering the processing of the first workload on the processor according to the data within the first state object.
Type: Grant
Filed: March 31, 2011
Date of Patent: February 23, 2016
Assignee: NVIDIA Corporation
Inventors: Timothy Paul Lottes Farrar, Ignacio Llamas, Daniel Elliot Wexler, Craig Ross Duttweiler
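To make the three steps in the abstract concrete, here is a minimal C sketch of an initialize / populate / trigger sequence for a workload state object. The structure fields and the flag-based trigger are assumptions made for illustration, not details from the patent.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative state object: a descriptor for one workload, placed in
 * memory the running program can reach. */
typedef struct {
    uint32_t kernel_id;       /* which routine to run            */
    uint32_t num_items;       /* how much work it covers         */
    void    *args;            /* pointer to workload parameters  */
    volatile uint32_t ready;  /* nonzero once the workload may start */
} state_object;

/* Step 1: initialize the state object inside an accessible region. */
void init_state(state_object *s)
{
    memset(s, 0, sizeof(*s));
}

/* Step 2: the program populates it with data for the workload it generated. */
void populate_state(state_object *s, uint32_t kernel_id,
                    uint32_t num_items, void *args)
{
    s->kernel_id = kernel_id;
    s->num_items = num_items;
    s->args      = args;
}

/* Step 3: trigger processing; flipping the flag stands in for whatever
 * doorbell or launch mechanism the real processor uses. */
void trigger_state(state_object *s)
{
    s->ready = 1;
}
```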
-
Patent number: 9213379
Abstract: A device for processing graphics data may include a plurality of graphics processing units. The device may include a fan to dissipate thermal energy generated during the operation of the plurality of graphics processing units. Each of the plurality of graphics processing units may generate a pulse width modulated signal to control the speed of the fan. The device may include one or more monitoring units configured to monitor a signal controlling the speed of the fan. One or more of the plurality of pulse width modulated signals may be adjusted based on the monitored signal. One or more of the plurality of pulse width modulated signals may be adjusted such that a signal controlling the fan maintains a desired duty cycle.
Type: Grant
Filed: October 17, 2013
Date of Patent: December 15, 2015
Assignee: NVIDIA Corporation
Inventors: Kevin Wong, Thomas Dewey, Craig Ross, Andrew Bell, John Lam, Gabriele Gorla
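A small C sketch of the kind of adjustment loop the abstract implies is given below: each GPU samples the duty cycle actually seen on the shared fan-control wire and nudges its own PWM output toward the desired value. The hook functions (`monitor_fan_duty`, `pwm_get_local_duty`, `pwm_set_local_duty`) and the one-step adjustment policy are illustrative assumptions, not the patented mechanism.

```c
#include <stdint.h>

/* Hypothetical hardware hooks for one GPU's PWM block and the shared
 * monitor that samples the wire actually driving the fan. */
extern uint8_t monitor_fan_duty(void);        /* measured duty, 0..100 */
extern uint8_t pwm_get_local_duty(void);      /* this GPU's output     */
extern void    pwm_set_local_duty(uint8_t d);

/* Nudge this GPU's PWM output so the signal seen at the fan converges
 * on the duty cycle thermal management asked for. */
void adjust_fan_pwm(uint8_t desired_duty)
{
    uint8_t measured = monitor_fan_duty();
    uint8_t local    = pwm_get_local_duty();

    if (measured < desired_duty && local < 100)
        pwm_set_local_duty(local + 1);   /* fan running too slow: raise */
    else if (measured > desired_duty && local > 0)
        pwm_set_local_duty(local - 1);   /* too fast: back off          */
    /* otherwise the combined signal already matches the target */
}
```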
-
Patent number: 9135081
Abstract: One embodiment of the present invention enables threads executing on a processor to locally generate and execute work within that processor by way of work queues and command blocks. A device driver, as an initialization procedure for establishing memory objects that enable the threads to locally generate and execute work, generates a work queue, and sets a GP_GET pointer of the work queue to the first entry in the work queue. The device driver also, during the initialization procedure, sets a GP_PUT pointer of the work queue to the last free entry included in the work queue, thereby establishing a range of entries in the work queue into which new work generated by the threads can be loaded and subsequently executed by the processor. The threads then populate command blocks with generated work and point entries in the work queue to the command blocks to effect processor execution of the work stored in the command blocks.
Type: Grant
Filed: October 26, 2012
Date of Patent: September 15, 2015
Assignee: NVIDIA Corporation
Inventors: Ignacio Llamas, Craig Ross Duttweiler, Jeffrey A. Bolz, Daniel Elliot Wexler
-
INTEGRATED CIRCUIT DETECTION CIRCUIT FOR A DIGITAL MULTI-LEVEL STRAP AND METHOD OF OPERATION THEREOF
Publication number: 20150219697
Abstract: An integrated circuit (IC) based detection circuit for determining a strap value and a method of detecting a digital strap value. In one embodiment, the detection circuit includes: (1) a first receiver including transistors having first electrical characteristics that define a first threshold for the first receiver, the first receiver operable to generate a first binary digit based on an input signal and the first threshold and (2) a second receiver including transistors having second electrical characteristics that differ from the first electrical characteristics and define a second threshold for the second receiver that is lower than the first threshold, the second receiver operable to generate a second binary digit based on the input signal and the second threshold, the first and second binary digits indicating whether the strap value lies above the first threshold, between the first and second thresholds or below the second threshold.
Type: Application
Filed: February 5, 2014
Publication date: August 6, 2015
Applicant: NVIDIA Corporation
Inventors: Victor Chen, Jesse Max Guss, Craig Ross, Kevin Wong, Jason Kwok-san Lee
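The two-threshold scheme reduces to a simple decode of the two receiver bits into three bands. The C sketch below shows only that decode; the enum and function names are invented for illustration and say nothing about the actual circuit implementation.

```c
#include <stdbool.h>

/* Three-level strap decode: the two receivers have different thresholds
 * (the first higher than the second), so their output bits together tell
 * which band the strap input sits in. */
typedef enum {
    STRAP_LOW,   /* below both thresholds  */
    STRAP_MID,   /* between the thresholds */
    STRAP_HIGH   /* above both thresholds  */
} strap_level;

strap_level decode_strap(bool bit_high_threshold, bool bit_low_threshold)
{
    if (bit_high_threshold)     /* input above the higher threshold */
        return STRAP_HIGH;
    if (bit_low_threshold)      /* above the lower threshold only   */
        return STRAP_MID;
    return STRAP_LOW;           /* below both thresholds            */
}
```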
-
Publication number: 20150108934
Abstract: A device for processing graphics data may include a plurality of graphics processing units. The device may include a fan to dissipate thermal energy generated during the operation of the plurality of graphics processing units. Each of the plurality of graphics processing units may generate a pulse width modulated signal to control the speed of the fan. The device may include one or more monitoring units configured to monitor a signal controlling the speed of the fan. One or more of the plurality of pulse width modulated signals may be adjusted based on the monitored signal. One or more of the plurality of pulse width modulated signals may be adjusted such that a signal controlling the fan maintains a desired duty cycle.
Type: Application
Filed: October 17, 2013
Publication date: April 23, 2015
Applicant: NVIDIA Corporation
Inventors: Kevin Wong, Thomas Dewey, Craig Ross, Andrew Bell, John Lam, Gabriele Gorla
-
Publication number: 20150081937
Abstract: Systems and devices configured to implement techniques for ensuring the completion of transactions while minimizing latency and power consumption are described. A device may be operably coupled to a bidirectional communications bus. A bidirectional communications bus may include a clock line and a data line. The device may be configured to determine if an initiated transaction corresponds to a device in a low power state. The device may pause the transaction. The device may replay portions of the transaction when the device is in an appropriate power state. The device may replay portions of the transaction using an override interface.
Type: Application
Filed: September 18, 2013
Publication date: March 19, 2015
Applicant: NVIDIA Corporation
Inventors: Kevin Wong, Craig Ross, Thomas Dewey
-
Publication number: 20140123144
Abstract: One embodiment of the present invention enables threads executing on a processor to locally generate and execute work within that processor by way of work queues and command blocks. A device driver, as an initialization procedure for establishing memory objects that enable the threads to locally generate and execute work, generates a work queue, and sets a GP_GET pointer of the work queue to the first entry in the work queue. The device driver also, during the initialization procedure, sets a GP_PUT pointer of the work queue to the last free entry included in the work queue, thereby establishing a range of entries in the work queue into which new work generated by the threads can be loaded and subsequently executed by the processor. The threads then populate command blocks with generated work and point entries in the work queue to the command blocks to effect processor execution of the work stored in the command blocks.
Type: Application
Filed: October 26, 2012
Publication date: May 1, 2014
Applicant: NVIDIA Corporation
Inventors: Ignacio Llamas, Craig Ross Duttweiler, Jeffrey A. Bolz, Daniel Elliot Wexler
-
Publication number: 20140122838
Abstract: One embodiment of the present invention enables threads executing on a processor to locally generate and execute work within that processor by way of work queues and command blocks. A device driver, as an initialization procedure for establishing memory objects that enable the threads to locally generate and execute work, generates a work queue, and sets a GP_GET pointer of the work queue to the first entry in the work queue. The device driver also, during the initialization procedure, sets a GP_PUT pointer of the work queue to the last free entry included in the work queue, thereby establishing a range of entries in the work queue into which new work generated by the threads can be loaded and subsequently executed by the processor. The threads then populate command blocks with generated work and point entries in the work queue to the command blocks to effect processor execution of the work stored in the command blocks.
Type: Application
Filed: October 26, 2012
Publication date: May 1, 2014
Applicant: NVIDIA Corporation
Inventors: Ignacio Llamas, Craig Ross Duttweiler, Jeffrey A. Bolz, Daniel Elliot Wexler
-
Publication number: 20110247018
Abstract: One embodiment of the present invention sets forth a technique for launching work on a processor. The method includes the steps of initializing a first state object within a memory region accessible to a program executing on the processor, populating the first state object with data associated with a first workload that is generated by the program, and triggering the processing of the first workload on the processor according to the data within the first state object.
Type: Application
Filed: March 31, 2011
Publication date: October 6, 2011
Inventors: Timothy Paul Lottes Farrar, Ignacio Llamas, Daniel Elliot Wexler, Craig Ross Duttweiler
-
Publication number: 20070259749
Abstract: A torque vectoring differential apparatus is disposed in a vehicle powertrain that includes an engine, a transmission, and the torque vectoring differential. The torque vectoring differential includes a bevel gear differential assembly having a carrier input and side gear outputs. The side gear outputs are controlled to distribute torque and speed from the input member to the individual side gears through two speed control mechanisms, each of which is controlled by a respective individually selectively engageable torque-transmitting mechanism. The differential assembly and the speed control mechanisms are disposed in separate housings.
Type: Application
Filed: May 3, 2006
Publication date: November 8, 2007
Inventor: Craig Ross
-
Publication number: 20070259751
Abstract: A vehicle powertrain includes an engine, transmission, differential mechanism, and a gearing mechanism to control speed relationships. The gearing mechanism includes a plurality of intermeshing gears, two of which are selectively interconnectible by a torque-transmitting mechanism. The gearing mechanism is arranged to control the speed differential between two members of the differential mechanism and therefore to control the speed relationship between the output gears or side gears of the differential mechanism.
Type: Application
Filed: May 3, 2006
Publication date: November 8, 2007
Inventors: Craig Ross, Clinton Carey
-
Publication number: 20070084517
Abstract: The present invention relates to a manifold for a vehicle transmission. The manifold defines a plurality of laterally extending channels suitable for simultaneously transferring oil at separate flow rates and pressure signals and may be subdivided to provide a plurality of like defined manifolds. The manifold, formed through an extrusion process, is fittable within or around a transmission shaft.
Type: Application
Filed: September 23, 2005
Publication date: April 19, 2007
Inventors: Joel Maguire, Craig Ross