Patents by Inventor Gary Lauterbach
Gary Lauterbach has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11819372
Abstract: An electric toothbrush is disclosed, having a handle including a drive motor and a head portion removably coupled to the handle. The head portion includes a brush, an exit port fed by an internal irrigation lumen, and a drive shaft operatively connecting the drive motor to the brush. A disposable irrigant reservoir is removably coupled directly to the head portion. The irrigant reservoir is in fluid communication with the internal irrigation lumen of the head portion. The head portion is configured to be replaceable with respect to the handle, and the irrigant reservoir is configured to be replaceable independent of the head portion. In some examples, the irrigant reservoir is a disposable bulb (e.g., a capsule), and is positioned such that the user can hold the handle while also squeezing the reservoir with the same hand. In some examples, the toothbrush head includes a suction mechanism.
Type: Grant
Filed: July 7, 2022
Date of Patent: November 21, 2023
Assignee: OralKleen, LLC
Inventors: Kurt Kenzler, Virginia Prendergast, Mike Winterhalter, Barry Jennings, Matthew Vergin, Gary Lauterbach
-
Publication number: 20230240822
Abstract: A toothbrush system configured to be coupled to an external suction source. The toothbrush system includes a shaft with a main body with a first end and a second end, and a bristle array and a suction port arranged at the second end. The bristle array and the suction port both extend away from the shaft so that an opening of the suction port is proximal to a distal end of the bristle array. The shaft also includes a stem arranged at the first end. The stem includes a passageway that extends through the main body to provide a fluid connection to the suction port, and is configured to be coupled to the external suction source. Also, a handle is removably coupled to the shaft and includes a drive system operably coupled to the bristle array to actuate motion of the bristle array.
Type: Application
Filed: April 11, 2023
Publication date: August 3, 2023
Inventors: Virginia PRENDERGAST, Cynthia KLEIMAN, Charles LEWIS, Gary LAUTERBACH, Scott CASTANON, Dylann CERIANI, Michael WILLIAMS
-
Patent number: 11622841
Abstract: A toothbrush system configured to be coupled to an external suction source. The toothbrush system includes a shaft with a main body with a first end and a second end, and a bristle array and a suction port arranged at the second end. The bristle array and the suction port both extend away from the shaft so that an opening of the suction port is proximal to a distal end of the bristle array. The shaft also includes a stem arranged at the first end. The stem includes a passageway that extends through the main body to provide a fluid connection to the suction port, and is configured to be coupled to the external suction source. Also, a handle is removably coupled to the shaft and includes a drive system operably coupled to the bristle array to actuate motion of the bristle array.
Type: Grant
Filed: July 9, 2021
Date of Patent: April 11, 2023
Assignee: Dignity Health
Inventors: Virginia Prendergast, Cynthia Kleiman, Charles Lewis, Gary Lauterbach, Scott Castanon, Dylann Ceriani, Michael Williams
-
Publication number: 20220226088
Abstract: A tooth cleaning system includes a handheld toothbrush portion coupled by tubes to an arm-mounted module. Collectively, the handheld toothbrush portion and the arm-mounted module include an oscillating toothbrush head; an irrigation system having a pump, reservoir, and tubing; and a scavenge system having a pump, reservoir, and tubing. In some examples, portions of the irrigation system are disposed in the handheld toothbrush and portions of the irrigation system are disposed in the arm-mounted module.
Type: Application
Filed: January 20, 2021
Publication date: July 21, 2022
Inventors: Gary LAUTERBACH, Kurt KENZLER, Robert BRADY, Matthew VERGIN, Barry JENNINGS
-
Publication number: 20210330432
Abstract: A toothbrush system configured to be coupled to an external suction source. The toothbrush system includes a shaft with a main body with a first end and a second end, and a bristle array and a suction port arranged at the second end. The bristle array and the suction port both extend away from the shaft so that an opening of the suction port is proximal to a distal end of the bristle array. The shaft also includes a stem arranged at the first end. The stem includes a passageway that extends through the main body to provide a fluid connection to the suction port, and is configured to be coupled to the external suction source. Also, a handle is removably coupled to the shaft and includes a drive system operably coupled to the bristle array to actuate motion of the bristle array.
Type: Application
Filed: July 9, 2021
Publication date: October 28, 2021
Inventors: Virginia Prendergast, Cynthia Kleiman, Charles Lewis, Gary Lauterbach, Scott Castanon, Dylann Ceriani, Michael Williams
-
Publication number: 20140188996
Abstract: A server system allows the system's nodes to access a fabric interconnect of the server system directly, rather than via an interface that virtualizes the fabric interconnect as a network or storage interface. The server system also employs controllers to provide an interface to the fabric interconnect via a standard protocol, such as a network protocol or a storage protocol. The server system thus facilitates efficient and flexible transfer of data between the server system's nodes.
Type: Application
Filed: December 31, 2012
Publication date: July 3, 2014
Applicant: Advanced Micro Devices, Inc.
Inventors: Sean Lie, Gary Lauterbach
-
Patent number: 8140719
Abstract: A data center has several dis-aggregated data clusters that connect to the Internet through a firewall and load-balancer. Each dis-aggregated data cluster has several dis-aggregated compute/switch/disk chassis that are connected together by a mesh of Ethernet links. Each dis-aggregated compute/switch/disk chassis has many processing nodes, disk nodes, and I/O nodes on node cards that are inserted into the chassis. These node cards are connected together by a direct interconnect fabric. Using the direct interconnect fabric, remote I/O and disk nodes appear to the operating system to be located on the local processor's own peripheral bus. A virtual Ethernet controller and a virtual generic peripheral act as virtual endpoints for the local processor's peripheral bus. I/O and disk node peripherals are virtualized by hardware without software drivers. Rack and aggregation Ethernet switches are eliminated using the direct interconnect fabric, which provides a flatter, dis-aggregated hierarchy.
Type: Grant
Filed: May 5, 2009
Date of Patent: March 20, 2012
Assignee: Sea Micro, Inc.
Inventors: Gary Lauterbach, Anil R. Rao
-
Patent number: 8006042
Abstract: A system and method for increasing the throughput of a processor during cache misses. During the retrieval of the cache miss data, subsequent memory requests are generated and allowed to proceed to the cache. The data for the subsequent cache hits are stored in a bypass retry device. Also, the cache miss address and memory line data may be stored by the device when they are retrieved and may be sent to the cache for a cache line replacement. The bypass retry device determines the priority of sending data to the processor. The priority allows the data for memory requests to be provided to the processor in the same order as they were generated from the processor without delaying subsequent memory requests after a cache miss.
Type: Grant
Filed: November 26, 2007
Date of Patent: August 23, 2011
Assignee: GLOBALFOUNDRIES Inc.
Inventor: Gary Lauterbach
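The abstract describes the ordering behavior of the bypass retry device but not an implementation. Below is a minimal C sketch of that behavior under stated assumptions (the structure and names are illustrative, not taken from the patent): requests that hit the cache while an older miss is outstanding are parked in a program-order buffer and released to the processor only in original issue order, once the miss data has been filled in.

    /* Sketch of in-order return with hit-under-miss: hits that complete while
     * an older miss is outstanding are parked in a bypass/retry buffer and
     * released only after the miss data arrives, so the processor sees
     * responses in the order it issued the requests. */
    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_PENDING 8

    typedef struct {
        unsigned addr;
        unsigned data;
        bool     ready;   /* data available (hit, or miss data has returned) */
    } Request;

    typedef struct {
        Request entries[MAX_PENDING];
        int     head, tail, count;   /* FIFO in program order */
    } BypassRetryBuffer;

    static void issue(BypassRetryBuffer *b, unsigned addr, bool hit, unsigned data) {
        Request *r = &b->entries[b->tail];
        r->addr = addr;
        r->data = hit ? data : 0;
        r->ready = hit;                      /* a miss stays not-ready until fill */
        b->tail = (b->tail + 1) % MAX_PENDING;
        b->count++;
    }

    static void miss_fill(BypassRetryBuffer *b, unsigned addr, unsigned data) {
        for (int i = 0, idx = b->head; i < b->count; i++, idx = (idx + 1) % MAX_PENDING)
            if (b->entries[idx].addr == addr && !b->entries[idx].ready) {
                b->entries[idx].data = data;
                b->entries[idx].ready = true;
                return;
            }
    }

    /* Drain responses to the processor strictly in issue order. */
    static void drain(BypassRetryBuffer *b) {
        while (b->count > 0 && b->entries[b->head].ready) {
            printf("return addr 0x%x -> %u\n", b->entries[b->head].addr,
                   b->entries[b->head].data);
            b->head = (b->head + 1) % MAX_PENDING;
            b->count--;
        }
    }

    int main(void) {
        BypassRetryBuffer b = {0};
        issue(&b, 0x100, false, 0);   /* miss: blocks the head of the queue   */
        issue(&b, 0x200, true, 42);   /* hit: allowed to proceed, but parked  */
        drain(&b);                    /* nothing returns yet: miss is oldest  */
        miss_fill(&b, 0x100, 7);      /* miss data arrives                    */
        drain(&b);                    /* both now return, in original order   */
        return 0;
    }

A single program-order FIFO is the simplest way to honor the in-order return requirement; a real design would track multiple outstanding misses and tag responses rather than scan the buffer.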
-
Patent number: 7925802
Abstract: A multi-computer system has many processors that share peripherals. The peripherals are virtualized by hardware without software drivers. Remote peripherals appear to the operating system to be located on the local processor's own peripheral bus. A processor, DRAM, and north bridge connect to a south bridge interconnect fabric chip that has a virtual Ethernet controller and a virtual generic peripheral that act as virtual endpoints for the local processor's peripheral bus. Requests received by the virtual endpoints are encapsulated in interconnect packets and sent over an interconnect fabric to a device manager that accesses remote peripherals on a shared remote peripheral bus so that data can be returned. Ethernet Network Interface Cards (NIC), hard disks, consoles, and BIOS are remote peripherals that can be virtualized. Processors can boot entirely from the remote BIOS without additional drivers or a local BIOS. Peripheral costs are reduced by sharing remote peripherals.
Type: Grant
Filed: June 10, 2008
Date of Patent: April 12, 2011
Assignee: SeaMicro Corp.
Inventors: Gary Lauterbach, Anil Rao
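As a rough illustration of the encapsulation step described above, the following C sketch (all field names are assumptions, not taken from the patent) wraps a local peripheral-bus request in an interconnect-fabric packet addressed to the device manager that owns the shared remote peripheral.

    /* Sketch of encapsulating a local peripheral-bus request into an
     * interconnect-fabric packet so a remote device manager can service it
     * and return the data. Field names are illustrative only. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct {            /* request as seen on the local peripheral bus */
        uint64_t bus_addr;
        uint32_t length;
        uint8_t  is_write;
        uint8_t  payload[64];
    } BusRequest;

    typedef struct {            /* packet carried over the interconnect fabric */
        uint16_t src_node;      /* requesting processor node                   */
        uint16_t dst_manager;   /* device manager owning the shared peripheral */
        uint16_t virt_endpoint; /* e.g. virtual NIC or virtual disk endpoint   */
        BusRequest req;         /* encapsulated bus request                    */
    } FabricPacket;

    /* The virtual endpoint wraps the bus request; the OS needs no new driver
     * because it still believes it is talking to a local device. */
    static FabricPacket encapsulate(uint16_t src, uint16_t mgr, uint16_t ep,
                                    const BusRequest *req) {
        FabricPacket p = { .src_node = src, .dst_manager = mgr,
                           .virt_endpoint = ep };
        memcpy(&p.req, req, sizeof(*req));
        return p;
    }

    int main(void) {
        BusRequest read = { .bus_addr = 0x1000, .length = 64, .is_write = 0 };
        FabricPacket pkt = encapsulate(3, 0, 1, &read);
        printf("node %u -> manager %u, endpoint %u, %u bytes\n",
               (unsigned)pkt.src_node, (unsigned)pkt.dst_manager,
               (unsigned)pkt.virt_endpoint, (unsigned)pkt.req.length);
        return 0;
    }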
-
Patent number: 7890702
Abstract: A computer system and method. In one embodiment, a computer system comprises a processor and a cache memory. The processor executes a prefetch instruction to prefetch a block of data words into the cache memory. In one embodiment, the cache memory comprises a plurality of cache levels. The processor selects one of the cache levels based on a value of a prefetch instruction parameter indicating the temporal locality of data to be prefetched. In a further embodiment, individual words are prefetched from non-contiguous memory addresses. A single execution of the prefetch instruction allows the processor to prefetch multiple blocks into the cache memory. The number of data words in each block, the number of blocks, an address interval between each data word of each block, and an address interval between each block to be prefetched are indicated by parameters of the prefetch instruction.
Type: Grant
Filed: November 26, 2007
Date of Patent: February 15, 2011
Assignee: Advanced Micro Devices, Inc.
Inventor: Gary Lauterbach
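The parameters listed in the last sentence define a two-level strided address pattern. The C sketch below emulates that pattern in software using GCC/Clang's __builtin_prefetch as a stand-in for the single hardware instruction; mapping the patent's cache-level hint onto the builtin's locality argument is an assumption made purely for illustration.

    /* Software emulation of the address pattern implied by the abstract's
     * prefetch parameters: words_per_block words from each of num_blocks
     * blocks, with independent strides within and between blocks. */
    #include <stddef.h>

    static void strided_block_prefetch(const char *base,
                                       size_t words_per_block, size_t num_blocks,
                                       size_t word_stride, size_t block_stride) {
        for (size_t b = 0; b < num_blocks; b++)
            for (size_t w = 0; w < words_per_block; w++)
                /* The last argument (temporal-locality hint, 0..3) loosely
                 * corresponds to the abstract's cache-level selection; it must
                 * be a compile-time constant for __builtin_prefetch. */
                __builtin_prefetch(base + b * block_stride + w * word_stride, 0, 3);
    }

    int main(void) {
        static char table[1 << 20];
        /* Prefetch 4 words, 64 bytes apart, from each of 8 blocks spaced
         * 4096 bytes apart: non-contiguous addresses in one call. */
        strided_block_prefetch(table, 4, 8, 64, 4096);
        return 0;
    }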
-
Patent number: 7877559
Abstract: A processor includes at least one processing core. The processing core includes a memory cache, a store queue, and a post-retirement store queue. The processing core retires a store in the store queue and conveys the store to the memory cache and the post-retirement store queue, in response to retiring the store. In one embodiment, the store queue and/or the post-retirement store queue is a first-in, first-out queue. In a further embodiment, to convey the store to the memory cache, the processing core obtains exclusive access to a portion of the memory cache targeted by the store. The processing core buffers the store in a coalescing buffer and merges with the store, one or more additional stores and/or loads targeted to the portion of the memory cache targeted by the store prior to writing the store to the memory cache.
Type: Grant
Filed: November 26, 2007
Date of Patent: January 25, 2011
Assignee: Globalfoundries Inc.
Inventor: Gary Lauterbach
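To make the retire-then-coalesce flow concrete, here is a small C model (a sketch under assumptions, not the patented design): retired stores enter a first-in, first-out post-retirement queue and are drained through a coalescing buffer that merges stores to the same cache line into a single cache write.

    /* Toy model of a post-retirement store FIFO drained through a coalescing
     * buffer. One-byte stores keep the example short. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define LINE_BYTES 64
    #define PRSQ_DEPTH 16

    typedef struct { uint64_t addr; uint8_t data; } Store;

    typedef struct {
        Store entries[PRSQ_DEPTH];
        int head, tail, count;                /* first-in, first-out */
    } PostRetireStoreQueue;

    typedef struct {
        uint64_t line_addr;                   /* line currently being coalesced */
        uint8_t  bytes[LINE_BYTES];
        bool     valid[LINE_BYTES];
        bool     in_use;
    } CoalescingBuffer;

    static void prsq_push(PostRetireStoreQueue *q, uint64_t addr, uint8_t data) {
        q->entries[q->tail] = (Store){ addr, data };
        q->tail = (q->tail + 1) % PRSQ_DEPTH;
        q->count++;
    }

    static void write_line_to_cache(const CoalescingBuffer *cb) {
        int n = 0;
        for (int i = 0; i < LINE_BYTES; i++) n += cb->valid[i];
        printf("cache write: line 0x%llx, %d byte(s) merged\n",
               (unsigned long long)cb->line_addr, n);
    }

    /* Drain retired stores in order; stores hitting the buffered line merge,
     * while a store to a different line flushes the buffer first. */
    static void drain(PostRetireStoreQueue *q, CoalescingBuffer *cb) {
        while (q->count > 0) {
            Store s = q->entries[q->head];
            q->head = (q->head + 1) % PRSQ_DEPTH;
            q->count--;
            uint64_t line = s.addr & ~(uint64_t)(LINE_BYTES - 1);
            if (cb->in_use && cb->line_addr != line) {
                write_line_to_cache(cb);
                memset(cb, 0, sizeof(*cb));
            }
            cb->in_use = true;
            cb->line_addr = line;
            cb->bytes[s.addr % LINE_BYTES] = s.data;
            cb->valid[s.addr % LINE_BYTES] = true;
        }
        if (cb->in_use) { write_line_to_cache(cb); memset(cb, 0, sizeof(*cb)); }
    }

    int main(void) {
        PostRetireStoreQueue q = {0};
        CoalescingBuffer cb = {0};
        prsq_push(&q, 0x1000, 1);   /* same line: merged into one cache write */
        prsq_push(&q, 0x1004, 2);
        prsq_push(&q, 0x2000, 3);   /* different line: separate cache write   */
        drain(&q, &cb);
        return 0;
    }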
-
Patent number: 7822951
Abstract: A system and method for data forwarding from a store instruction to a load instruction during out-of-order execution, when the load instruction address matches against multiple older uncommitted store addresses or if the forwarding fails during the first pass due to any other reason. In a first pass, the youngest store instruction in program order of all store instructions older than a load instruction is found and an indication to the store buffer entry holding information of the youngest store instruction is recorded. In a second pass, the recorded indication is used to index the store buffer and the store bypass data is forwarded to the load instruction. Simultaneously, it is verified that no new store, younger than the previously identified store and older than the load, has been issued due to out-of-order execution.
Type: Grant
Filed: August 1, 2007
Date of Patent: October 26, 2010
Assignee: Advanced Micro Devices, Inc.
Inventors: Krishnan Ramani, Gary Lauterbach
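A minimal C sketch of the two-pass scheme follows (the store-buffer layout is assumed for illustration, not drawn from the patent): pass one records the index of the youngest matching store older than the load; pass two forwards from that index and signals a replay if a qualifying store appeared in between.

    /* Two-pass store-to-load forwarding: find, record, then forward with a
     * re-check for stores issued out of order in the meantime. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SB_ENTRIES 8

    typedef struct {
        bool     valid;
        uint64_t addr;
        uint64_t data;
        unsigned age;        /* program-order sequence number */
    } StoreBufEntry;

    /* Pass 1: index of the matching store with the greatest age that is still
     * less than load_age, or -1 if none matches. */
    static int find_forwarding_store(const StoreBufEntry *sb,
                                     uint64_t load_addr, unsigned load_age) {
        int best = -1;
        for (int i = 0; i < SB_ENTRIES; i++)
            if (sb[i].valid && sb[i].addr == load_addr && sb[i].age < load_age &&
                (best < 0 || sb[i].age > sb[best].age))
                best = i;
        return best;
    }

    /* Pass 2: forward from the recorded index unless a newer qualifying store
     * has since been issued, in which case the load must be replayed. */
    static bool forward(const StoreBufEntry *sb, int recorded,
                        uint64_t load_addr, unsigned load_age, uint64_t *out) {
        for (int i = 0; i < SB_ENTRIES; i++)
            if (sb[i].valid && sb[i].addr == load_addr &&
                sb[i].age < load_age && sb[i].age > sb[recorded].age)
                return false;                 /* replay: forwarding is stale */
        *out = sb[recorded].data;
        return true;
    }

    int main(void) {
        StoreBufEntry sb[SB_ENTRIES] = {
            { true, 0x100, 11, 5 },
            { true, 0x100, 22, 9 },           /* youngest older matching store */
        };
        unsigned load_age = 12;
        int idx = find_forwarding_store(sb, 0x100, load_age);
        uint64_t v;
        if (idx >= 0 && forward(sb, idx, 0x100, load_age, &v))
            printf("forwarded %llu from store buffer entry %d\n",
                   (unsigned long long)v, idx);
        return 0;
    }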
-
Patent number: 7734873
Abstract: A processor includes a cache hierarchy including a level-1 cache and a higher-level cache. The processor maps a portion of physical memory space to a portion of the higher-level cache, executes instructions, at least some of which comprise microcode, allows microcode to access the portion of the higher-level cache, and prevents instructions that do not comprise microcode from accessing the portion of the higher-level cache. The first portion of the physical memory space can be permanently allocated for use by microcode. The processor can move one or more cache lines of the first portion of the higher-level cache from the higher-level cache to a first portion of the level-1 cache, allow microcode to access the first portion of the level-1 cache, and prevent instructions that do not comprise microcode from accessing the first portion of the level-1 cache.
Type: Grant
Filed: May 29, 2007
Date of Patent: June 8, 2010
Assignee: Advanced Micro Devices, Inc.
Inventors: Gary Lauterbach, Bruce R. Holloway, Michael Gerard Butler, Sean Lie
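The following C fragment sketches the access check implied by the abstract, with an assumed reserved address range (the constants and names are illustrative, not from the patent): accesses to the microcode-mapped window are permitted only when the request is tagged as originating from microcode.

    /* Sketch of a microcode-only window in physical address space, backed by
     * a reserved portion of the cache hierarchy. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define UCODE_REGION_BASE 0x00FF000000ULL   /* illustrative reserved range */
    #define UCODE_REGION_SIZE (2ULL * 1024 * 1024)

    static bool in_ucode_region(uint64_t paddr) {
        return paddr >= UCODE_REGION_BASE &&
               paddr <  UCODE_REGION_BASE + UCODE_REGION_SIZE;
    }

    /* True if the access may proceed to the cache hierarchy. */
    static bool access_permitted(uint64_t paddr, bool is_microcode) {
        if (in_ucode_region(paddr))
            return is_microcode;      /* region is private to microcode */
        return true;                  /* ordinary memory: no restriction */
    }

    int main(void) {
        printf("microcode access: %s\n",
               access_permitted(UCODE_REGION_BASE + 0x40, true) ? "ok" : "fault");
        printf("regular access:   %s\n",
               access_permitted(UCODE_REGION_BASE + 0x40, false) ? "ok" : "fault");
        return 0;
    }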
-
Patent number: 7587581
Abstract: A processor reduces wasted cycle time resulting from stalling and idling, and increases the proportion of execution time, by supporting and implementing both vertical multithreading and horizontal multithreading. Vertical multithreading permits overlapping or “hiding” of cache miss wait times. In vertical multithreading, multiple hardware threads share the same processor pipeline. A hardware thread is typically a process, a lightweight process, a native thread, or the like in an operating system that supports multithreading. Horizontal multithreading increases parallelism within the processor circuit structure, for example within a single integrated circuit die that makes up a single-chip processor. To further increase system parallelism in some processor embodiments, multiple processor cores are formed in a single die. Advances in on-chip multiprocessor horizontal threading are gained as processor core sizes are reduced through technological advancements.
Type: Grant
Filed: February 23, 2007
Date of Patent: September 8, 2009
Assignee: Sun Microsystems, Inc.
Inventors: William N. Joy, Marc Tremblay, Gary Lauterbach, Joseph I. Chamdani
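As a toy illustration of the vertical-multithreading idea (a software model only, not the patented mechanism), the C sketch below switches among hardware threads that share one pipeline whenever the running thread stalls on a cache miss, so miss latency is overlapped with work from another thread.

    /* Toy cycle-by-cycle model: switch threads on a cache miss instead of
     * idling the shared pipeline. */
    #include <stdio.h>

    #define NUM_THREADS 4

    typedef struct {
        int  miss_cycles_left;   /* >0 means stalled waiting on a cache miss */
        long work_done;
    } HwThread;

    static int pick_ready(const HwThread *t, int current) {
        for (int i = 1; i <= NUM_THREADS; i++) {       /* round-robin search */
            int cand = (current + i) % NUM_THREADS;
            if (t[cand].miss_cycles_left == 0)
                return cand;
        }
        return -1;                                     /* every thread stalled */
    }

    int main(void) {
        HwThread t[NUM_THREADS] = {0};
        int running = 0;
        long busy = 0, idle = 0;

        for (int cycle = 0; cycle < 1000; cycle++) {
            for (int i = 0; i < NUM_THREADS; i++)      /* outstanding misses drain */
                if (t[i].miss_cycles_left > 0)
                    t[i].miss_cycles_left--;

            if (t[running].miss_cycles_left > 0) {     /* current thread stalled? */
                int next = pick_ready(t, running);
                if (next < 0) { idle++; continue; }    /* nothing to hide behind */
                running = next;
            }
            t[running].work_done++;
            busy++;
            if (cycle % 17 == 0)                       /* inject an occasional miss */
                t[running].miss_cycles_left = 20;
        }
        printf("busy cycles: %ld, idle cycles: %ld\n", busy, idle);
        return 0;
    }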
-
Publication number: 20090216920
Abstract: A data center has several dis-aggregated data clusters that connect to the Internet through a firewall and load-balancer. Each dis-aggregated data cluster has several dis-aggregated compute/switch/disk chassis that are connected together by a mesh of Ethernet links. Each dis-aggregated compute/switch/disk chassis has many processing nodes, disk nodes, and I/O nodes on node cards that are inserted into the chassis. These node cards are connected together by a direct interconnect fabric. Using the direct interconnect fabric, remote I/O and disk nodes appear to the operating system to be located on the local processor's own peripheral bus. A virtual Ethernet controller and a virtual generic peripheral act as virtual endpoints for the local processor's peripheral bus. I/O and disk node peripherals are virtualized by hardware without software drivers. Rack and aggregation Ethernet switches are eliminated using the direct interconnect fabric, which provides a flatter, dis-aggregated hierarchy.
Type: Application
Filed: May 5, 2009
Publication date: August 27, 2009
Applicant: SEAMICRO CORP.
Inventors: Gary Lauterbach, Anil R. Rao
-
Publication number: 20090138662
Abstract: A system and method for increasing the throughput of a processor during cache misses. During the retrieval of the cache miss data, subsequent memory requests are generated and allowed to proceed to the cache. The data for the subsequent cache hits are stored in a bypass retry device. Also, the cache miss address and memory line data may be stored by the device when they are retrieved and may be sent to the cache for a cache line replacement. The bypass retry device determines the priority of sending data to the processor. The priority allows the data for memory requests to be provided to the processor in the same order as they were generated from the processor without delaying subsequent memory requests after a cache miss.
Type: Application
Filed: November 26, 2007
Publication date: May 28, 2009
Inventor: Gary Lauterbach
-
Publication number: 20090138661
Abstract: A computer system and method. In one embodiment, a computer system comprises a processor and a cache memory. The processor executes a prefetch instruction to prefetch a block of data words into the cache memory. In one embodiment, the cache memory comprises a plurality of cache levels. The processor selects one of the cache levels based on a value of a prefetch instruction parameter indicating the temporal locality of data to be prefetched. In a further embodiment, individual words are prefetched from non-contiguous memory addresses. A single execution of the prefetch instruction allows the processor to prefetch multiple blocks into the cache memory. The number of data words in each block, the number of blocks, an address interval between each data word of each block, and an address interval between each block to be prefetched are indicated by parameters of the prefetch instruction.
Type: Application
Filed: November 26, 2007
Publication date: May 28, 2009
Inventor: Gary Lauterbach
-
Publication number: 20090138659
Abstract: A processor includes at least one processing core. The processing core includes a memory cache, a store queue, and a post-retirement store queue. The processing core retires a store in the store queue and conveys the store to the memory cache and the post-retirement store queue, in response to retiring the store. In one embodiment, the store queue and/or the post-retirement store queue is a first-in, first-out queue. In a further embodiment, to convey the store to the memory cache, the processing core obtains exclusive access to a portion of the memory cache targeted by the store. The processing core buffers the store in a coalescing buffer and merges with the store, one or more additional stores and/or loads targeted to the portion of the memory cache targeted by the store prior to writing the store to the memory cache.
Type: Application
Filed: November 26, 2007
Publication date: May 28, 2009
Inventor: Gary Lauterbach
-
Publication number: 20090037697
Abstract: A system and method for data forwarding from a store instruction to a load instruction during out-of-order execution, when the load instruction address matches against multiple older uncommitted store addresses or if the forwarding fails during the first pass due to any other reason. In a first pass, the youngest store instruction in program order of all store instructions older than a load instruction is found and an indication to the store buffer entry holding information of the youngest store instruction is recorded. In a second pass, the recorded indication is used to index the store buffer and the store bypass data is forwarded to the load instruction. Simultaneously, it is verified that no new store, younger than the previously identified store and older than the load, has been issued due to out-of-order execution.
Type: Application
Filed: August 1, 2007
Publication date: February 5, 2009
Inventors: Krishnan Ramani, Gary Lauterbach
-
Publication number: 20080320181
Abstract: A multi-computer system has many processors that share peripherals. The peripherals are virtualized by hardware without software drivers. Remote peripherals appear to the operating system to be located on the local processor's own peripheral bus. A processor, DRAM, and north bridge connect to a south bridge interconnect fabric chip that has a virtual Ethernet controller and a virtual generic peripheral that act as virtual endpoints for the local processor's peripheral bus. Requests received by the virtual endpoints are encapsulated in interconnect packets and sent over an interconnect fabric to a device manager that accesses remote peripherals on a shared remote peripheral bus so that data can be returned. Ethernet Network Interface Cards (NIC), hard disks, consoles, and BIOS are remote peripherals that can be virtualized. Processors can boot entirely from the remote BIOS without additional drivers or a local BIOS. Peripheral costs are reduced by sharing remote peripherals.
Type: Application
Filed: June 10, 2008
Publication date: December 25, 2008
Applicant: SEAMICRO CORP.
Inventors: Gary Lauterbach, Anil Rao