Patents by Inventor Steven C. Miller
Steven C. Miller has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240399154
Abstract: Devices, systems, and techniques are disclosed for planning, updating, and delivering electric field therapy. In one example, a system comprises processing circuitry configured to receive a request to deliver alternating electric field (AEF) therapy and determine therapy parameter values that define the AEF therapy, wherein the AEF therapy comprises delivery of a first electric field and a second electric field. The processing circuitry may also be configured to control an implantable medical device to deliver the first electric field from a first electrode combination of implanted electrodes and control the implantable medical device to deliver, alternating with the first electric field, the second electric field from a second electrode combination of implanted electrodes different from the first electrode combination.
Type: Application
Filed: September 8, 2022
Publication date: December 5, 2024
Inventors: Benjamin Kevin Hendrick, Steven M. Goetz, David A. Simon, Maneesh Shrivastav, Leslie Hiemenz Holton, Xuan K. Wei, David J. Miller, Ryan B. Sefkow, Phillip C. Falkner, Meredith S. Seaborn, Richard T. Stone, Robert L. Olson, Scott D DeFoe
-
Publication number: 20240394195
Abstract: A dynamic random access memory (DRAM) device includes functions configured to aid with operating the DRAM device as part of data caching functions. The DRAM is configured to respond to at least two types of commands. A first type of command (cache data access command) seeks to access a cache line of data, if present in the DRAM cache. A second type of command (cache probe command) seeks to determine whether a cache line of data is present, but does not request that the data be returned in response. In response to these types of access commands, the DRAM device is configured to receive cache tag query values and to compare stored cache tag values with the cache tag query values. A hit/miss (HM) interface/bus may indicate the result of the cache tag compare and stored cache line status bits to a controller.
Type: Application
Filed: May 15, 2024
Publication date: November 28, 2024
Inventors: Steven C. WOO, Michael Raymond MILLER, Taeksang SONG, Wendy ELSASSER, Maryam BABAIE
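As an illustration of the tag-compare behavior this abstract describes, the two command types might be modeled as below. This is a toy software sketch of hardware behavior; the class and method names (DramCacheModel, probe, access) are invented, not from the patent.

```python
class DramCacheModel:
    """Toy DRAM cache: maps a set index to (stored tag, line data, dirty bit)."""

    def __init__(self):
        self.sets = {}  # set_index -> (tag, data, dirty)

    def fill(self, set_index, tag, data, dirty=False):
        self.sets[set_index] = (tag, data, dirty)

    def probe(self, set_index, query_tag):
        """Cache probe command: report hit/miss and status bits, no data."""
        entry = self.sets.get(set_index)
        hit = entry is not None and entry[0] == query_tag
        dirty = entry[2] if hit else False
        return {"hit": hit, "dirty": dirty}  # result reported on the HM bus

    def access(self, set_index, query_tag):
        """Cache data access command: return the line only on a tag hit."""
        entry = self.sets.get(set_index)
        if entry is not None and entry[0] == query_tag:
            return {"hit": True, "data": entry[1]}
        return {"hit": False, "data": None}
```

The point of the probe command in this sketch is that the controller can learn whether a line is cached (and whether it is dirty) without consuming data-bus bandwidth on a transfer it may not need.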
-
Publication number: 20240354014
Abstract: A memory system includes two or more memory controllers capable of accessing the same dynamic random-access memory (DRAM), with one controller having access to the DRAM, or a subset of the DRAM, at a time. Different subsets of the DRAM are supported with different refresh-control circuitry, including respective refresh-address counters. Whichever controller has access to a given subset of the DRAM issues refresh requests to the corresponding refresh-address counter. Counters are synchronized before control of a given subset of the DRAM is transferred between controllers to avoid a loss of stored data.
Type: Application
Filed: May 6, 2024
Publication date: October 24, 2024
Inventors: Thomas Vogelsang, Steven C. Woo, Michael Raymond Miller
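The counter-synchronization step described here can be sketched as follows. This is a hedged illustration only; the names (RefreshCounter, hand_off) and the software framing are assumptions about hardware the abstract describes.

```python
class RefreshCounter:
    """Per-subset refresh-address counter that wraps at row_count."""

    def __init__(self, row_count):
        self.row_count = row_count
        self.next_row = 0

    def refresh(self):
        """Refresh the next row and advance the counter."""
        row = self.next_row
        self.next_row = (self.next_row + 1) % self.row_count
        return row

def hand_off(active_counter, standby_counter):
    """Synchronize counters before transferring control of a DRAM subset.

    If the standby controller resumed refreshing from a stale address,
    some rows could exceed their retention interval and lose data, so
    the standby counter is loaded with the active counter's value first.
    """
    standby_counter.next_row = active_counter.next_row
```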
-
Patent number: 12118240
Abstract: An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to: maintain a respective lookup table for each of two or more persistent storage devices in a persistent memory outside of the two or more persistent storage devices, with a first indirection granularity that is smaller than a second indirection granularity of each of the two or more persistent storage devices; buffer write requests to the two or more persistent storage devices in the persistent memory in accordance with the respective lookup tables; and perform a sequential write from the persistent memory to a particular device of the two or more persistent storage devices when a portion of the buffer that corresponds to the particular device has an amount of data to write that corresponds to the second indirection granularity. Other embodiments are disclosed and claimed.
Type: Grant
Filed: August 7, 2020
Date of Patent: October 15, 2024
Assignee: Intel Corporation
Inventors: Benjamin Walker, Sanjeev Trika, Kapil Karkra, James R. Harris, Steven C. Miller, Bishwajit Dutta
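A minimal sketch of the buffering scheme in this abstract, assuming (purely for illustration) 4 KiB fine-grained writes accumulated until one 64 KiB device-granularity chunk is ready; the granularities, class name, and device interface are all invented:

```python
FINE_GRANULARITY = 4096     # first (smaller) indirection granularity
DEVICE_GRANULARITY = 65536  # second granularity of the storage device

class BufferedWriter:
    def __init__(self, device):
        self.device = device
        self.lookup = {}    # logical block -> buffer slot (fine-grained table)
        self.buffer = []    # buffered (logical_block, data) writes

    def write(self, logical_block, data):
        """Buffer a fine-grained write and track it in the lookup table."""
        self.lookup[logical_block] = len(self.buffer)
        self.buffer.append((logical_block, data))
        buffered_bytes = len(self.buffer) * FINE_GRANULARITY
        if buffered_bytes >= DEVICE_GRANULARITY:
            # Enough data for one device-granularity chunk: issue a single
            # sequential write instead of many small random writes.
            self.device.sequential_write(self.buffer)
            self.buffer = []
            self.lookup = {}
```

The design point being illustrated: the fine-grained lookup table lives outside the devices, so the devices only ever see large sequential writes at their native indirection granularity.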
-
Patent number: 12110245
Abstract: A method for treating wastewater is disclosed. The method is useful in particular for treating wastewater generated by drilling, hydraulic fracturing, and/or cleaning the bore of an oil or natural gas well. The method may include performing cold lime softening of the wastewater to form waste salt flocs, filtration of the waste salt flocs, ozonation of the filtrate from the filtration, and reverse osmosis of the filtrate to produce a purified permeate.
Type: Grant
Filed: November 15, 2018
Date of Patent: October 8, 2024
Assignee: Eau Midstream, Inc.
Inventors: Francis C. Miller, Steven B. Addleman
-
Publication number: 20240311301
Abstract: A dynamic random access memory (DRAM) device includes functions configured to aid with operating the DRAM device as part of data caching functions. In response to some write and/or read access commands, the DRAM device is configured to copy a cache line (e.g., a dirty cache line) from the main DRAM memory array, place it in a flush buffer, and replace the copied cache line in the main DRAM memory array with a new (e.g., different) cache line of data. In response to conditions and/or events (e.g., an explicit command, refresh, a write-to-read command sequence, unused data bus bandwidth, a full flush buffer, etc.), the DRAM device transmits the cache line from the flush buffer to the controller. The controller may then transmit the cache line to other cache levels.
Type: Application
Filed: March 7, 2024
Publication date: September 19, 2024
Inventors: Michael Raymond MILLER, Steven C. Woo, Wendy Elsasser, Taeksang Song
-
Publication number: 20240311334
Abstract: A stacked processor-plus-memory device includes a processing die with an array of processing elements of an artificial neural network. Each processing element multiplies a first operand (e.g., a weight) by a second operand to produce a partial result that is passed to a subsequent processing element. To prepare for these computations, a sequencer loads the weights into the processing elements as a sequence of operands that steps through the processing elements, each operand stored in its corresponding processing element. The operands can be sequenced directly from memory to the processing elements or can be stored first in cache. The processing elements include streaming logic that disregards interruptions in the stream of operands.
Type: Application
Filed: April 2, 2024
Publication date: September 19, 2024
Inventors: Steven C. Woo, Michael Raymond Miller
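One way to picture "a sequence of operands that steps through the processing elements" is shift-register-style loading, sketched below. The shift-loading interpretation and the function names are assumptions; the patent describes hardware, and this is only a behavioral toy.

```python
def load_weights(num_pes, weight_stream):
    """Shift a stream of weights through a chain of processing elements.

    Each step, every PE passes its held operand to the next PE and
    latches the incoming one, so the first value pushed in ends up in
    the last PE. Returns the final per-PE stored weights.
    """
    pes = [None] * num_pes
    for w in weight_stream:
        pes = [w] + pes[:-1]  # each PE latches its neighbor's operand
    return pes

def forward(pes, activations):
    """Each PE multiplies its stored weight by its input; partial
    results accumulate along the chain (a dot product)."""
    acc = 0
    for w, a in zip(pes, activations):
        acc += w * a  # partial result forwarded to the next PE
    return acc
```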
-
Patent number: 12086441
Abstract: An interconnected stack of one or more Dynamic Random Access Memory (DRAM) die also has one or more custom logic, controller, or processor die. The custom die(s) of the stack include direct channel interfaces that allow direct access to memory regions on one or more DRAMs in the stack. The direct channels are time-division multiplexed such that each DRAM die is associated with a time slot on a direct channel. The custom die configures a first DRAM die to read a block of data and transmit it via the direct channel using a time slot that is assigned to a second DRAM die. The custom die also configures the second DRAM die to receive the block of data in its assigned time slot and write it.
Type: Grant
Filed: August 30, 2021
Date of Patent: September 10, 2024
Assignee: Rambus Inc.
Inventors: Michael Raymond Miller, Steven C. Woo, Thomas Vogelsang
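The time-slot trick in this abstract might be modeled as below: the source die is told to drive its read data in the destination die's slot, and the destination captures from that same slot, so the block moves die to die in one TDM frame. Every name here is invented for illustration; the real mechanism is hardware configuration, not a function call.

```python
class DramDie:
    def __init__(self, name, default_slot):
        self.name = name
        self.mem = {}
        self.tx_slot = default_slot  # slot this die normally drives
        self.rx_slot = None          # slot this die captures from

def tdm_copy(src, dst, addr):
    """Copy src.mem[addr] to dst over a time-division-multiplexed channel."""
    src.tx_slot = dst.tx_slot  # src transmits in dst's assigned slot
    dst.rx_slot = dst.tx_slot  # dst listens on its own slot
    # One TDM frame: each slot carries whatever the driving die put there.
    frame = {src.tx_slot: src.mem[addr]}
    dst.mem[addr] = frame[dst.rx_slot]
```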
-
Publication number: 20240295961
Abstract: An integrated circuit (IC) memory device includes an array of storage cells configured into multiple banks. Interface circuitry receives refresh commands from a host memory controller to refresh the multiple banks for a first refresh mode. On-die refresh control circuitry selectively generates local refresh commands to refresh the multiple banks in cooperation with the host memory controller during a designated hidden refresh interval in a second refresh mode. Mode register circuitry stores a value indicating whether the on-die refresh control circuitry is enabled for use during the second refresh mode. The interface circuitry includes backchannel control circuitry to transmit a corrective action control signal during operation in the second refresh mode.
Type: Application
Filed: March 7, 2024
Publication date: September 5, 2024
Inventors: Michael Raymond Miller, Steven C. Woo, Thomas Vogelsang
-
Publication number: 20240285199
Abstract: A continuous glucose monitoring system may utilize externally sourced information regarding the physiological state and ambient environment of its user for externally calibrating sensor glucose measurements. Externally sourced factory calibration information may be utilized, where the information is generated by comparing metrics obtained from the data used to generate the sensor's glucose sensing algorithm to similar data obtained from each batch of sensors to be used with the algorithm in the future. The output sensor glucose value of a glucose sensor may also be estimated by analytically optimizing input sensor signals to accurately correct for changes in sensitivity, run-in time, glucose current dips, and other variable sensor wear effects.
Type: Application
Filed: April 23, 2024
Publication date: August 29, 2024
Inventors: Keith Nogueira, Peter Ajemba, Michael E. Miller, Steven C. Jacks, Jeffrey Nishida, Andy Y. Tsai, Andrea Varsavsky
-
Publication number: 20240241670
Abstract: An interconnected stack of one or more Dynamic Random Access Memory (DRAM) die has a base logic die and one or more custom logic or processor die. The processor logic die snoops commands sent to and through the stack. In particular, the processor logic die may snoop mode setting commands (e.g., mode register set, or MRS, commands). At least one mode setting command that is ignored by the DRAM in the stack is used to communicate a command to the processor logic die. In response, the processor logic die may prevent commands, addresses, and data from reaching the DRAM die(s). This enables the processor logic die to send commands/addresses and communicate data with the DRAM die(s). While doing so, the processor logic die may execute software using the DRAM die(s) for program and/or data storage and retrieval.
Type: Application
Filed: January 30, 2024
Publication date: July 18, 2024
Inventors: Thomas VOGELSANG, Michael Raymond MILLER, Steven C. WOO
-
Publication number: 20230180421
Abstract: An apparatus is described. The apparatus includes a rack shelf back interface. The rack shelf back interface has first and second flanges to mount to a rack's rear-facing rack mounts. The rack shelf back interface has a power connector on an inside of a back face of the rack shelf interface, the power connector to mate with a corresponding power connector on an electronic system that is to be installed in the rack shelf. The rack shelf back interface has an alignment feature on the inside of the back face of the rack shelf interface to ensure that the power connector properly mates with the corresponding power connector.
Type: Application
Filed: December 2, 2021
Publication date: June 8, 2023
Inventors: Carl D. WILLIAMS, Steven C. MILLER
-
Publication number: 20220245522
Abstract: Methods and apparatus for employing selective compression to address congestion control for Artificial Intelligence (AI) workloads. Multiple interconnected compute nodes are used for performing an AI workload in a distributed environment, such as training an AI model. Periodically, such as following an epoch for processing batches of training data in parallel, the compute nodes exchange Tensor data (e.g., local model gradients) with one another, which may lead to network/fabric congestion. Compute nodes and/or switches in the distributed environment are configured to detect current or projected network/fabric congestion and to selectively apply variable rate compression to packets containing the Tensor data to alleviate or avoid the congestion. Compression may be selectively applied at source compute nodes by computing a network pause time and comparing that time to a compression compute time.
Type: Application
Filed: April 18, 2022
Publication date: August 4, 2022
Inventors: Aswin RAMACHANDRAN, Amedeo SAPIO, Steven C. MILLER
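The pause-time-versus-compression-time comparison described here can be sketched as a simple decision rule. The linear cost model, the parameters, and the function name are assumptions for illustration, not details from the patent.

```python
def should_compress(pending_bytes, link_rate_bps, congestion_factor,
                    compress_rate_bps):
    """Return True if compressing the Tensor data is worth it.

    network_pause: projected extra time the packets would spend waiting
    because of congestion (serialization time scaled by a congestion
    factor); compress_time: time to run the compressor over the data.
    Compress only when the pause it avoids exceeds the time it costs.
    """
    network_pause = (pending_bytes * 8 / link_rate_bps) * congestion_factor
    compress_time = pending_bytes * 8 / compress_rate_bps
    return network_pause > compress_time
```

For example, with a heavily congested link the projected pause dominates and compression wins; on an idle link the compressor's own compute time dominates and the data is sent uncompressed.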
-
Publication number: 20220173015
Abstract: An apparatus is described. The apparatus includes a cold plate. The cold plate includes an input port to receive cooled fluid. The cold plate includes an ingress manifold to feed the cooled fluid to different regions, where each of the different regions is to be located above its own respective semiconductor chip package. The cold plate includes an egress manifold to collect warmed fluid from the different regions. The cold plate includes an output port to emit the warmed fluid from the cold plate.
Type: Application
Filed: February 18, 2022
Publication date: June 2, 2022
Inventors: Prabhakar SUBRAHMANYAM, Jack D. MUMBO, Carl D. WILLIAMS, Steven C. MILLER
-
Publication number: 20200363998
Abstract: An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to: maintain a respective lookup table for each of two or more persistent storage devices in a persistent memory outside of the two or more persistent storage devices, with a first indirection granularity that is smaller than a second indirection granularity of each of the two or more persistent storage devices; buffer write requests to the two or more persistent storage devices in the persistent memory in accordance with the respective lookup tables; and perform a sequential write from the persistent memory to a particular device of the two or more persistent storage devices when a portion of the buffer that corresponds to the particular device has an amount of data to write that corresponds to the second indirection granularity. Other embodiments are disclosed and claimed.
Type: Application
Filed: August 7, 2020
Publication date: November 19, 2020
Applicant: Intel Corporation
Inventors: Benjamin Walker, Sanjeev Trika, Kapil Karkra, James R. Harris, Steven C. Miller, Bishwajit Dutta
-
Patent number: 10761779
Abstract: Techniques enable offloading operations to be performed closer to where the data is stored in systems with sharded and erasure-coded data, such as in data centers. In one example, a system includes a compute sled or compute node, which includes one or more processors. The system also includes a storage sled or storage node. The storage node includes one or more storage devices. The storage node stores at least one portion of data that is sharded and erasure-coded. Other portions of the data are stored on other storage nodes. The compute node sends a request to offload an operation to the storage node to access the sharded and erasure-coded data. The storage node then sends a request to offload the operation to one or more other storage nodes determined to store one or more codes of the data. The storage nodes perform the operation on the portions of locally stored data and provide the results to the next-level-up node.
Type: Grant
Filed: December 5, 2018
Date of Patent: September 1, 2020
Assignee: Intel Corporation
Inventors: Sanjeev N. Trika, Steven C. Miller
-
Patent number: 10542333
Abstract: Technologies for a low-latency interface with data storage of a storage sled in a data center are disclosed. In the illustrative embodiment, a storage sled stores metadata, including the location of data in a storage device, in low-latency non-volatile memory. When accessing data, the storage sled may access the metadata on the low-latency non-volatile memory and then, based on the location determined by the access to the metadata, access the location of the data in the storage device. Such an approach requires only one access to the storage device to read the data, instead of two.
Type: Grant
Filed: December 30, 2016
Date of Patent: January 21, 2020
Assignee: Intel Corporation
Inventor: Steven C. Miller
-
Patent number: 10411729
Abstract: Technologies for allocating ephemeral data storage among managed nodes include an orchestrator server to: receive ephemeral data storage availability information from the managed nodes; receive a request from a first managed node of the managed nodes to allocate an amount of ephemeral data storage as the first managed node executes one or more workloads; determine, as a function of the ephemeral data storage availability information, an availability of the requested amount of ephemeral data storage; and allocate, in response to a determination that the requested amount of ephemeral data storage is available from one or more other managed nodes, the requested amount of ephemeral data storage to the first managed node as the first managed node executes the one or more workloads. Other embodiments are also described and claimed.
Type: Grant
Filed: December 30, 2016
Date of Patent: September 10, 2019
Assignee: Intel Corporation
Inventors: Steven C. Miller, David B. Minturn
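The orchestration steps in this abstract (collect availability, check the request against it, allocate from other nodes) might be sketched as below. The function, the greedy largest-donor-first policy, and the data shapes are invented for illustration; the patent does not specify an allocation policy.

```python
def allocate_ephemeral(availability, requester, amount):
    """Allocate `amount` bytes for `requester` from other managed nodes.

    `availability` maps node name -> free ephemeral bytes, as reported to
    the orchestrator. Returns a mapping of donor node -> bytes granted,
    or None when the free capacity on the other nodes cannot satisfy the
    request. Reservations are recorded back into `availability`.
    """
    donors = {n: free for n, free in availability.items() if n != requester}
    if sum(donors.values()) < amount:
        return None  # requested amount is not available
    grants, remaining = {}, amount
    # Greedy policy (an assumption): take from the largest donors first.
    for node, free in sorted(donors.items(), key=lambda kv: -kv[1]):
        take = min(free, remaining)
        if take:
            grants[node] = take
            availability[node] -= take  # record the reservation
            remaining -= take
        if remaining == 0:
            break
    return grants
```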
-
Patent number: 10334334
Abstract: Examples may include a sled for a rack of a data center including physical storage resources. The sled comprises an array of storage devices and an array of memory. The storage devices and memory are directly coupled to storage resource processing circuits, which are themselves directly coupled to dual-mode optical network interface circuitry. The circuitry can store data on the storage devices and metadata associated with the data on non-volatile memory in the memory array.
Type: Grant
Filed: December 29, 2016
Date of Patent: June 25, 2019
Assignee: Intel Corporation
Inventors: Steven C. Miller, Michael Crocker, Aaron Gorius, Paul Dormitzer
-
Patent number: 10313769
Abstract: Technologies for managing partially synchronized writes include a managed node. The managed node is to: issue a write request to write a data block, on behalf of a workload, to multiple data storage devices connected to a network; pause execution of the workload; receive an initial acknowledgement associated with one of the multiple data storage devices, wherein the initial acknowledgement is indicative of successful storage of the data block; and resume execution of the workload after receipt of the initial acknowledgement and before receipt of subsequent acknowledgements associated with any of the other data storage devices. Other embodiments are also described and claimed.
Type: Grant
Filed: December 30, 2016
Date of Patent: June 4, 2019
Assignee: Intel Corporation
Inventor: Steven C. Miller
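The resume-after-first-acknowledgement pattern this abstract describes can be sketched with standard-library concurrency primitives. This is a software analogy only; the thread pool, the function names, and the replica interface are assumptions, not the patent's mechanism.

```python
import concurrent.futures

def replicated_write(block, replicas, store_fn):
    """Write `block` to every replica; return after the FIRST success.

    store_fn(replica, block) performs one replica write and returns its
    acknowledgement. The remaining in-flight writes keep running after
    this function returns, so the workload resumes without waiting for
    all acknowledgements.
    """
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=len(replicas))
    futures = [pool.submit(store_fn, r, block) for r in replicas]
    done, _pending = concurrent.futures.wait(
        futures, return_when=concurrent.futures.FIRST_COMPLETED)
    pool.shutdown(wait=False)  # do not block on the slower replicas
    return next(iter(done)).result()
```

The trade-off illustrated here is latency versus durability: the workload observes only the fastest device's latency, while the slower devices still complete their copies in the background.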