Patents by Inventor Steven C. Miller
Steven C. Miller has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12118240
Abstract: An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to maintain a respective lookup table for each of two or more persistent storage devices in a persistent memory outside of the two or more persistent storage devices with a first indirection granularity that is smaller than a second indirection granularity of each of the two or more persistent storage devices, buffer write requests to the two or more persistent storage devices in the persistent memory in accordance with the respective lookup tables, and perform a sequential write from the persistent memory to a particular device of the two or more persistent storage devices when a portion of the buffer that corresponds to the particular device has an amount of data to write that corresponds to the second indirection granularity. Other embodiments are disclosed and claimed.
Type: Grant
Filed: August 7, 2020
Date of Patent: October 15, 2024
Assignee: Intel Corporation
Inventors: Benjamin Walker, Sanjeev Trika, Kapil Karkra, James R. Harris, Steven C. Miller, Bishwajit Dutta
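A minimal sketch of the buffering idea described in the abstract, not the patented implementation: fine-grained writes are tracked per device in a shared persistent-memory staging area and flushed as one sequential write once a device's coarser indirection unit is full. The class name, granularities, and dict-based "lookup table" are illustrative assumptions.

```python
FINE_GRANULARITY = 512          # bytes tracked per entry in the PMem lookup table (assumed)
COARSE_GRANULARITY = 4096       # indirection unit of each persistent storage device (assumed)

class WriteBuffer:
    def __init__(self, device_ids):
        # One lookup table and one staging buffer per device; these would live
        # in persistent memory in the described scheme, plain dicts here.
        self.tables = {dev: {} for dev in device_ids}   # logical addr -> staged data
        self.staged = {dev: bytearray() for dev in device_ids}

    def write(self, dev, logical_addr, data):
        """Record a fine-grained write; flush when a full coarse unit is staged."""
        assert len(data) == FINE_GRANULARITY
        self.tables[dev][logical_addr] = data
        self.staged[dev] += data
        if len(self.staged[dev]) >= COARSE_GRANULARITY:
            self._flush(dev)

    def _flush(self, dev):
        # One sequential write of a full indirection unit to the target device.
        chunk = self.staged[dev][:COARSE_GRANULARITY]
        self.staged[dev] = self.staged[dev][COARSE_GRANULARITY:]
        print(f"device {dev}: sequential write of {len(chunk)} bytes")

buf = WriteBuffer(["ssd0", "ssd1"])
for i in range(8):
    buf.write("ssd0", i * FINE_GRANULARITY, bytes(FINE_GRANULARITY))
```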
-
Publication number: 20230180421
Abstract: An apparatus is described. The apparatus includes a rack shelf back interface. The rack shelf back interface has first and second flanges to mount to a rack's rear facing rack mounts. The rack shelf back interface has a power connector on an inside of a back face of the rack shelf interface, the power connector to mate with a corresponding power connector on an electronic system that is to be installed in the rack shelf. The rack shelf back interface has an alignment feature on the inside of the back face of the rack shelf interface to ensure that the power connector properly mates with the corresponding power connector.
Type: Application
Filed: December 2, 2021
Publication date: June 8, 2023
Inventors: Carl D. WILLIAMS, Steven C. MILLER
-
Publication number: 20220245522
Abstract: Methods and apparatus for employing selective compression for addressing congestion control for Artificial Intelligence (AI) workloads. Multiple interconnected compute nodes are used for performing an AI workload in a distributed environment, such as training an AI model. Periodically, such as following an epoch for processing batches of training data in parallel, the compute nodes exchange Tensor data (e.g., local model gradients) with one another, which may lead to network/fabric congestion. Compute nodes and/or switches in the distributed environment are configured to detect current or projected network/fabric congestion and to selectively apply variable rate compression to packets containing the Tensor data to alleviate/avoid the congestion. Compression may be selectively applied to Tensor data at source compute nodes by computing a network pause time and comparing that time to a compression compute time.
Type: Application
Filed: April 18, 2022
Publication date: August 4, 2022
Inventors: Aswin RAMACHANDRAN, Amedeo SAPIO, Steven C. MILLER
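A minimal sketch of the decision rule mentioned at the end of the abstract, under stated assumptions: a source node compresses a Tensor payload only when the projected congestion pause would exceed the time spent compressing. The throughput figures are illustrative, and zlib stands in for whatever variable-rate compressor the system would use.

```python
import zlib

def should_compress(payload_bytes, congestion_pause_s, compress_bps):
    """Compress only when the projected network pause exceeds the compression cost."""
    compress_time_s = payload_bytes / compress_bps
    return congestion_pause_s > compress_time_s

def send_gradients(payload, congestion_pause_s=0.002, compress_bps=5e9):
    # congestion_pause_s would come from current/projected congestion detection.
    if should_compress(len(payload), congestion_pause_s, compress_bps):
        payload = zlib.compress(payload, level=1)   # stand-in for a variable-rate compressor
    return payload                                  # handed to the fabric in the real system

packet = send_gradients(bytes(4 * 1024 * 1024))
print(len(packet), "bytes on the wire")
```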
-
Publication number: 20220173015
Abstract: An apparatus is described. The apparatus includes a cold plate. The cold plate includes an input port to receive cooled fluid. The cold plate includes an ingress manifold to feed the cooled fluid to different regions, where each of the different regions is to be located above its own respective semiconductor chip package. The cold plate includes an egress manifold to collect warmed fluid from the different regions. The cold plate includes an output port to emit the warmed fluid from the cold plate.
Type: Application
Filed: February 18, 2022
Publication date: June 2, 2022
Inventors: Prabhakar SUBRAHMANYAM, Jack D. MUMBO, Carl D. WILLIAMS, Steven C. MILLER
-
Publication number: 20200363998
Abstract: An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to maintain a respective lookup table for each of two or more persistent storage devices in a persistent memory outside of the two or more persistent storage devices with a first indirection granularity that is smaller than a second indirection granularity of each of the two or more persistent storage devices, buffer write requests to the two or more persistent storage devices in the persistent memory in accordance with the respective lookup tables, and perform a sequential write from the persistent memory to a particular device of the two or more persistent storage devices when a portion of the buffer that corresponds to the particular device has an amount of data to write that corresponds to the second indirection granularity. Other embodiments are disclosed and claimed.
Type: Application
Filed: August 7, 2020
Publication date: November 19, 2020
Applicant: Intel Corporation
Inventors: Benjamin Walker, Sanjeev Trika, Kapil Karkra, James R. Harris, Steven C. Miller, Bishwajit Dutta
-
Patent number: 10761779
Abstract: Techniques enable offloading operations to be performed closer to where the data is stored in systems with sharded and erasure-coded data, such as in data centers. In one example, a system includes a compute sled or compute node, which includes one or more processors. The system also includes a storage sled or storage node. The storage node includes one or more storage devices. The storage node stores at least one portion of data that is sharded and erasure-coded. Other portions of the data are stored on other storage nodes. The compute node sends a request to offload an operation to the storage node to access the sharded and erasure-coded data. The storage node then sends a request to offload the operation to one or more other storage nodes determined to store one or more codes of the data. The storage nodes perform the operation on the portions of locally stored data and provide the results to the next-level-up node.
Type: Grant
Filed: December 5, 2018
Date of Patent: September 1, 2020
Assignee: Intel Corporation
Inventors: Sanjeev N. Trika, Steven C. Miller
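A minimal sketch of the fan-out flow the abstract describes, not the patented protocol: the compute node offloads an operation to the storage node holding one portion of the data, that node forwards the request to peers holding the remaining portions, each peer runs the operation on its locally stored data, and results flow back up. The node names and the "operation" (a byte count) are illustrative assumptions.

```python
class StorageNode:
    def __init__(self, name, shard, peers=None):
        self.name, self.shard, self.peers = name, shard, peers or []

    def offload(self, op):
        local = op(self.shard)                               # operate on locally stored data
        peer_results = [p.offload(op) for p in self.peers]   # forward to nodes holding other codes
        merged = {self.name: local}
        for result in peer_results:
            merged.update(result)
        return merged                                         # results returned to the next-level-up node

peer_a = StorageNode("storage-1", shard=b"bbbb")
peer_b = StorageNode("storage-2", shard=b"cc")
primary = StorageNode("storage-0", shard=b"aaaaaa", peers=[peer_a, peer_b])

# The compute node only talks to the primary storage node.
print(primary.offload(len))    # {'storage-0': 6, 'storage-1': 4, 'storage-2': 2}
```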
-
Patent number: 10542333
Abstract: Technologies for a low-latency interface with data storage of a storage sled in a data center are disclosed. In the illustrative embodiment, a storage sled stores metadata including the location of data in a storage device in low-latency non-volatile memory. When accessing data, the storage sled may access the metadata on the low-latency non-volatile memory and then, based on the location determined by the access to the metadata, access the location of the data in the storage device. Such an approach results in only one access to the data storage in order to read the data instead of two.
Type: Grant
Filed: December 30, 2016
Date of Patent: January 21, 2020
Assignee: Intel Corporation
Inventor: Steven C. Miller
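A minimal sketch of the single-access read path, under assumed names: the block location is looked up in a fast metadata store (standing in for the low-latency non-volatile memory), so the slower storage device is touched exactly once per read.

```python
metadata = {}                      # key -> (offset, length); lives in low-latency NVM in the described system
storage = bytearray(1 << 20)       # stands in for the storage device

def put(key, data, offset):
    storage[offset:offset + len(data)] = data
    metadata[key] = (offset, len(data))            # metadata update in the fast tier

def get(key):
    offset, length = metadata[key]                 # fast metadata lookup, no device access
    return bytes(storage[offset:offset + length])  # a single access to the storage device

put("blob", b"hello sled", offset=4096)
print(get("blob"))
```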
-
Patent number: 10411729
Abstract: Technologies for allocating ephemeral data storage among managed nodes include an orchestrator server to receive ephemeral data storage availability information from the managed nodes, receive a request from a first managed node of the managed nodes to allocate an amount of ephemeral data storage as the first managed node executes one or more workloads, determine, as a function of the ephemeral data storage availability information, an availability of the requested amount of ephemeral data storage, and allocate, in response to a determination that the requested amount of ephemeral data storage is available from one or more other managed nodes, the requested amount of ephemeral data storage to the first managed node as the first managed node executes the one or more workloads. Other embodiments are also described and claimed.
Type: Grant
Filed: December 30, 2016
Date of Patent: September 10, 2019
Assignee: Intel Corporation
Inventors: Steven C. Miller, David B. Minturn
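A minimal sketch of the orchestrator's allocation decision, with invented node names and capacities: availability reports are tracked per managed node, and a request is granted only if enough spare ephemeral capacity exists on nodes other than the requester.

```python
availability = {"node-a": 40, "node-b": 120, "node-c": 75}   # free ephemeral GB, as reported

def allocate(requester, amount_gb):
    donors = {n: free for n, free in availability.items() if n != requester}
    if sum(donors.values()) < amount_gb:
        return None                                   # not enough ephemeral capacity available
    grant, remaining = {}, amount_gb
    for node, free in sorted(donors.items(), key=lambda kv: -kv[1]):
        take = min(free, remaining)
        grant[node] = take
        availability[node] -= take
        remaining -= take
        if remaining == 0:
            break
    return grant                                      # ephemeral storage assigned to the requester

print(allocate("node-a", 150))    # e.g. {'node-b': 120, 'node-c': 30}
```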
-
Patent number: 10334334
Abstract: Examples may include a sled for a rack of a data center including physical storage resources. The sled comprises an array of storage devices and an array of memory. The storage devices and memory are directly coupled to storage resource processing circuits, which are themselves directly coupled to dual-mode optical network interface circuitry. The circuitry can store data on the storage devices and metadata associated with the data on non-volatile memory in the memory array.
Type: Grant
Filed: December 29, 2016
Date of Patent: June 25, 2019
Assignee: INTEL CORPORATION
Inventors: Steven C. Miller, Michael Crocker, Aaron Gorius, Paul Dormitzer
-
Patent number: 10313769
Abstract: Technologies for managing partially synchronized writes include a managed node. The managed node is to issue a write request to write a data block, on behalf of a workload, to multiple data storage devices connected to a network, pause execution of the workload, receive an initial acknowledgment associated with one of the multiple data storage devices, wherein the initial acknowledgement is indicative of successful storage of the data block, and resume execution of the workload after receipt of the initial acknowledgement and before receipt of subsequent acknowledgements associated with any of the other data storage devices. Other embodiments are also described and claimed.
Type: Grant
Filed: December 30, 2016
Date of Patent: June 4, 2019
Assignee: Intel Corporation
Inventor: Steven C. Miller
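A minimal sketch of resuming on the first acknowledgement, with simulated devices and latencies: the workload blocks only until one replica reports success, while the remaining acknowledgements arrive in the background. Device names and timings are illustrative.

```python
import concurrent.futures
import random
import time

def write_replica(device, block):
    time.sleep(random.uniform(0.01, 0.1))     # simulated network/storage latency
    return f"{device}: stored {len(block)} bytes"

def replicated_write(block, devices):
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=len(devices))
    futures = [pool.submit(write_replica, d, block) for d in devices]
    first = next(concurrent.futures.as_completed(futures))   # the initial acknowledgement
    print("resume workload after:", first.result())
    pool.shutdown(wait=False)                 # remaining acknowledgements complete asynchronously

replicated_write(b"x" * 4096, ["nvme-0", "nvme-1", "nvme-2"])
```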
-
Publication number: 20190114114
Abstract: Techniques enable offloading operations to be performed closer to where the data is stored in systems with sharded and erasure-coded data, such as in data centers. In one example, a system includes a compute sled or compute node, which includes one or more processors. The system also includes a storage sled or storage node. The storage node includes one or more storage devices. The storage node stores at least one portion of data that is sharded and erasure-coded. Other portions of the data are stored on other storage nodes. The compute node sends a request to offload an operation to the storage node to access the sharded and erasure-coded data. The storage node then sends a request to offload the operation to one or more other storage nodes determined to store one or more codes of the data. The storage nodes perform the operation on the portions of locally stored data and provide the results to the next-level-up node.
Type: Application
Filed: December 5, 2018
Publication date: April 18, 2019
Inventors: Sanjeev N. TRIKA, Steven C. MILLER
-
Publication number: 20190042365
Abstract: Examples include techniques for performing read-optimized lazy erasure encoding of data streams. An embodiment includes receiving a request to write a stream of data, separating the stream into a first plurality of extents, storing a primary replica and one or more additional replicas of each extent of the separated stream to a plurality of data storage nodes, and updating a list of extents to be erasure encoded. The embodiment further includes, when an erasure encoded stripe can be created, getting the data for each of the extents of the erasure encoded stripe, calculating parity extents for unencoded extents of the erasure encoded stripe, writing the parity extents to a second plurality of data storage nodes, and deleting the one or more additional replicas of the extents of the erasure encoded stripe from the first plurality of data storage nodes.
Type: Application
Filed: September 26, 2018
Publication date: February 7, 2019
Inventors: Kimberly A. MALONE, Steven C. MILLER
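A minimal sketch of the lazy-encoding bookkeeping, not the described system: extents are first stored as full replicas, and once enough unencoded extents accumulate to fill a stripe, a parity extent is computed (plain XOR here, standing in for the erasure code) and the extra replicas become eligible for deletion. Stripe width, extent sizes, and the in-memory structures are assumptions.

```python
STRIPE_WIDTH = 4                       # data extents per erasure-coded stripe (assumed)

pending = []                           # list of extents awaiting erasure encoding
replicas = {}                          # extent_id -> list of replica copies

def write_extent(extent_id, data):
    replicas[extent_id] = [data, data]           # primary + one additional replica
    pending.append((extent_id, data))            # update the list of extents to encode
    if len(pending) >= STRIPE_WIDTH:             # a stripe can now be created
        encode_stripe(pending[:STRIPE_WIDTH])
        del pending[:STRIPE_WIDTH]

def encode_stripe(extents):
    parity = bytes(len(extents[0][1]))
    for _, data in extents:
        parity = bytes(a ^ b for a, b in zip(parity, data))   # XOR stands in for real parity math
    print("parity extent written:", parity.hex())
    for extent_id, _ in extents:
        replicas[extent_id] = replicas[extent_id][:1]         # drop the now-redundant replica

for i in range(4):
    write_extent(i, bytes([i]) * 8)
```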
-
Patent number: 10091904
Abstract: Examples may include a sled for a rack of a data center including physical storage resources. The sled comprises an array of storage devices and an array of memory. The storage devices and memory are directly coupled to storage resource processing circuits, which are themselves directly coupled to dual-mode optical network interface circuitry. The dual-mode optical network interface circuitry can have a bandwidth equal to or greater than the storage devices.
Type: Grant
Filed: December 29, 2016
Date of Patent: October 2, 2018
Assignee: INTEL CORPORATION
Inventors: Steven C. Miller, Michael Crocker, Aaron Gorius, Paul Dormitzer
-
Patent number: 10034407
Abstract: Examples may include a sled for a rack of a data center including physical storage resources. The sled comprises mounting flanges to enable robotic insertion and removal from a rack and storage device mounting slots to enable robotic insertion and removal of storage devices into the sled. The storage devices are coupled to an optical fabric through storage resource controllers and a dual-mode optical network interface.
Type: Grant
Filed: December 29, 2016
Date of Patent: July 24, 2018
Assignee: Intel Corporation
Inventors: Steven C. Miller, Michael Crocker, Aaron Gorius, Paul Dormitzer
-
Publication number: 20180024752
Abstract: Technologies for low-latency compression in a data center are disclosed. In the illustrative embodiment, a storage sled compresses data with a low-latency compression algorithm prior to storing the data. The latency of the compression algorithm is less than the latency of the storage device, so that storage and retrieval times are not significantly affected by the compression and decompression. In other embodiments, a compute sled may compress data with a low-latency compression algorithm prior to sending the data to a storage sled.
Type: Application
Filed: December 30, 2016
Publication date: January 25, 2018
Inventors: Steven C. Miller, Vinodh Gopal, Kirk S. Yap, James D. Guilford, Wajdi K. Feghali
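A minimal sketch of compressing on the write path when the compressor is faster than the device, with invented latency figures: zlib stands in for the low-latency compression algorithm, and a dict stands in for the storage sled's media.

```python
import time
import zlib

DEVICE_WRITE_LATENCY_US = 80               # illustrative storage-device write latency
device_media = {}                           # stands in for the storage sled's media

def store(block_id, block):
    start = time.perf_counter()
    compressed = zlib.compress(block, level=1)     # stand-in low-latency compressor
    compress_us = (time.perf_counter() - start) * 1e6
    device_media[block_id] = compressed            # the actual device write follows compression
    print(f"compressed {len(block)} -> {len(compressed)} bytes in {compress_us:.1f} us "
          f"(device write budget ~{DEVICE_WRITE_LATENCY_US} us)")

store(0, b"data " * 1000)
```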
-
Publication number: 20180024775
Abstract: Technologies for storage block virtualization include multiple computing devices in communication over an optical fabric. A computing device receives a non-volatile memory (NVM) I/O command from an application via an optical fabric interface. The NVM I/O command is indicative of one or more virtual data storage blocks. The computing device maps the virtual data storage blocks to one or more physical data storage blocks, each of which is included in a solid-state data storage device of the computing device. The computing device performs the I/O command with the physical data storage blocks and then sends a response to the application. Mapping the virtual data storage blocks may include performing one or more data services. The computing device may be embodied as a storage sled of a data center, and the application may be executed by a compute sled of the data center. Other embodiments are described and claimed.
Type: Application
Filed: December 30, 2016
Publication date: January 25, 2018
Inventor: Steven C. Miller
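A minimal sketch of virtual-to-physical block mapping on the storage side, with an invented map and an in-memory "device": each I/O command names virtual blocks, the storage node translates them, performs the access, and replies to the application.

```python
BLOCK_SIZE = 512
virt_to_phys = {0: 7, 1: 3, 2: 12}                     # virtual block -> physical block (assumed map)
device = bytearray(BLOCK_SIZE * 16)                    # stands in for the solid-state data storage device

def handle_nvm_write(virtual_block, data):
    phys = virt_to_phys[virtual_block]                 # mapping step (may also apply data services)
    device[phys * BLOCK_SIZE:(phys + 1) * BLOCK_SIZE] = data.ljust(BLOCK_SIZE, b"\0")
    return {"status": "ok", "virtual_block": virtual_block}   # response to the application

def handle_nvm_read(virtual_block):
    phys = virt_to_phys[virtual_block]
    return bytes(device[phys * BLOCK_SIZE:(phys + 1) * BLOCK_SIZE])

print(handle_nvm_write(1, b"hello"))
print(handle_nvm_read(1)[:5])
```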
-
Publication number: 20180024776
Abstract: Technologies for managing partially synchronized writes include a managed node. The managed node is to issue a write request to write a data block, on behalf of a workload, to multiple data storage devices connected to a network, pause execution of the workload, receive an initial acknowledgment associated with one of the multiple data storage devices, wherein the initial acknowledgement is indicative of successful storage of the data block, and resume execution of the workload after receipt of the initial acknowledgement and before receipt of subsequent acknowledgements associated with any of the other data storage devices. Other embodiments are also described and claimed.
Type: Application
Filed: December 30, 2016
Publication date: January 25, 2018
Inventor: Steven C. Miller
-
Publication number: 20180024764
Abstract: Technologies for accelerating data writes include a managed node that includes a network interface controller that includes a power loss protected buffer and non-volatile memory. The managed node is to receive, through the network interface controller, a write request from a remote device. The write request includes a data block. The managed node is also to write the data block to the power loss protected buffer of the network interface controller, and send, in response to receipt of the data block and prior to a write of the data block to the non-volatile memory, an acknowledgement to the remote device. The acknowledgement is indicative of a successful write of the data block to the non-volatile memory. The managed node is also to write, after the acknowledgement has been sent, the data block from the power loss protected buffer to the non-volatile memory. Other embodiments are also described and claimed.
Type: Application
Filed: December 30, 2016
Publication date: January 25, 2018
Inventor: Steven C. Miller
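A minimal sketch of the early-acknowledgement path, with simulated components: the data block lands in a queue standing in for the NIC's power-loss-protected buffer, the acknowledgement is returned before the slower non-volatile write, and a background thread drains the buffer into the NVM afterwards. Timings and names are illustrative.

```python
import queue
import threading
import time

plp_buffer = queue.Queue()             # stands in for the NIC's power-loss-protected buffer
nvm = []                               # stands in for the node's non-volatile memory

def handle_write_request(block):
    plp_buffer.put(block)              # assumed durable across power loss once buffered
    return "ACK"                       # sent to the remote device before the NVM write

def drain_buffer():
    while True:
        block = plp_buffer.get()
        time.sleep(0.01)               # simulated slower NVM write
        nvm.append(block)
        plp_buffer.task_done()

threading.Thread(target=drain_buffer, daemon=True).start()
print(handle_write_request(b"payload"))   # acknowledged immediately
plp_buffer.join()                          # background write completes afterwards
print(len(nvm), "block(s) persisted")
```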
-
Publication number: 20180024740
Abstract: Technologies for variable extent storage include multiple computing devices in communication over an optical fabric. A computing device receives a key-value storage request from an application that is indicative of a key. The computing device identifies one or more non-volatile storage blocks to store a value associated with the key and issues a non-volatile memory (NVM) input/output (I/O) command indicative of the NVM storage blocks to an NVM subsystem. The key-value storage request may include a read request or a store request, and the I/O command may include a read command or a write command. The I/O command may be issued to an NVM subsystem over the optical fabric. The computing device may be embodied as a storage sled of a data center, and the application may be executed by a compute sled of the data center. Other embodiments are described and claimed.
Type: Application
Filed: December 30, 2016
Publication date: January 25, 2018
Inventors: Steven C. Miller, David Minturn
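A minimal sketch of turning a key-value request into a block-level NVM command, with an invented allocator and command format: the value is spread over however many blocks it needs (the variable extent), and the block list is recorded so a later read request can be translated the same way.

```python
BLOCK_SIZE = 4096
next_free_block = 0
key_index = {}                         # key -> list of NVM block numbers holding its value

def kv_put(key, value):
    global next_free_block
    nblocks = -(-len(value) // BLOCK_SIZE)            # ceiling division: the variable extent
    blocks = list(range(next_free_block, next_free_block + nblocks))
    next_free_block += nblocks
    key_index[key] = blocks
    # In the described system this command would be issued to an NVM subsystem over the optical fabric.
    return {"command": "nvm_write", "blocks": blocks, "length": len(value)}

def kv_get(key):
    return {"command": "nvm_read", "blocks": key_index[key]}

print(kv_put("user:42", b"x" * 10000))   # value spans 3 blocks
print(kv_get("user:42"))
```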
-
Publication number: 20180024947
Abstract: Technologies for a low-latency interface with data storage of a storage sled in a data center are disclosed. In the illustrative embodiment, a storage sled stores metadata including the location of data in a storage device in low-latency non-volatile memory. When accessing data, the storage sled may access the metadata on the low-latency non-volatile memory and then, based on the location determined by the access to the metadata, access the location of the data in the storage device. Such an approach results in only one access to the data storage in order to read the data instead of two.
Type: Application
Filed: December 30, 2016
Publication date: January 25, 2018
Inventor: Steven C. Miller