Patents by Inventor Steven C. Miller

Steven C. Miller has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11226730
    Abstract: An infotainment computer device for use in controlling an infotainment system in a vehicle is provided. The infotainment computer device includes at least one processor communicatively coupled to at least one memory device and a display device communicatively coupled to the at least one processor. The infotainment computer device is programmed to display an active page and a toolbar. The active page includes a plurality of buttons and the toolbar includes a shortcut area including at least one shortcut button. The infotainment computer device is also programmed to receive a first input requesting access to a customization mode, retrieve a current speed of the vehicle, activate the customization mode if the current speed of the vehicle is zero, receive a second input indicating a desired change to at least one of the active page and the toolbar, and change the display based on the desired change.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: January 18, 2022
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Steven Feit, David A. Vanderburgh, Ross C. Miller, Koo Ho Shin
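
The grant above describes a customization mode that the head unit exposes only while the vehicle is stationary. A minimal Python sketch of that speed-gated flow; the class name, the `get_vehicle_speed` callback, and the layout structure are hypothetical stand-ins chosen for illustration, not elements of the patent.

```python
class InfotainmentController:
    """Illustrative model of the speed-gated customization flow."""

    def __init__(self, get_vehicle_speed):
        # Callback that returns the current vehicle speed (hypothetical).
        self.get_vehicle_speed = get_vehicle_speed
        self.customization_active = False
        self.layout = {"active_page": ["nav", "audio"], "toolbar": ["home"]}

    def request_customization(self):
        # First input: activate customization only while the vehicle is stopped.
        if self.get_vehicle_speed() == 0:
            self.customization_active = True
        return self.customization_active

    def apply_change(self, target, buttons):
        # Second input: change the active page or the toolbar layout.
        if self.customization_active and target in self.layout:
            self.layout[target] = list(buttons)
        return self.layout


controller = InfotainmentController(get_vehicle_speed=lambda: 0)
controller.request_customization()
print(controller.apply_change("toolbar", ["home", "phone"]))
```
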
  • Patent number: 11219519
    Abstract: A suspensory fixation device has an elongated anchor member adapted to be transversely situated at the exit of a bone tunnel. A graft supporting loop member formed of a pair of parallel suture limbs extending from a bight portion is suspended transversely from the anchor member and has a loop length which is adjustable so the graft ligament can be supported in the bone tunnel at varying distances from the anchor member. When a graft ligament is attached to the saddle end of the loop member, the length may be shortened by pulling distally on the pair of limbs to pull the graft ligament into the bone tunnel. When tension is applied to the loop member by the graft pulling the loop proximally, the bight portion of the suture automatically locks the graft supporting loop member in place.
    Type: Grant
    Filed: May 20, 2020
    Date of Patent: January 11, 2022
    Assignee: Linvatec Corporation
    Inventors: Andrew Kam, Giuseppe Lombardo, Peter C. Miller, Steven E. Fitts
  • Publication number: 20200363998
    Abstract: An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to maintain a respective lookup table for each of two or more persistent storage devices in a persistent memory outside of the two or more persistent storage devices with a first indirection granularity that is smaller than a second indirection granularity of each of the two or more persistent storage devices, buffer write requests to the two or more persistent storage devices in the persistent memory in accordance with the respective lookup tables, and perform a sequential write from the persistent memory to a particular device of the two or more persistent storage devices when a portion of the buffer that corresponds to the particular device has an amount of data to write that corresponds to the second indirection granularity. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: August 7, 2020
    Publication date: November 19, 2020
    Applicant: Intel Corporation
    Inventors: Benjamin Walker, Sanjeev Trika, Kapil Karkra, James R. Harris, Steven C. Miller, Bishwajit Dutta
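
The application above describes keeping a fine-grained lookup table in persistent memory, staging writes there, and issuing one sequential write to a storage device only once a device-granularity chunk has accumulated. A rough Python sketch of that buffering policy under assumed granularities (4 KiB table entries, 64 KiB device indirection); all names and sizes are illustrative, not taken from the application.

```python
FINE_GRANULARITY = 4 * 1024        # assumed lookup-table granularity (4 KiB)
DEVICE_GRANULARITY = 64 * 1024     # assumed device indirection granularity (64 KiB)


class WriteBuffer:
    """Per-device write buffer with a fine-grained lookup table (illustrative)."""

    def __init__(self, device_name):
        self.device_name = device_name
        self.lookup = {}          # logical block -> offset within the staged buffer
        self.buffered = bytearray()

    def write(self, logical_block, data):
        # Track the block at fine granularity and stage its data in persistent memory.
        assert len(data) == FINE_GRANULARITY
        self.lookup[logical_block] = len(self.buffered)
        self.buffered += data
        if len(self.buffered) >= DEVICE_GRANULARITY:
            self.flush()

    def flush(self):
        # One sequential, device-granularity write instead of many small ones.
        print(f"sequential write of {len(self.buffered)} bytes to {self.device_name}")
        self.buffered = bytearray()
        self.lookup.clear()


buf = WriteBuffer("ssd0")
for block in range(16):
    buf.write(block, bytes(FINE_GRANULARITY))
```
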
  • Patent number: 10761779
Abstract: Techniques enable offloading operations to be performed closer to where the data is stored in systems with sharded and erasure-coded data, such as in data centers. In one example, a system includes a compute sled or compute node, which includes one or more processors. The system also includes a storage sled or storage node. The storage node includes one or more storage devices. The storage node stores at least one portion of data that is sharded and erasure-coded. Other portions of the data are stored on other storage nodes. The compute node sends a request to offload an operation to the storage node to access the sharded and erasure-coded data. The storage node then sends a request to offload the operation to one or more other storage nodes determined to store one or more codes of the data. The storage nodes perform the operation on the portions of locally stored data and provide the results to the next-level-up node.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: September 1, 2020
    Assignee: Intel Corporation
    Inventors: Sanjeev N. Trika, Steven C. Miller
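
The grant above describes a compute node offloading an operation to the storage node that holds part of some sharded, erasure-coded data, which then fans the request out to the other storage nodes holding the remaining portions and folds their partial results together. A simplified Python sketch of that fan-out under the assumption of a simple additive operation; the classes and the `sum` reduction are illustrative only.

```python
class StorageNode:
    """Holds one shard of the data and can run an operation on it locally."""

    def __init__(self, shard):
        self.shard = shard

    def offload(self, operation, peers=()):
        # Run the operation on the local shard, then fan out to peer nodes
        # holding the remaining shards and fold in their partial results.
        result = operation(self.shard)
        for peer in peers:
            result += peer.offload(operation)
        return result


class ComputeNode:
    """Sends one offload request instead of pulling every shard over the network."""

    def run(self, operation, first_node, other_nodes):
        return first_node.offload(operation, peers=other_nodes)


nodes = [StorageNode(shard) for shard in ([1, 2, 3], [4, 5], [6])]
print(ComputeNode().run(sum, nodes[0], nodes[1:]))   # 21
```
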
  • Patent number: 10542333
Abstract: Technologies for a low-latency interface with data storage of a storage sled in a data center are disclosed. In the illustrative embodiment, a storage sled stores metadata including the location of data in a storage device in low-latency non-volatile memory. When accessing data, the storage sled may access the metadata on the low-latency non-volatile memory and then, based on the location determined by the access to the metadata, access the location of the data in the storage device. With this approach, reading the data requires only one access to the data storage device instead of two.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: January 21, 2020
    Assignee: Intel Corporation
    Inventor: Steven C. Miller
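
The grant above describes storing location metadata in a separate low-latency non-volatile memory so that reading a piece of data requires only a single access to the slower storage device. A toy Python sketch of that two-tier lookup; the dictionaries standing in for the metadata memory and the storage device are purely illustrative.

```python
# Low-latency non-volatile memory holding only metadata (illustrative stand-in).
metadata_nvm = {"object-A": ("drive0", 4096), "object-B": ("drive0", 8192)}

# Slower bulk storage, keyed by (device, offset) (illustrative stand-in).
bulk_storage = {("drive0", 4096): b"payload-A", ("drive0", 8192): b"payload-B"}


def read(key):
    # 1) Fast metadata lookup tells us exactly where the data lives.
    location = metadata_nvm[key]
    # 2) A single access to the bulk storage device returns the data,
    #    instead of one access for metadata plus another for the data itself.
    return bulk_storage[location]


print(read("object-A"))
```
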
  • Patent number: 10411729
    Abstract: Technologies for allocating ephemeral data storage among managed nodes include an orchestrator server to receive ephemeral data storage availability information from the managed nodes, receive a request from a first managed node of the managed nodes to allocate an amount of ephemeral data storage as the first managed node executes one or more workloads, determine, as a function of the ephemeral data storage availability information, an availability of the requested amount of ephemeral data storage, and allocate, in response to a determination that the requested amount of ephemeral data storage is available from one or more other managed nodes, the requested amount of ephemeral data storage to the first managed node as the first managed node executes the one or more workloads. Other embodiments are also described and claimed.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: September 10, 2019
    Assignee: Intel Corporation
    Inventors: Steven C. Miller, David B. Minturn
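
The grant above describes an orchestrator server that tracks each managed node's spare ephemeral storage and satisfies allocation requests from that pool while a workload runs. A minimal Python sketch of that bookkeeping; the greedy first-fit allocation and all names are assumptions made for illustration, not the claimed mechanism.

```python
class Orchestrator:
    """Tracks ephemeral-storage availability reported by managed nodes (illustrative)."""

    def __init__(self):
        self.available = {}   # node name -> free ephemeral bytes

    def report_availability(self, node, free_bytes):
        self.available[node] = free_bytes

    def allocate(self, requester, amount):
        # Greedy first-fit over the other nodes' reported availability.
        grants, remaining = {}, amount
        for node, free in self.available.items():
            if node == requester or remaining == 0:
                continue
            take = min(free, remaining)
            if take:
                grants[node] = take
                self.available[node] -= take
                remaining -= take
        if remaining:
            # Not enough space anywhere: roll back partial grants and refuse.
            for node, take in grants.items():
                self.available[node] += take
            return None
        return grants


orch = Orchestrator()
orch.report_availability("node-b", 40)
orch.report_availability("node-c", 80)
print(orch.allocate("node-a", 100))   # {'node-b': 40, 'node-c': 60}
```
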
  • Patent number: 10334334
Abstract: Examples may include a sled for a rack of a data center including physical storage resources. The sled comprises an array of storage devices and an array of memory. The storage devices and memory are directly coupled to storage resource processing circuits, which are themselves directly coupled to dual-mode optical network interface circuitry. The circuitry can store data on the storage devices and metadata associated with the data on non-volatile memory in the memory array.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: June 25, 2019
    Assignee: Intel Corporation
    Inventors: Steven C. Miller, Michael Crocker, Aaron Gorius, Paul Dormitzer
  • Patent number: 10313769
Abstract: Technologies for managing partially synchronized writes include a managed node. The managed node is to issue a write request to write a data block, on behalf of a workload, to multiple data storage devices connected to a network, pause execution of the workload, receive an initial acknowledgement associated with one of the multiple data storage devices, wherein the initial acknowledgement is indicative of successful storage of the data block, and resume execution of the workload after receipt of the initial acknowledgement and before receipt of subsequent acknowledgements associated with any of the other data storage devices. Other embodiments are also described and claimed.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: June 4, 2019
    Assignee: Intel Corporation
    Inventor: Steven C. Miller
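
The grant above describes pausing a workload only until the first of several replicated writes is acknowledged rather than waiting for every storage device to respond. A small Python sketch of that policy using threads; the device names and timings are illustrative assumptions.

```python
import threading
import time


def replicate_block(data, devices):
    """Write to all devices, but return as soon as the first one acknowledges."""
    first_ack = threading.Event()

    def write_to(device, delay):
        time.sleep(delay)                # stand-in for network + media latency
        print(f"ack from {device}")
        first_ack.set()                  # the first ack unblocks the workload

    for device, delay in devices:
        threading.Thread(target=write_to, args=(device, delay), daemon=True).start()

    first_ack.wait()                     # workload paused only until the first ack
    print("workload resumes; remaining acks arrive in the background")


replicate_block(b"block", [("ssd-near", 0.01), ("ssd-far", 0.2), ("ssd-remote", 0.5)])
time.sleep(0.6)                          # keep the demo alive for the later acks
```
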
  • Publication number: 20190114114
Abstract: Techniques enable offloading operations to be performed closer to where the data is stored in systems with sharded and erasure-coded data, such as in data centers. In one example, a system includes a compute sled or compute node, which includes one or more processors. The system also includes a storage sled or storage node. The storage node includes one or more storage devices. The storage node stores at least one portion of data that is sharded and erasure-coded. Other portions of the data are stored on other storage nodes. The compute node sends a request to offload an operation to the storage node to access the sharded and erasure-coded data. The storage node then sends a request to offload the operation to one or more other storage nodes determined to store one or more codes of the data. The storage nodes perform the operation on the portions of locally stored data and provide the results to the next-level-up node.
    Type: Application
    Filed: December 5, 2018
    Publication date: April 18, 2019
    Inventors: Sanjeev N. Trika, Steven C. Miller
  • Publication number: 20190042365
Abstract: Examples include techniques for performing read-optimized lazy erasure encoding of data streams. An embodiment includes receiving a request to write a stream of data, separating the stream into a first plurality of extents, storing a primary replica and one or more additional replicas of each extent of the separated stream to a plurality of data storage nodes, and updating a list of extents to be erasure encoded. The embodiment further includes, when an erasure encoded stripe can be created, getting the data for each of the extents of the erasure encoded stripe, calculating parity extents for unencoded extents of the erasure encoded stripe, writing the parity extents to a second plurality of data storage nodes, and deleting the one or more additional replicas of the extents of the erasure encoded stripe from the first plurality of data storage nodes.
    Type: Application
    Filed: September 26, 2018
    Publication date: February 7, 2019
    Inventors: Kimberly A. Malone, Steven C. Miller
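
The application above describes writing full replicas first and, only once enough extents exist to form a stripe, computing parity and deleting the extra replicas. A compact Python sketch of that lazy-encoding step using simple XOR parity; the stripe width, data layout, and names are illustrative assumptions rather than details from the application.

```python
STRIPE_WIDTH = 3     # assumed number of data extents per erasure-coded stripe

replica_nodes = {}   # extent id -> list of replica copies (illustrative)
parity_nodes = {}    # stripe id -> parity extent (illustrative)
pending = []         # extents written but not yet erasure encoded


def write_extent(extent_id, data):
    # Fast path: store a primary replica plus one additional replica.
    replica_nodes[extent_id] = [data, data]
    pending.append(extent_id)
    if len(pending) >= STRIPE_WIDTH:
        encode_stripe(pending[:STRIPE_WIDTH])
        del pending[:STRIPE_WIDTH]


def encode_stripe(extent_ids):
    # Lazy path: compute XOR parity over the stripe, then drop the extra replicas.
    parity = bytes(len(replica_nodes[extent_ids[0]][0]))
    for eid in extent_ids:
        data = replica_nodes[eid][0]
        parity = bytes(a ^ b for a, b in zip(parity, data))
        replica_nodes[eid] = [data]          # keep only the primary replica
    parity_nodes[tuple(extent_ids)] = parity


for i in range(3):
    write_extent(i, bytes([i]) * 8)
print(parity_nodes)
```
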
  • Patent number: 10091904
Abstract: Examples may include a sled for a rack of a data center including physical storage resources. The sled comprises an array of storage devices and an array of memory. The storage devices and memory are directly coupled to storage resource processing circuits, which are themselves directly coupled to dual-mode optical network interface circuitry. The dual-mode optical network interface circuitry can have a bandwidth equal to or greater than that of the storage devices.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: October 2, 2018
    Assignee: Intel Corporation
    Inventors: Steven C. Miller, Michael Crocker, Aaron Gorius, Paul Dormitzer
  • Patent number: 10034407
Abstract: Examples may include a sled for a rack of a data center including physical storage resources. The sled comprises mounting flanges that enable robotic insertion into and removal from a rack, and storage device mounting slots that enable robotic insertion of storage devices into, and removal from, the sled. The storage devices are coupled to an optical fabric through storage resource controllers and a dual-mode optical network interface.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: July 24, 2018
    Assignee: Intel Corporation
    Inventors: Steven C. Miller, Michael Crocker, Aaron Gorius, Paul Dormitzer
  • Publication number: 20180024764
    Abstract: Technologies for accelerating data writes include a managed node that includes a network interface controller that includes a power loss protected buffer and non-volatile memory. The managed node is to receive, through the network interface controller, a write request from a remote device. The write request includes a data block. The managed node is also to write the data block to the power loss protected buffer of the network interface controller, and send, in response to receipt of the data block and prior to a write of the data block to the non-volatile memory, an acknowledgement to the remote device. The acknowledgement is indicative of a successful write of the data block to the non-volatile memory. The managed node is also to write, after the acknowledgement has been sent, the data block from the power loss protected buffer to the non-volatile memory. Other embodiments are also described and claimed.
    Type: Application
    Filed: December 30, 2016
    Publication date: January 25, 2018
    Inventor: Steven C. Miller
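
The application above describes acknowledging a remote write as soon as the data block lands in a power-loss-protected buffer on the network interface controller, with the copy to non-volatile memory completed afterwards. A minimal Python sketch of that ordering; the class, buffer, and callback names are hypothetical.

```python
from collections import deque


class PowerLossProtectedNic:
    """Illustrative model of the early-acknowledge write path."""

    def __init__(self):
        self.plp_buffer = deque()     # power-loss-protected staging buffer
        self.nvm = []                 # slower non-volatile memory

    def handle_write_request(self, data_block, send_ack):
        # 1) Stage the block in the protected buffer.
        self.plp_buffer.append(data_block)
        # 2) Acknowledge immediately: the block will survive a power loss.
        send_ack("stored")
        # 3) Later (or in the background), drain the buffer into NVM.
        self.drain()

    def drain(self):
        while self.plp_buffer:
            self.nvm.append(self.plp_buffer.popleft())


nic = PowerLossProtectedNic()
nic.handle_write_request(b"block-1", send_ack=print)
print(nic.nvm)
```
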
  • Publication number: 20180024752
    Abstract: Technologies for low-latency compression in a data center are disclosed. In the illustrative embodiment, a storage sled compresses data with a low-latency compression algorithm prior to storing the data. The latency of the compression algorithm is less than the latency of the storage device, so that the latency of the storage and retrieval times are not significantly affected by the compression and decompression. In other embodiments, a compute sled may compress data with a low-latency compression algorithm prior to sending the data to a storage sled.
    Type: Application
    Filed: December 30, 2016
    Publication date: January 25, 2018
    Inventors: Steven C. Miller, Vinodh Gopal, Kirk S. Yap, James D. Guilford, Wajdi K. Feghali
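
The application above describes compressing data with an algorithm whose latency is small relative to the storage device's, so compression and decompression hide inside the storage access time. A short Python sketch that uses zlib at its fastest level as a stand-in for the unspecified low-latency algorithm.

```python
import zlib


def store_compressed(storage, key, data):
    # Level 1 favors speed over ratio, a stand-in for a low-latency compressor.
    storage[key] = zlib.compress(data, level=1)


def load_decompressed(storage, key):
    return zlib.decompress(storage[key])


storage_sled = {}
payload = b"telemetry " * 1000
store_compressed(storage_sled, "sample", payload)
print(len(payload), "->", len(storage_sled["sample"]))
assert load_decompressed(storage_sled, "sample") == payload
```
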
  • Publication number: 20180027684
Abstract: Examples may include a sled for a rack of a data center including physical storage resources. The sled comprises an array of storage devices and an array of memory. The storage devices and memory are directly coupled to storage resource processing circuits, which are themselves directly coupled to dual-mode optical network interface circuitry. The dual-mode optical network interface circuitry can have a bandwidth equal to or greater than that of the storage devices.
    Type: Application
    Filed: December 29, 2016
    Publication date: January 25, 2018
    Applicant: Intel Corporation
    Inventors: Steven C. Miller, Michael Crocker, Aaron Gorius, Paul Dormitzer
  • Publication number: 20180027059
    Abstract: Technologies for managing distributed data to improve data throughput rates include a managed node to distribute a dataset over multiple data storage devices coupled to a network. Each data storage device has a peak data throughput rate. The managed node is further to request a corresponding portion of the dataset from each data storage device, receive the requested portions of the dataset at a combined data throughput rate that is greater than the peak data throughput rate of any of the data storage devices, and combine the received portions of the dataset to reconstruct the dataset. Other embodiments are also described and claimed.
    Type: Application
    Filed: December 30, 2016
    Publication date: January 25, 2018
    Inventor: Steven C. Miller
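
The application above describes striping a dataset across several network-attached storage devices and reading the portions back concurrently, so the combined throughput can exceed any single device's peak rate. A simplified Python sketch of the split, parallel fetch, and recombine steps; the thread-based fetch and all names are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor


def distribute(dataset, num_devices):
    # Split the dataset into one contiguous portion per storage device.
    chunk = (len(dataset) + num_devices - 1) // num_devices
    return [dataset[i * chunk:(i + 1) * chunk] for i in range(num_devices)]


def fetch(device_id, portion):
    # Stand-in for a network read from one storage device.
    return device_id, portion


def reconstruct(portions):
    # Request every portion concurrently, then reassemble in device order.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda args: fetch(*args), enumerate(portions)))
    return b"".join(part for _, part in sorted(results))


data = b"0123456789" * 3
assert reconstruct(distribute(data, 4)) == data
```
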
  • Publication number: 20180024740
    Abstract: Technologies for variable extent storage include multiple computing devices in communication over an optical fabric. A computing device receives a key-value storage request from an application that is indicative of a key. The computing device identifies one or more non-volatile storage blocks to store a value associated with the key and issues a non-volatile memory (NVM) input/output (I/O) command indicative of the NVM storage blocks to an NVM subsystem. The key-value storage request may include a read request or a store request, and the I/O command may include a read command or a write command. The I/O command may be issued to an NVM subsystem over the optical fabric. The computing device may be embodied as a storage sled of a data center, and the application may be executed by a compute sled of the data center. Other embodiments are described and claimed.
    Type: Application
    Filed: December 30, 2016
    Publication date: January 25, 2018
    Inventors: Steven C. Miller, David Minturn
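
The application above describes translating an application's key-value read or store request into block-level NVM I/O commands addressed to specific storage blocks. A toy Python sketch of that translation layer; the block size, allocator, and in-memory stand-in for the NVM subsystem are assumptions made for illustration.

```python
BLOCK_SIZE = 512   # assumed NVM block size for illustration


class KeyValueStore:
    """Maps key-value requests onto block-granular NVM read/write commands."""

    def __init__(self):
        self.blocks = {}         # block number -> bytes (stand-in for the NVM subsystem)
        self.key_to_blocks = {}  # key -> list of block numbers
        self.next_block = 0

    def put(self, key, value):
        # Identify blocks for the value and issue one write command per block.
        block_ids = []
        for off in range(0, len(value), BLOCK_SIZE):
            self.blocks[self.next_block] = value[off:off + BLOCK_SIZE]
            block_ids.append(self.next_block)
            self.next_block += 1
        self.key_to_blocks[key] = block_ids

    def get(self, key):
        # Issue read commands for the key's blocks and reassemble the value.
        return b"".join(self.blocks[b] for b in self.key_to_blocks[key])


kv = KeyValueStore()
kv.put("config", b"x" * 1300)
assert kv.get("config") == b"x" * 1300
```
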
  • Publication number: 20180024771
Abstract: Examples may include a sled for a rack of a data center including physical storage resources. The sled comprises an array of storage devices and an array of memory. The storage devices and memory are directly coupled to storage resource processing circuits, which are themselves directly coupled to dual-mode optical network interface circuitry. The circuitry can store data on the storage devices and metadata associated with the data on non-volatile memory in the memory array.
    Type: Application
    Filed: December 29, 2016
    Publication date: January 25, 2018
    Applicant: Intel Corporation
    Inventors: Steven C. Miller, Michael Crocker, Aaron Gorius, Paul Dormitzer
  • Publication number: 20180027685
Abstract: Examples may include a sled for a rack of a data center including physical storage resources. The sled comprises mounting flanges that enable robotic insertion into and removal from a rack, and storage device mounting slots that enable robotic insertion of storage devices into, and removal from, the sled. The storage devices are coupled to an optical fabric through storage resource controllers and a dual-mode optical network interface.
    Type: Application
    Filed: December 29, 2016
    Publication date: January 25, 2018
    Applicant: Intel Corporation
    Inventors: Steven C. Miller, Michael Crocker, Aaron Gorius, Paul Dormitzer
  • Publication number: 20180024775
    Abstract: Technologies for storage block virtualization include multiple computing devices in communication over an optical fabric. A computing device receives a non-volatile memory (NVM) I/O command from an application via an optical fabric interface. The NVM I/O command is indicative of one or more virtual data storage blocks. The computing device maps the virtual data storage blocks to one or more physical data storage blocks, each of which is included in a solid-state data storage device of the computing device. The computing device performs the I/O command with the physical data storage blocks and then sends a response to the application. Mapping the virtual data storage blocks may include performing one or more data services. The computing device may be embodied as a storage sled of a data center, and the application may be executed by a compute sled of the data center. Other embodiments are described and claimed.
    Type: Application
    Filed: December 30, 2016
    Publication date: January 25, 2018
    Inventor: Steven C. Miller
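
The application above describes receiving NVM I/O commands that reference virtual storage blocks and remapping them to physical blocks on local solid-state devices before executing the command. A small Python sketch of that indirection; the mapping table and device layout are illustrative assumptions.

```python
class BlockVirtualizer:
    """Translates virtual block addresses to (device, physical block) pairs."""

    def __init__(self, mapping, devices):
        self.mapping = mapping     # virtual block -> (device name, physical block)
        self.devices = devices     # device name -> dict of physical block -> data

    def write(self, virtual_block, data):
        device, physical = self.mapping[virtual_block]
        self.devices[device][physical] = data

    def read(self, virtual_block):
        device, physical = self.mapping[virtual_block]
        return self.devices[device][physical]


virt = BlockVirtualizer(
    mapping={0: ("ssd0", 7), 1: ("ssd1", 3)},
    devices={"ssd0": {}, "ssd1": {}},
)
virt.write(0, b"virtual block 0")
print(virt.read(0))
```
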