Patents by Inventor Scott D. Peterson
Scott D. Peterson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230111490
Abstract: Examples described herein relate to a network interface that includes an initiator device to determine a storage node associated with an access command based on an association between an address in the command and a storage node. The network interface can include a redirector to update the association based on messages from one or more remote storage nodes. The association can be based on a look-up table associating a namespace identifier with prefix string and object size. In some examples, the access command is compatible with NVMe over Fabrics. The initiator device can determine a remote direct memory access (RDMA) queue-pair (QP) lookup for use to perform the access command.
Type: Application
Filed: October 21, 2022
Publication date: April 13, 2023
Inventors: Yadong LI, Scott D. PETERSON, Sujoy SEN, David B. MINTURN
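The mechanism this abstract describes — a host-side initiator resolving a command's address to a storage node through a table that a "redirector" keeps current from node messages — can be sketched as follows. This is an illustrative model only, not the patented implementation; the class and method names are invented, and a string-prefix match stands in for the namespace/prefix/object-size association:

```python
# Illustrative sketch: map a command's address to a storage node via a
# lookup table; a redirector path updates the table from hint messages
# sent by remote storage nodes. Names here are hypothetical.

class Initiator:
    def __init__(self):
        self.table = {}  # address prefix -> storage node

    def update_hint(self, prefix, node):
        # Redirector path: a remote storage node reports ownership.
        self.table[prefix] = node

    def resolve(self, address):
        # Longest matching prefix wins; fall back to a default node.
        best = max((p for p in self.table if address.startswith(p)),
                   key=len, default=None)
        return self.table[best] if best else "default-node"

init = Initiator()
init.update_hint("vol0/", "node-a")
init.update_hint("vol0/hot/", "node-b")
print(init.resolve("vol0/hot/obj42"))  # longest prefix wins: node-b
print(init.resolve("vol1/obj7"))       # no match: default-node
```

The longest-prefix rule mirrors the idea that a more specific association overrides a broader one for the same namespace.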
-
Technologies for providing advanced management of power usage limits in a disaggregated architecture
Patent number: 11537191
Abstract: Technologies for providing advanced management of power usage limits in a disaggregated architecture include a compute device. The compute device includes circuitry configured to execute operations associated with a workload in a disaggregated system. The circuitry is also configured to determine whether a present power usage of the compute device is within a predefined range of a power usage limit assigned to the compute device. Additionally, the circuitry is configured to send, to a device in the disaggregated system and in response to a determination that the present power usage of the present compute device is not within the predefined range of the power usage limit assigned to the present compute device, offer data indicative of an offer to reduce the power usage limit assigned to the present compute device to enable a second power utilization limit of another compute device in the disaggregated system to be increased.
Type: Grant
Filed: January 31, 2020
Date of Patent: December 27, 2022
Assignee: Intel Corporation
Inventors: Anjaneya Reddy Chagam Reddy, Scott D. Peterson, Charles Rego
-
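The offer mechanism in the abstract — a device far from its power limit donates headroom so a peer's limit can be raised — can be illustrated with a toy model. The margin value, field names, and two-party form below are all invented for illustration:

```python
# Toy sketch of the headroom-offer idea: if a device's usage is not
# within MARGIN of its limit, shift the spare watts (minus the margin)
# onto a peer's limit. Thresholds and names are illustrative only.

MARGIN = 10  # watts; stand-in for the "predefined range" near the limit

def offer_headroom(donor, recipient):
    headroom = donor["limit"] - donor["usage"]
    if headroom > MARGIN:
        offer = headroom - MARGIN      # keep MARGIN watts of slack
        donor["limit"] -= offer        # reduce the donor's limit...
        recipient["limit"] += offer    # ...so the peer's can rise
        return offer
    return 0                           # already near the limit: no offer

a = {"usage": 40, "limit": 100}
b = {"usage": 95, "limit": 100}
print(offer_headroom(a, b))  # 50 watts offered
```

The total of the two limits is conserved, which is the point of the scheme: redistributing a fixed power budget rather than growing it.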
Patent number: 11509606
Abstract: Examples described herein relate to a network interface that includes an initiator device to determine a storage node associated with an access command based on an association between an address in the command and a storage node. The network interface can include a redirector to update the association based on messages from one or more remote storage nodes. The association can be based on a look-up table associating a namespace identifier with prefix string and object size. In some examples, the access command is compatible with NVMe over Fabrics. The initiator device can determine a remote direct memory access (RDMA) queue-pair (QP) lookup for use to perform the access command.
Type: Grant
Filed: December 27, 2019
Date of Patent: November 22, 2022
Assignee: Intel Corporation
Inventors: Yadong Li, Scott D. Peterson, Sujoy Sen, David B. Minturn
-
Publication number: 20220114030
Abstract: Examples described herein relate to a network interface device that includes circuitry to perform operations, offloaded from a host, to identify at least one locator of at least one target storage associated with a storage access command based on operations selected from among multiple available operations, wherein the available operations comprise two or more: entry lookup by the network interface device, hash-based calculation on the network interface device, or control plane processing on the network interface device.
Type: Application
Filed: December 23, 2021
Publication date: April 14, 2022
Inventors: Salma Mirza JOHNSON, Jose NIELL, Bradley A. BURRES, Yadong LI, Scott D. PETERSON, Tony HURSON, Sujoy SEN
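The three locator-resolution operations the abstract names (entry lookup, hash-based calculation, control-plane processing) can be sketched as a tiered resolver. This is a hypothetical software model — the table contents, the hash choice, and the `needs_policy` escalation condition are all assumptions:

```python
# Hypothetical sketch of multi-strategy locator resolution: try a fast
# table lookup first, fall back to a hash-based calculation, and
# escalate to the control plane when policy input is needed.

import hashlib

TARGETS = ["tgt-0", "tgt-1", "tgt-2"]
ENTRY_TABLE = {"lun-7": "tgt-1"}  # populated entries resolve directly

def resolve_locator(key, needs_policy=False):
    if key in ENTRY_TABLE:                            # 1. entry lookup
        return ENTRY_TABLE[key], "lookup"
    if not needs_policy:                              # 2. hash-based calc
        idx = hashlib.sha256(key.encode()).digest()[0] % len(TARGETS)
        return TARGETS[idx], "hash"
    return "cp-assigned", "control-plane"             # 3. escalate
```

The appeal of the tiering is that the common case never leaves the fast path: only keys absent from the table pay for a hash, and only policy-sensitive commands involve the control plane.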
-
Publication number: 20220113913
Abstract: Examples described herein relate to a network interface device that includes circuitry to receive storage access command and determine a processing path in the network interface device for the storage access command, wherein the processing path is within the network interface device and wherein the processing path is selected from direct mapped or control plane processed based at least on command type and source of command. In some examples, the command type is read or write.
Type: Application
Filed: December 23, 2021
Publication date: April 14, 2022
Inventors: Jose NIELL, Yadong LI, Salma Mirza JOHNSON, Scott D. PETERSON, Sujoy SEN
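The path-selection rule here (direct mapped vs. control-plane processed, chosen from command type and source) reduces to a small decision function. A minimal sketch, with an invented notion of "trusted source" standing in for whatever criteria the device actually applies:

```python
# Illustrative only: pick the device's internal processing path from
# command type and source. The trusted-source set is an assumption.

TRUSTED_SOURCES = {"host-pf"}  # hypothetical trusted physical function

def select_path(cmd_type, source):
    # Plain reads/writes from a trusted source can be direct-mapped to
    # a target queue; anything else goes through the control plane.
    if cmd_type in ("read", "write") and source in TRUSTED_SOURCES:
        return "direct-mapped"
    return "control-plane"

print(select_path("read", "host-pf"))   # direct-mapped
print(select_path("admin", "host-pf"))  # control-plane
print(select_path("read", "vm-3"))      # control-plane
```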
-
Publication number: 20210326270
Abstract: Examples described herein relate to a network interface device comprising circuitry to receive an access request with a target logical block address (LBA) and based on a target media of the access request storing at least one object, translate the target LBA to an address and access content in the target media based on the address. In some examples, translate the target LBA to an address includes access a translation entry that maps the LBA to one or more of: a physical address or a virtual address. In some examples, translate the target LBA to an address comprises: request a software defined storage (SDS) stack to provide a translation of the LBA to one or more of: a physical address or a virtual address and store the translation into a mapping table for access by the circuitry. In some examples, at least one entry that maps the LBA to one or more of: a physical address or a virtual address is received before receipt of an access request.
Type: Application
Filed: June 26, 2021
Publication date: October 21, 2021
Inventors: Yi ZOU, Arun RAGHUNATH, Scott D. PETERSON, Sujoy SEN, Yadong LI
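The translation flow in this abstract — serve an LBA from a device-resident mapping table, and on a miss ask the SDS stack and cache the result — is essentially a memoized lookup. A sketch under assumptions, with a stub standing in for the SDS stack's translation service:

```python
# Sketch: the device keeps an LBA -> address mapping table; on a miss
# it consults a (stubbed) software-defined storage stack and caches
# the translation, as the abstract outlines. Names are illustrative.

def sds_translate(lba):
    # Stand-in for the SDS stack; real translations would come from
    # the storage software's metadata, not a formula.
    return 0x1000 + lba * 512

class LbaTranslator:
    def __init__(self):
        self.mapping = {}              # mapping table held by circuitry

    def translate(self, lba):
        if lba not in self.mapping:    # miss: ask the SDS stack once
            self.mapping[lba] = sds_translate(lba)
        return self.mapping[lba]       # hit: serve from the table
```

Pre-populating `self.mapping` before any request arrives corresponds to the abstract's note that entries may be received ahead of access requests.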
-
Publication number: 20210318961
Abstract: Methods and apparatus for mitigating pooled memory cache miss latency with cache miss faults and transaction aborts. A compute platform coupled to one or more tiers of memory, such as remote pooled memory in a disaggregated environment executes memory transactions to access objects that are stored in the one or more tiers. A determination is made to whether a copy of the object is in a local cache on the platform; if it is, the object is accessed from the local cache. If the object is not in the local cache, a transaction abort may be generated if enabled for the transactions. Optionally, a cache miss page fault is generated if the object is in a cacheable region of a memory tier, and the transaction abort is not enabled. Various mechanisms are provided to determine what to do in response to a cache miss page fault, such as determining addresses for cache lines to prefetch from a memory tier storing the object(s), determining how much data to prefetch, and determining whether to perform a bulk transfer.
Type: Application
Filed: June 23, 2021
Publication date: October 14, 2021
Inventors: Scott D. PETERSON, Sujoy SEN, Francesc GUIM BERNAT
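The miss-handling decision tree in the abstract (local-cache hit, else transaction abort if enabled, else a cache-miss fault whose handler fetches from the remote tier) can be modeled in a few lines. This is a behavioral sketch only; the exception standing in for a hardware transaction abort and the dictionary "tiers" are assumptions:

```python
# Hypothetical model: a memory access checks the local cache; on a
# miss it either aborts the transaction (if aborts are enabled) or
# takes a cache-miss "fault" that prefetches from the remote tier.

class TransactionAbort(Exception):
    """Stand-in for a hardware transactional-memory abort."""

REMOTE_TIER = {"obj1": b"payload"}   # pooled/remote memory tier
local_cache = {}                     # platform-local cache

def access(obj, aborts_enabled):
    if obj in local_cache:
        return local_cache[obj]      # fast path: local hit
    if aborts_enabled:
        raise TransactionAbort(obj)  # software can retry or redirect
    # cache-miss fault path: prefetch from the remote tier, then retry
    local_cache[obj] = REMOTE_TIER[obj]
    return local_cache[obj]
```

The abort path exists so software gets control at miss time instead of stalling the transaction on remote-memory latency; the fault path trades that stall for a handler that can prefetch more than one line at once.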
-
Publication number: 20200341904
Abstract: Technologies for accelerated memory lookups include a computing device having a processor and a hardware accelerator. The processor programs the accelerator with a search value, a start pointer, one or more predetermined offsets, and a record length. Each offset may be associated with a pointer type or a value type. The accelerator initializes a memory location at the start pointer and increments the memory location by the offset. The accelerator may read a pointer value from an offset, set the memory location to the pointer value, and repeat for additional offsets. The accelerator may read a value from the offset and compare the value to the search value. If the values match, the accelerator returns the address of the matching value to the processor. If the values do not match, the accelerator searches a next record based on the record length. Other embodiments are described and claimed.
Type: Application
Filed: April 26, 2019
Publication date: October 29, 2020
Inventors: Anjaneya Reddy Chagam Reddy, Scott D. Peterson, Narayan Ranganathan
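The accelerator's search loop — compare a value at a fixed offset within each record, stepping `record_length` between records, returning the matching address — is straightforward to model in software. A simplified sketch (pointer-type offsets, which would dereference intermediate pointers before the comparison, are omitted for brevity; all parameters are illustrative):

```python
# Software model (not the hardware) of the accelerator's record scan:
# compare the value-type offset in each fixed-length record against
# the search value; return the matching address, else None.

def accel_search(memory, start, value_offset, record_length, search_value):
    addr = start
    while addr + value_offset < len(memory):
        if memory[addr + value_offset] == search_value:
            return addr + value_offset   # real HW signals the processor
        addr += record_length            # step to the next record
    return None

mem = [0] * 16
mem[9] = 7  # value field of the third record (record base 8, offset 1)
print(accel_search(mem, 0, 1, 4, 7))  # 9
```

Offloading this loop matters because each comparison is a dependent memory access; the accelerator walks the records without round-tripping to the processor per record.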
-
TECHNOLOGIES FOR PROVIDING ADVANCED MANAGEMENT OF POWER USAGE LIMITS IN A DISAGGREGATED ARCHITECTURE
Publication number: 20200166984
Abstract: Technologies for providing advanced management of power usage limits in a disaggregated architecture include a compute device. The compute device includes circuitry configured to execute operations associated with a workload in a disaggregated system. The circuitry is also configured to determine whether a present power usage of the compute device is within a predefined range of a power usage limit assigned to the compute device. Additionally, the circuitry is configured to send, to a device in the disaggregated system and in response to a determination that the present power usage of the present compute device is not within the predefined range of the power usage limit assigned to the present compute device, offer data indicative of an offer to reduce the power usage limit assigned to the present compute device to enable a second power utilization limit of another compute device in the disaggregated system to be increased.
Type: Application
Filed: January 31, 2020
Publication date: May 28, 2020
Inventors: Anjaneya Reddy Chagam Reddy, Scott D. Peterson, Charles Rego
-
Publication number: 20200136996
Abstract: Examples described herein relate to a network interface that includes an initiator device to determine a storage node associated with an access command based on an association between an address in the command and a storage node. The network interface can include a redirector to update the association based on messages from one or more remote storage nodes. The association can be based on a look-up table associating a namespace identifier with prefix string and object size. In some examples, the access command is compatible with NVMe over Fabrics. The initiator device can determine a remote direct memory access (RDMA) queue-pair (QP) lookup for use to perform the access command.
Type: Application
Filed: December 27, 2019
Publication date: April 30, 2020
Inventors: Yadong LI, Scott D. PETERSON, Sujoy SEN, David B. MINTURN
-
Patent number: 10592162
Abstract: Examples include methods for obtaining one or more location hints applicable to a range of logical block addresses of a received input/output (I/O) request for a storage subsystem coupled with a host system over a non-volatile memory express over fabric (NVMe-oF) interconnect. The following steps are performed for each logical block address in the I/O request. A most specific location hint of the one or more location hints that matches that logical block address is applied to identify a destination in the storage subsystem for the I/O request. When the most specific location hint is a consistent hash hint, the consistent hash hint is processed. The I/O request is forwarded to the destination and a completion status for the I/O request is returned. When a location hint log page has changed, the location hint log page is processed. When any location hint refers to NVMe-oF qualified names not included in the immediately preceding query by the discovery service, the immediately preceding query is processed again.
Type: Grant
Filed: August 22, 2018
Date of Patent: March 17, 2020
Assignee: Intel Corporation
Inventors: Scott D. Peterson, Sujoy Sen, Anjaneya R. Chagam Reddy, Murugasamy K. Nachimuthu, Mohan J. Kumar
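The "most specific location hint" rule — hints cover LBA ranges, the narrowest range containing the LBA wins, and a consistent-hash hint computes its destination instead of naming one — can be sketched directly. The range representation, hash choice, and bucket granularity below are all invented for illustration:

```python
# Illustrative hint resolution: hints are (start, length, kind, dest)
# tuples over LBA ranges; the narrowest matching range wins. A
# consistent-hash hint derives the destination from the LBA itself.

import hashlib

def resolve(lba, hints, targets):
    matches = [h for h in hints if h[0] <= lba < h[0] + h[1]]
    if not matches:
        return None
    start, length, kind, dest = min(matches, key=lambda h: h[1])
    if kind == "consistent-hash":
        # Hypothetical bucketing: hash the 1024-LBA extent number.
        bucket = hashlib.sha256(str(lba // 1024).encode()).digest()[0]
        return targets[bucket % len(targets)]
    return dest

hints = [(0, 1 << 20, "simple", "sub-0"),
         (4096, 8192, "simple", "sub-1")]
print(resolve(5000, hints, []))  # narrower hint wins: sub-1
print(resolve(2000, hints, []))  # only the wide hint matches: sub-0
```

Specificity-by-narrowness lets a broad default hint coexist with precise overrides for relocated extents, which is what makes incremental hint updates from the log page cheap.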
-
Publication number: 20190324802
Abstract: Technologies for providing efficient message polling include a compute device. The compute device includes circuitry to determine a memory location to monitor for a change indicative of a message from a device connected to a local bus of the compute device. The circuitry is also to determine whether data at the memory location satisfies reference data. Additionally, the circuitry is to process, in response to a determination that the data at the memory location satisfies the reference data, one or more messages in a message queue associated with the memory location.Type: Application
Filed: June 28, 2019
Publication date: October 24, 2019
Inventors: Anjaneya Reddy Chagam Reddy, Scott D. Peterson
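The polling scheme described here — a bus device writes to a monitored memory word; the poller compares it against reference data and, on a match, drains the message queue tied to that location — fits in a few lines. A minimal sketch; the re-arm step and all names are assumptions:

```python
# Minimal model of doorbell-style polling: compare the monitored word
# against reference data and, on match, drain the associated queue.
# The reset-to-zero "re-arm" convention is an invented detail.

def poll_once(memory, location, reference, queues):
    processed = []
    if memory[location] == reference:    # change observed
        while queues[location]:
            processed.append(queues[location].pop(0))
        memory[location] = 0             # re-arm the doorbell
    return processed

mem = {0x10: 0}
q = {0x10: ["msg-a", "msg-b"]}
print(poll_once(mem, 0x10, 1, q))  # []: device has not signaled yet
mem[0x10] = 1                      # device writes the doorbell
print(poll_once(mem, 0x10, 1, q))  # ['msg-a', 'msg-b']
```

Comparing against reference data (rather than any change) lets one poll loop cheaply skip locations whose devices have nothing pending.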
-
Publication number: 20190250857
Abstract: Technologies for automatic workload detection and cache quality of service (QoS) policy determination include a computing device that executes a workload. The computing device receives a data item associated with the workload, such as a file, block, or page. The computing device extracts a workload feature vector from the data item and determines a workload grouping based on the workload feature vector. The computing device determines a cache QoS policy based on the workload grouping. The cache QoS policy may be determined based on predetermined priority levels associated with workload groupings or with a machine learning model. The computing device applies the cache QoS policy to the workload. The cache QoS policy may be a guaranteed or maximum bandwidth, guaranteed or maximum I/O operation rate, maximum latency, caching mode, cache space allocation, or other cache QoS policy. Other embodiments are described and claimed.
Type: Application
Filed: April 26, 2019
Publication date: August 15, 2019
Inventors: Anjaneya Reddy Chagam Reddy, Scott D. Peterson
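The three-stage pipeline in the abstract (feature vector → workload grouping → cache QoS policy) can be sketched end to end. Everything concrete below — the single feature, the size threshold, the group names, and the policies — is invented to show the shape of the pipeline, not the patented classifier:

```python
# Hedged sketch of the pipeline: extract a feature vector from I/O,
# bucket it into a workload group, map the group to a QoS policy.
# Groups, thresholds, and policy fields are all illustrative.

POLICIES = {
    "sequential": {"mode": "write-back", "max_latency_ms": 50},
    "random":     {"mode": "write-through", "max_iops": 10_000},
}

def feature_vector(io_sizes):
    return {"avg_size": sum(io_sizes) / len(io_sizes)}

def classify(fv):
    # Large average I/O suggests streaming/sequential work; the real
    # scheme may use priority tables or a machine learning model.
    return "sequential" if fv["avg_size"] >= 64 * 1024 else "random"

def qos_policy(io_sizes):
    return POLICIES[classify(feature_vector(io_sizes))]

print(qos_policy([1 << 20, 1 << 20])["mode"])  # write-back
print(qos_policy([4096, 8192])["mode"])        # write-through
```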
-
Publication number: 20190042144
Abstract: Examples include methods for obtaining one or more location hints applicable to a range of logical block addresses of a received input/output (I/O) request for a storage subsystem coupled with a host system over a non-volatile memory express over fabric (NVMe-oF) interconnect. The following steps are performed for each logical block address in the I/O request. A most specific location hint of the one or more location hints that matches that logical block address is applied to identify a destination in the storage subsystem for the I/O request. When the most specific location hint is a consistent hash hint, the consistent hash hint is processed. The I/O request is forwarded to the destination and a completion status for the I/O request is returned. When a location hint log page has changed, the location hint log page is processed. When any location hint refers to NVMe-oF qualified names not included in the immediately preceding query by the discovery service, the immediately preceding query is processed again.
Type: Application
Filed: August 22, 2018
Publication date: February 7, 2019
Inventors: Scott D. PETERSON, Sujoy SEN, Anjaneya R. CHAGAM REDDY, Murugasamy K. NACHIMUTHU, Mohan J. KUMAR
-
Patent number: 9975742
Abstract: An apparatus and methods for monitoring and/or controlling electric powered winches has sensors for measuring operational parameters of the winch. A programmed computer obtains the sensor data and issues notifications to the user of the associated parameters. The computer may be a hand-held device, such as a mobile phone, which receives the sensor data wirelessly. In some embodiments, the wireless device may also be used to issue operational commands to the winch.
Type: Grant
Filed: June 10, 2015
Date of Patent: May 22, 2018
Assignee: Superwinch, LLC
Inventors: Jon G. Mason, Scott D. Peterson
-
Patent number: 9341230
Abstract: A tunnel-style crankshaft assembly is provided having counterweights that serve the purposes of retaining the axial motion of bearings and improving the balance of the crankshaft assembly as a result of having a structure of an increased radial profile. The counterweights can include a mass section and a bearing retaining section. When the counterweight is secured to a crank web, the mass section of the counterweight is axially spaced from a bearing assembly disposed around the web. Therefore, the mass section of the counterweight may extend radially beyond an inner race of the bearing assembly without contacting the bearing assembly's cage. The mass section of the counterweight does not extend radially beyond a radial envelope of a crankshaft support surface thereby permitting the crankshaft assembly to be slidingly inserted into the crankcase.
Type: Grant
Filed: March 27, 2014
Date of Patent: May 17, 2016
Assignee: GARDNER DENVER PETROLEUM PUMPS, LLC
Inventor: Scott D. Peterson
-
Publication number: 20150276015
Abstract: A tunnel-style crankshaft assembly is provided having counterweights that serve the purposes of retaining the axial motion of bearings and improving the balance of the crankshaft assembly as a result of having a structure of an increased radial profile. The counterweights can include a mass section and a bearing retaining section. When the counterweight is secured to a crank web, the mass section of the counterweight is axially spaced from a bearing assembly disposed around the web. Therefore, the mass section of the counterweight may extend radially beyond an inner race of the bearing assembly without contacting the bearing assembly's cage. The mass section of the counterweight does not extend radially beyond a radial envelope of a crankshaft support surface thereby permitting the crankshaft assembly to be slidingly inserted into the crankcase.
Type: Application
Filed: March 27, 2014
Publication date: October 1, 2015
Inventor: Scott D. PETERSON
-
Patent number: 6692782
Abstract: Potato products that retain shred integrity and that are sufficiently thin to fit into a standard toaster have been produced. In some embodiments, the potato products contain a filling. The potato products contain a network of shredded potatoes that enables the potato products to retain structural integrity during production and further manipulation of the product. Extrusion and sheeting methods are used to obtain potato products that retain the desirable shred integrity. A method and apparatus is provided for simultaneously cutting and crimping individual food items from a filled extruded or sheeted product.
Type: Grant
Filed: August 28, 2000
Date of Patent: February 17, 2004
Assignee: The Pillsbury Company
Inventors: Susan M. Hayes-Jacobson, Scott D. Peterson