Patents Examined by Steven G Snyder
-
Patent number: 11675501
Abstract: At a data stream management service, a first set of metadata indicating that a first isolated read channel has been associated with a first data stream is stored. The first isolated read channel has an associated read performance limit setting. A second set of metadata indicating that a second isolated read channel, with its own performance limit setting, has been associated with a data stream is also stored. Based on determining that the difference between a metric of read operations associated with the first channel and the read performance limit setting of the first channel meets a first criterion, the service initiates a throttling operation for reads associated with the first channel. The throttling decision is made independently of read metrics of the second channel.
Type: Grant
Filed: September 4, 2020
Date of Patent: June 13, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Vasudeva Gade, Benjamin Warren Mercier, Sayantan Chakravorty, Yasemin Avcular, Charlie Paucard
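The per-channel throttling decision this abstract describes can be pictured with a small sketch (the class and field names below are illustrative assumptions, not from the patent): each channel's observed read metric is compared only against that channel's own limit, never against another channel's metrics.

```python
# Hypothetical sketch of per-channel read throttling: each isolated read
# channel carries its own read performance limit, and the throttling
# decision for one channel ignores the metrics of every other channel.

class IsolatedReadChannel:
    def __init__(self, name, read_limit_per_sec):
        self.name = name
        self.read_limit_per_sec = read_limit_per_sec  # performance limit setting
        self.reads_this_second = 0                    # observed read metric

    def record_read(self):
        self.reads_this_second += 1

    def should_throttle(self):
        # Throttle when this channel's metric meets its own limit;
        # other channels' metrics are never consulted.
        return self.reads_this_second >= self.read_limit_per_sec

ch1 = IsolatedReadChannel("channel-1", read_limit_per_sec=2)
ch2 = IsolatedReadChannel("channel-2", read_limit_per_sec=100)
for _ in range(3):
    ch1.record_read()
print(ch1.should_throttle())  # ch1 exceeded its limit
print(ch2.should_throttle())  # ch2 is judged independently
```

The isolation property is simply that `should_throttle` reads only `self`; a heavy reader on one channel cannot trip throttling on another.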
-
Patent number: 11669424
Abstract: Embodiments are disclosed for automated evaluation of compatibility of a data structure with a user device. An example method includes receiving, by communications circuitry, a set of user device characteristics regarding the user device, and retrieving, by the personalization circuitry, a set of data structure characteristics regarding the data structure. The example method further includes calculating, by the personalization circuitry, a set of characteristic-level compatibility scores, and generating, by the personalization circuitry and based on the set of characteristic-level compatibility scores, a compatibility score for the data structure and the user device. Subsequently, the example method includes generating, by an aggregator and using the generated compatibility score, an indication of relative compatibility of the data structure for the user device, and causing transmission, by communications circuitry, of a control signal to the user device based on the indication of relative compatibility.
Type: Grant
Filed: April 8, 2021
Date of Patent: June 6, 2023
Assignee: GROUPON, INC.
Inventors: Raju Balakrishnan, Vinay K. Deolalikar, Matthew M. Heitz
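The two-stage flow in this abstract (characteristic-level scores first, then an aggregate compatibility score) can be sketched as follows; the scoring rule used here, a mean of per-characteristic matches, is an assumption for illustration only and is not taken from the patent.

```python
# Illustrative sketch of the compatibility-scoring flow: one score per
# characteristic, then an aggregate score for the (device, data structure)
# pair. The exact-match scoring rule is an assumption for illustration.

def characteristic_scores(device_chars, structure_chars):
    # Characteristic-level compatibility scores: 1.0 on a match, else 0.0.
    return {
        key: 1.0 if device_chars[key] == structure_chars.get(key) else 0.0
        for key in device_chars
    }

def compatibility_score(scores):
    # Aggregate the characteristic-level scores into one overall score.
    return sum(scores.values()) / len(scores)

device = {"screen": "small", "os": "android", "bandwidth": "low"}
structure = {"screen": "small", "os": "ios", "bandwidth": "low"}
scores = characteristic_scores(device, structure)
print(compatibility_score(scores))  # 2 of 3 characteristics match
```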
-
Patent number: 11669480
Abstract: In an embodiment, an SOC includes a global communication fabric that includes multiple independent networks having different communication and coherency protocols, and a plurality of input-output (I/O) clusters that includes different sets of local functional circuits. A given I/O cluster may be coupled to one or more of the independent networks and may include a particular set of local functional circuits, a local fabric coupled to the particular set of local functional circuits, and an interface circuit coupled to the local fabric and configured to bridge transactions between the particular set of local functional circuits and the global communication fabric. The interface circuit may include a programmable hardware transaction generator circuit configured to generate a set of test transactions that simulate interactions between the particular set of local functional circuits and a particular one of the one or more independent networks.
Type: Grant
Filed: May 13, 2021
Date of Patent: June 6, 2023
Assignee: Apple Inc.
Inventors: Igor Tolchinsky, Charles J. Fleckenstein, Sagi Lahav, Lital Levy-Rubin
-
Patent number: 11656871
Abstract: An input/output store instruction is handled. A data processing system includes a system nest communicatively coupled to at least one input/output bus by an input/output bus controller. The data processing system further includes at least a data processing unit including a core, system firmware and an asynchronous core-nest interface. The data processing unit is communicatively coupled to the system nest via an aggregation buffer. The system nest is configured to asynchronously load from and/or store data to an external device which is communicatively coupled to the input/output bus. The data processing unit is configured to complete the input/output store instruction before an execution of the input/output store instruction in the system nest is completed.
Type: Grant
Filed: September 21, 2021
Date of Patent: May 23, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Christoph Raisch, Marco Kraemer, Frank Siegfried Lehnert, Matthias Klein, Jonathan D. Bradbury, Christian Jacobi, Brenton Belmar, Peter Dana Driever
-
Patent number: 11652095
Abstract: A discrete three-dimensional (3-D) processor comprises stacked first and second dice. The first die comprises 3-D memory (3D-M) arrays, whereas the second die comprises logic circuits and at least an off-die peripheral-circuit component of the 3D-M array(s). In one preferred embodiment, the first and second dice are vertically stacked. In another preferred embodiment, the first and second dice are face-to-face bonded.
Type: Grant
Filed: October 12, 2022
Date of Patent: May 16, 2023
Assignees: HangZhou HaiCun Information Technology Co., Ltd.
Inventor: Guobiao Zhang
-
Patent number: 11640301
Abstract: Systems and methods are disclosed for duplicate detection for register renaming. For example, a method includes checking a map table for duplicates of a first physical register, wherein the map table stores entries that each map an architectural register of an instruction set architecture to a physical register of a microarchitecture and a duplicate is two or more architectural registers that are mapped to a same physical register; and, responsive to a duplicate of the first physical register in the map table, preventing the first physical register from being added to a free list upon retirement of an instruction that renames an architectural register that was previously mapped to the first physical register to a different physical register, wherein the free list stores entries that indicate which physical registers are available for renaming.
Type: Grant
Filed: April 4, 2022
Date of Patent: May 2, 2023
Assignee: SiFive, Inc.
Inventor: Joshua Smith
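The duplicate check described here can be sketched with simplified data structures (a dict for the map table, a list for the free list; both are illustrative stand-ins, not the patent's hardware): a physical register is returned to the free list on retirement only if no other architectural register still maps to it.

```python
# Sketch of duplicate detection for register renaming: on retirement of a
# renaming instruction, the old physical register is freed only when the
# map table holds no remaining duplicate mapping to it.

def retire_rename(map_table, free_list, arch_reg, new_phys):
    """Rename arch_reg to new_phys; free the previous physical register
    only when no other architectural register still maps to it."""
    old_phys = map_table[arch_reg]
    map_table[arch_reg] = new_phys
    # Duplicate check: does any architectural register still map to old_phys?
    if old_phys not in map_table.values():
        free_list.append(old_phys)

# x1 and x2 both map to p5 (a duplicate), so retiring a rename of x1
# must not put p5 on the free list.
map_table = {"x1": "p5", "x2": "p5"}
free_list = []
retire_rename(map_table, free_list, "x1", "p7")
print(free_list)  # p5 still referenced by x2, so not freed
retire_rename(map_table, free_list, "x2", "p8")
print(free_list)  # last duplicate gone, p5 can be freed
```

Freeing a still-duplicated register would let the renamer hand it out again while live values reference it; the check above is what prevents that.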
-
Patent number: 11621868
Abstract: A device for a serial bus system. The device includes a receiver for receiving a signal from a bus, in which for a message that is exchanged between user stations of the bus system, the bus states of a signal received from the bus in the first communication phase differ from bus states of the signal received in the second communication phase. The receiver generates a digital signal based on the received signal, and outputs the signal to a communication control device to evaluate the data. The receiver uses a first reception threshold and a second reception threshold in the second communication phase to generate the digital signal. The second reception threshold has a negative voltage value or has a voltage value that is greater than the largest voltage value that is driven by a user station of the bus system for a bus state in the second communication phase.
Type: Grant
Filed: December 11, 2019
Date of Patent: April 4, 2023
Assignee: ROBERT BOSCH GMBH
Inventors: Florian Hartwich, Arthur Mutter, Steffen Walker
-
Patent number: 11615307
Abstract: Techniques for mixed-precision data manipulation for neural network data computation are disclosed. A first left group comprising eight bytes of data and a first right group of eight bytes of data are obtained for computation using a processor. A second left group comprising eight bytes of data and a second right group of eight bytes of data are obtained. A sum of products is performed between the first left and right groups and the second left and right groups. The sum of products is performed on bytes of 8-bit integer data. A first result is based on a summation of eight values that are products of the first group's left eight bytes and the second group's left eight bytes. A second result is based on the summation of eight values that are products of the first group's left eight bytes and the second group's right eight bytes. Results are output.
Type: Grant
Filed: August 5, 2020
Date of Patent: March 28, 2023
Assignee: MIPS Tech, LLC
Inventors: James Hippisley Robinson, Sanjay Patel
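Numerically, each result in this abstract is a dot product of two groups of eight 8-bit integers, accumulated at wider precision. A minimal sketch (function name and test values are illustrative, not from the patent):

```python
# Numeric sketch of the 8-byte sum-of-products step: each result is the
# dot product of two eight-element groups of int8 data, accumulated in a
# wider integer so intermediate sums cannot overflow 8 bits.

def dot8(a, b):
    assert len(a) == len(b) == 8
    assert all(-128 <= x <= 127 for x in a + b)  # int8 range check
    return sum(x * y for x, y in zip(a, b))      # wide accumulator

first_left = [1, 2, 3, 4, 5, 6, 7, 8]
second_left = [1, 1, 1, 1, 1, 1, 1, 1]
second_right = [2, 2, 2, 2, 2, 2, 2, 2]

result1 = dot8(first_left, second_left)   # first-left x second-left
result2 = dot8(first_left, second_right)  # first-left x second-right
print(result1, result2)
```

The worst-case magnitude of one such sum is 8 x 128 x 128 = 131072, which is why the accumulator must be wider than the 8-bit operands.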
-
Patent number: 11593016
Abstract: Techniques are provided for serializing replication operations. A plurality of operations are implemented upon a first storage object and are replicated as a plurality of replication operations. An order with which the plurality of replication operations are to be executed upon a second storage object is determined. Execution of the plurality of replication operations upon the second storage object is serialized according to the order.
Type: Grant
Filed: July 28, 2020
Date of Patent: February 28, 2023
Assignee: NetApp, Inc.
Inventors: Akhil Kaushik, Anoop Chakkalakkal Vijayan, Krishna Murthy Chandraiah setty Narasingarayanapeta, Shrey Sengar
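The serialization step here can be sketched as follows; using a sequence number as the ordering key, and a list as the second storage object, are assumptions for illustration only.

```python
# Sketch of serialized replay of replication operations: operations are
# applied to the second storage object strictly in a determined order,
# regardless of the order in which they arrived.

def serialize_and_apply(replication_ops, target):
    # Determine the order (here: by sequence number, an assumed key),
    # then execute the operations one at a time against the target.
    for op in sorted(replication_ops, key=lambda op: op["seq"]):
        target.append(op["data"])
    return target

# Replication operations arrive out of order from the first storage object.
arrived = [{"seq": 2, "data": "B"}, {"seq": 1, "data": "A"}, {"seq": 3, "data": "C"}]
replica = []
print(serialize_and_apply(arrived, replica))  # applied as A, B, C
```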
-
Patent number: 11593114
Abstract: Methods, systems and apparatuses for performing walk operations of single instruction, multiple data (SIMD) instructions are disclosed. One method includes initiating, by a scheduler, a SIMD thread, where the scheduler is operative to schedule the SIMD thread. The method further includes fetching a plurality of instructions for the SIMD thread. The method further includes determining, by a thread arbiter, at least one instruction that is a walk instruction, where the walk instruction iterates a block of instructions for a subset of channels of the SIMD thread, where the walk instruction includes a walk size, and where the walk size is a number of channels in the subset of channels of the SIMD thread that are processed in a walk iteration in association with the walk instruction. The method further includes executing the walk instruction based on the walk size.
Type: Grant
Filed: March 16, 2022
Date of Patent: February 28, 2023
Assignee: Blaize, Inc.
Inventors: Satyaki Koneru, Kamaraj Thangam
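The walk behavior (iterating a block of instructions over successive channel subsets, walk-size channels per iteration) can be sketched as a plain loop; the channel count, walk size, and callback shape below are illustrative assumptions.

```python
# Sketch of a "walk": a block of work is iterated over subsets of a SIMD
# thread's channels, walk_size channels per iteration. The body callback
# stands in for the iterated block of instructions.

def walk(num_channels, walk_size, body):
    iterations = []
    for start in range(0, num_channels, walk_size):
        subset = list(range(start, min(start + walk_size, num_channels)))
        body(subset)  # execute the block for this channel subset
        iterations.append(subset)
    return iterations

processed = []
walk(8, 4, processed.append)  # hypothetical 8-channel thread, walk size 4
print(processed)              # two iterations of four channels each
```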
-
Patent number: 11586194
Abstract: Systems, methods and apparatus of optimizing neural network computations of predictive maintenance of vehicles. For example, a data storage device of a vehicle includes: a host interface configured to receive a sensor data stream from at least one sensor configured on the vehicle; at least one storage media component having a non-volatile memory; and a controller. The non-volatile memory is configured into multiple partitions (e.g., namespaces) having different sets of memory operation settings configured for different types of data related to an artificial neural network (ANN). The partitions include a model partition configured to store model data of the ANN. The sensor data stream is applied in the ANN to predict a maintenance service of the vehicle. The memory units of the model partition can be configured for read, infrequent updates, improved storage capacity, and/or for access in parallel with input/output for the ANN.
Type: Grant
Filed: August 12, 2019
Date of Patent: February 21, 2023
Assignee: Micron Technology, Inc.
Inventors: Robert Richard Noel Bielby, Poorna Kale
-
Patent number: 11586441
Abstract: Systems, apparatuses, and methods for virtualizing a micro-operation cache are disclosed. A processor includes at least a micro-operation cache, a conventional cache subsystem, a decode unit, and control logic. The decode unit decodes instructions into micro-operations which are then stored in the micro-operation cache. The micro-operation cache has limited capacity for storing micro-operations. When new micro-operations are decoded from pending instructions, existing micro-operations are evicted from the micro-operation cache to make room for the new micro-operations. Rather than being discarded, micro-operations evicted from the micro-operation cache are stored in the conventional cache subsystem. This prevents the original instruction from having to be decoded again on subsequent executions.
Type: Grant
Filed: December 17, 2020
Date of Patent: February 21, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: John Kalamatianos, Jagadish B. Kotra
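The eviction path in this abstract (evicted micro-ops are parked in the conventional cache instead of discarded, so a later hit there skips re-decoding) can be sketched as two dictionaries; the capacities, victim policy, and dict-based caches are illustrative assumptions, not the patent's hardware.

```python
# Sketch of a virtualized micro-op cache: entries evicted from the small
# micro-op cache are parked in a larger backing cache rather than
# discarded, so a later fetch of the same PC avoids a second decode.

class VirtualizedUopCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.uop_cache = {}      # small, fixed-capacity micro-op cache
        self.backing_cache = {}  # stands in for the conventional cache subsystem
        self.decodes = 0

    def fetch(self, pc, decode):
        if pc in self.uop_cache:
            return self.uop_cache[pc]
        if pc in self.backing_cache:           # hit on evicted micro-ops:
            uops = self.backing_cache.pop(pc)  # no re-decode needed
        else:
            uops = decode(pc)
            self.decodes += 1
        if len(self.uop_cache) >= self.capacity:
            # Evict the oldest entry into the backing cache, not the trash.
            victim_pc, victim_uops = next(iter(self.uop_cache.items()))
            del self.uop_cache[victim_pc]
            self.backing_cache[victim_pc] = victim_uops
        self.uop_cache[pc] = uops
        return uops

cache = VirtualizedUopCache(capacity=1)
decode = lambda pc: ["uop@%x" % pc]
cache.fetch(0x10, decode)  # decoded
cache.fetch(0x20, decode)  # decoded; 0x10 evicted into backing cache
cache.fetch(0x10, decode)  # served from backing cache, no third decode
print(cache.decodes)
```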
-
Patent number: 11567762
Abstract: Interactions between a classical computing system and a quantum computing system can be structured to increase the effective memory available to hold instructions for a quantum processor. The system stores a schedule of compiled quantum processing instructions in a memory storage location on a classical computing system. A small program memory is included in close proximity to a control system for the quantum processor on the quantum computing system. The classical computing system sends a subset of instructions from the schedule of quantum instructions to the program memory. The control system manages execution of the instructions by accessing them at the program memory and configuring the quantum processor accordingly. While the quantum processor executes the instructions, additional instructions are transferred from the classical computing system to the program memory to await execution.
Type: Grant
Filed: December 2, 2021
Date of Patent: January 31, 2023
Assignee: Rigetti & Co, LLC
Inventor: Robert Stanley Smith
-
Patent number: 11567765
Abstract: Embodiments detailed herein relate to matrix operations. In particular, the loading of a matrix (tile) from memory. For example, support for a loading instruction is described in the form of decode circuitry to decode an instruction having fields for an opcode, a destination matrix operand identifier, and source memory information, and execution circuitry to execute the decoded instruction to load groups of strided data elements from memory into configured rows of the identified destination matrix operand.
Type: Grant
Filed: July 1, 2017
Date of Patent: January 31, 2023
Assignee: Intel Corporation
Inventors: Robert Valentine, Menachem Adelman, Milind B. Girkar, Zeev Sperber, Mark J. Charney, Bret L. Toll, Rinat Rappoport, Jesus Corbal, Stanislav Shwartsman, Dan Baum, Igor Yanover, Alexander F. Heinecke, Barukh Ziv, Elmoustapha Ould-Ahmed-Vall, Yuri Gebil
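The load this abstract describes (groups of strided data elements pulled from memory into tile rows) can be sketched over a flat list; the row-major stride model and the function name are assumptions for illustration, not the instruction's actual encoding.

```python
# Sketch of a strided tile load: each row of the destination tile is a
# contiguous group of elements, and successive rows start `stride`
# elements apart in flat memory.

def load_tile(memory, base, rows, cols, stride):
    """Load a rows x cols tile whose rows begin stride elements apart."""
    return [memory[base + r * stride : base + r * stride + cols]
            for r in range(rows)]

# An 8-element flat "memory" holding a 2x3 region with row stride 4.
memory = list(range(8))
tile = load_tile(memory, base=0, rows=2, cols=3, stride=4)
print(tile)  # two rows, gathered 4 elements apart
```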
-
Patent number: 11567782
Abstract: Various systems and methods for configuring a pluggable computing device are described herein. A pluggable computing device may be configured to be compatible with a pluggable host system using a default communication channel to obtain configuration settings and configure a programmable logic device on the pluggable computing device. The pluggable computing device may perform chain of trust processing on the pluggable host system. The pluggable computing device may be disposed on a compute card, which may include a heat sink in a particular configuration.
Type: Grant
Filed: January 3, 2020
Date of Patent: January 31, 2023
Assignee: Intel Corporation
Inventors: Yen Hsiang Chew, Eng Choon Tan
-
Patent number: 11567771
Abstract: A system for processing gather and scatter instructions can implement a front-end subsystem, a back-end subsystem, or both. The front-end subsystem includes a prediction unit configured to determine a predicted quantity of coalesced memory access operations required by an instruction. A decode unit converts the instruction into a plurality of access operations based on the predicted quantity, and transmits the plurality of access operations and an indication of the predicted quantity to an issue queue. The back-end subsystem includes a load-store unit that receives a plurality of access operations corresponding to an instruction, determines a subset of the plurality of access operations that can be coalesced, and forms a coalesced memory access operation from the subset. A queue stores multiple memory addresses for a given load-store entry to provide for execution of coalesced memory accesses.
Type: Grant
Filed: July 30, 2020
Date of Patent: January 31, 2023
Assignees: Marvell Asia PTE, LTD., Cray Inc.
Inventors: Harold Wade Cain, III, Nagesh Bangalore Lakshminarayana, Daniel Jonathan Ernst, Sanyam Mehta
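The coalescing step in the back-end subsystem can be pictured as grouping element accesses that fall in the same memory block; the 64-byte block size and the grouping rule below are illustrative assumptions, not the patent's mechanism.

```python
# Sketch of coalescing for a gather: element accesses landing in the same
# memory block (e.g. a 64-byte line) are merged into a single coalesced
# access carrying all member addresses.

def coalesce(addresses, block_size=64):
    groups = {}
    for addr in addresses:
        groups.setdefault(addr // block_size, []).append(addr)
    # One coalesced access per touched block.
    return [sorted(addrs) for _, addrs in sorted(groups.items())]

# A gather touching four addresses needs only two line accesses here.
accesses = coalesce([0, 8, 120, 72])
print(len(accesses))  # two coalesced accesses
print(accesses)       # addresses grouped by 64-byte line
```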
-
Patent number: 11562231
Abstract: A neural network architecture is used that reduces the processing load of implementing the neural network. This network architecture may thus be used for reduced-bit processing devices. The architecture may limit the number of bits used for processing and reduce processing to prevent data overflow at individual calculations of the neural network. To implement this architecture, the number of bits used to represent inputs at levels of the network and the related filter masks may also be modified to ensure the number of bits of the output does not overflow the resulting capacity of the reduced-bit processor. To additionally reduce the load for such a network, the network may implement a "starconv" structure that permits the incorporation of nearby nodes in a layer to balance processing requirements and permit the network to learn from context of other nodes.
Type: Grant
Filed: September 3, 2019
Date of Patent: January 24, 2023
Assignee: Tesla, Inc.
Inventors: Forrest Nelson Iandola, Harsimran Singh Sidhu, Yiqi Hou
-
Patent number: 11561796
Abstract: A computer-implemented method to prefetch non-sequential instruction addresses (I/A) includes determining, by a prefetch system, that a first access attempt of a first I/A in a cache is a first miss, wherein the first I/A is included in a string of I/A's. The method further includes storing the first I/A in a linked miss-to-miss (LMTM) table. The method also includes determining that a second access attempt of a second I/A in the cache is a second miss, wherein the second I/A is included in the string of I/A's. The method includes linking, in the LMTM table, the second miss to the first miss. The method also includes prefetching, in response to a third access attempt of the first I/A, the second I/A in the cache.
Type: Grant
Filed: July 15, 2020
Date of Patent: January 24, 2023
Assignee: International Business Machines Corporation
Inventors: Naga P. Gorti, Mohit Karve
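The linked miss-to-miss idea can be sketched with a plain dictionary standing in for the LMTM table (the class, its fields, and the addresses are illustrative assumptions): consecutive misses are linked, so a later access to the first address triggers a prefetch of the address that missed right after it.

```python
# Sketch of an LMTM prefetcher: each cache miss is linked to the miss
# that followed it, and a later access to a linked address prefetches
# its recorded successor.

class LmtmPrefetcher:
    def __init__(self):
        self.lmtm = {}          # miss address -> next miss address
        self.last_miss = None
        self.prefetched = []

    def access(self, addr, hit):
        if not hit:
            if self.last_miss is not None:
                self.lmtm[self.last_miss] = addr  # link miss to miss
            self.last_miss = addr
        if addr in self.lmtm:
            # Later access to a linked address: prefetch its successor.
            self.prefetched.append(self.lmtm[addr])

pf = LmtmPrefetcher()
pf.access(0xA0, hit=False)  # first miss
pf.access(0xB4, hit=False)  # second miss, linked to the first
pf.access(0xA0, hit=True)   # re-access triggers prefetch of 0xB4
print([hex(a) for a in pf.prefetched])
```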
-
Patent number: 11556342
Abstract: Techniques are disclosed for utilizing configurable delays in an instruction stream. A set of instructions to be executed on a set of engines are generated. The set of engines are distributed between a set of hardware elements. A set of configurable delays are inserted into the set of instructions. Each of the set of configurable delays includes an adjustable delay amount that delays an execution of the set of instructions on the set of engines. The adjustable delay amount is adjustable by a runtime application that facilitates the execution of the set of instructions on the set of engines. The runtime application is configured to determine a runtime condition associated with the execution of the set of instructions on the set of engines and to adjust the set of configurable delays based on the runtime condition.
Type: Grant
Filed: September 24, 2020
Date of Patent: January 17, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Ravi Kumar, Drazen Borkovic
-
Patent number: 11556341
Abstract: Systems, methods, and apparatuses relating to instructions to compartmentalize memory accesses and execution (e.g., non-speculative and speculative) are described.
Type: Grant
Filed: June 7, 2021
Date of Patent: January 17, 2023
Assignee: Intel Corporation
Inventors: Ravi Sahita, Deepak Gupta, Vedvyas Shanbhogue, David Hansen, Jason W. Brandt, Joseph Nuzman, Mingwei Zhang