Patents Examined by Cheng-Yuan Tseng
  • Patent number: 11609868
    Abstract: One example system for preventing data loss during memory blackout events comprises a memory device, a sensor, and a controller operably coupled to the memory device and the sensor. The controller is configured to perform one or more operations that coordinate at least one memory blackout event of the memory device and at least one data transmission of the sensor.
    Type: Grant
    Filed: December 31, 2020
    Date of Patent: March 21, 2023
    Assignee: Waymo LLC
    Inventors: Sabareeshkumar Ravikumar, Daniel Rosenband
  • Patent number: 11604743
    Abstract: Described are techniques including a method comprising detecting a deallocated Input/Output (I/O) queue associated with a first entity in a Non-Volatile Memory Express (NVMe) storage system. The method further comprises broadcasting an Asynchronous Event Request (AER) message indicating I/O queue availability based on the deallocated I/O queue. The method further comprises allocating, in response to the AER message, a new I/O queue to a second entity in the NVMe storage system.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: March 14, 2023
    Assignee: International Business Machines Corporation
    Inventors: Kushal S. Patel, Sarvesh S. Patel, Subhojit Roy
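The deallocate-then-broadcast-then-reallocate flow described above can be sketched as a small event-driven model. This is an illustrative simulation, not the NVMe API: the class and method names (`NvmeController`, `aer_listeners`) are assumptions for demonstration.

```python
# Hypothetical sketch: when an I/O queue tied to one entity is deallocated,
# the controller broadcasts an AER-style availability notification and a
# listening entity claims the freed queue.

class NvmeController:
    def __init__(self, max_queues):
        self.free_queues = list(range(max_queues))
        self.allocated = {}       # queue id -> entity name
        self.aer_listeners = []   # callbacks waiting on AER messages

    def allocate(self, entity):
        qid = self.free_queues.pop(0)
        self.allocated[qid] = entity
        return qid

    def deallocate(self, qid):
        self.allocated.pop(qid)
        self.free_queues.append(qid)
        # Broadcast an Asynchronous Event Request (AER) style notification.
        for listener in self.aer_listeners:
            listener(qid)

ctrl = NvmeController(max_queues=1)
q = ctrl.allocate("host_A")
claimed = []
ctrl.aer_listeners.append(lambda qid: claimed.append(ctrl.allocate("host_B")))
ctrl.deallocate(q)   # frees the queue; the AER listener claims it for host_B
```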
  • Patent number: 11604652
    Abstract: A digital signal processor having at least one streaming address generator, each with dedicated hardware, for generating addresses for writing multi-dimensional streaming data that comprises a plurality of elements. Each streaming address generator is configured to generate a plurality of offsets to address the streaming data, and each of the plurality of offsets corresponds to a respective one of the plurality of elements. The address of each of the plurality of elements is the respective one of the plurality of offsets combined with a base address.
    Type: Grant
    Filed: February 1, 2021
    Date of Patent: March 14, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Timothy David Anderson, Duc Quang Bui, Joseph Zbiciak, Sahithi Krishna, Soujanya Narnur
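The offset-plus-base addressing scheme above can be modeled in a few lines. This is a behavioral sketch under assumed parameters (per-dimension element counts and byte strides); it is not TI's ISA terminology.

```python
import itertools

def stream_offsets(counts, strides):
    """Yield one offset per element of a multi-dimensional stream.

    counts  - number of elements along each dimension
    strides - byte stride along each dimension (illustrative values)
    """
    for idx in itertools.product(*(range(c) for c in counts)):
        yield sum(i * s for i, s in zip(idx, strides))

# Each element's address is its generated offset combined with a base address.
base = 0x1000
offsets = list(stream_offsets(counts=[2, 3], strides=[16, 4]))
addrs = [base + off for off in offsets]
```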
  • Patent number: 11604973
    Abstract: Some embodiments provide a method for training parameters of a machine-trained (MT) network. The method receives an MT network with multiple layers of nodes, each of which computes an output value based on a set of input values and a set of trained weight values. Each layer has a set of allowed weight values. For a first layer with a first set of allowed weight values, the method defines a second layer with nodes corresponding to each of the nodes of the first layer, each second-layer node receiving the same input values as the corresponding first-layer node. The second layer has a second, different set of allowed weight values, with the output values of the nodes of the first layer added to the output values of the corresponding nodes of the second layer to compute output values that are passed to a subsequent layer. The method trains the weight values.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: March 14, 2023
    Assignee: PERCEIVE CORPORATION
    Inventors: Eric A. Sather, Steven L. Teig
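The forward pass of a paired layer can be sketched as follows: two layers with different allowed weight sets see the same inputs, and their outputs are summed before the next layer. The particular allowed sets and the nearest-value projection are illustrative assumptions, not the patent's training procedure.

```python
import numpy as np

def project(weights, allowed):
    # Snap each weight to the nearest value in the layer's allowed set.
    allowed = np.asarray(allowed)
    return allowed[np.abs(weights[..., None] - allowed).argmin(axis=-1)]

rng = np.random.default_rng(0)
x = rng.standard_normal(4)

# First layer and its paired second layer: same inputs, different allowed sets.
w1 = project(rng.standard_normal((3, 4)), allowed=[-1.0, 0.0, 1.0])
w2 = project(rng.standard_normal((3, 4)), allowed=[-0.5, 0.0, 0.5])

# Outputs of corresponding nodes are added before passing to the next layer.
out = w1 @ x + w2 @ x
```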
  • Patent number: 11599360
    Abstract: A synaptic coprocessor may include a memory configured to store a plurality of Very Long Data Words (VLDWs), each serving as a test VLDW having a length in the range of about one thousand to one million or more bits and containing encoded information that is distributed across the length of the VLDW. A processor generates search terms, and a processing logic unit receives a test VLDW from the memory, receives a search term from the processor, and computes a Boolean inner product between the search term and the test VLDW read from memory that is indicative of the similarity between the two. Optionally, buffers within logic circuits of processing pipelines may receive the test VLDWs.
    Type: Grant
    Filed: April 28, 2021
    Date of Patent: March 7, 2023
    Assignee: COGNITIVE SCIENCE & SOLUTIONS, INC.
    Inventors: David Sherwood, Terry A. Higbee
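The Boolean inner product described above reduces to the popcount of a bitwise AND: the more set bits the search term and the test word share, the higher the similarity score. A minimal sketch (the word here is a few bits; real VLDWs span roughly a thousand to a million bits):

```python
def boolean_inner_product(vldw: int, term: int) -> int:
    # Count the bit positions where both the VLDW and the search term are 1.
    return bin(vldw & term).count("1")

vldw = 0b101101100101
term = 0b001101000001
score = boolean_inner_product(vldw, term)   # similarity measure
```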
  • Patent number: 11593282
    Abstract: A dual memory Secure Digital (SD) card is provided which allows for remote data updates without disruption to a currently executing program, as well as a system and method that utilize the dual memory SD card. The dual memory SD card may include a primary memory, an independent secondary memory, and a microcontroller or Application Specific Integrated Circuit (ASIC) that can load either memory upon boot up of a host computer. The dual memory SD card may also include a wireless interface, such as Wi-Fi or Bluetooth, in addition to a standard SD pin interface. An automated data synchronization system is provided which allows a new version of data to be uploaded onto the secondary memory of the dual memory SD card while an existing data version is running on that same dual memory SD card and swapped into operation upon the next reboot of a host device.
    Type: Grant
    Filed: May 12, 2021
    Date of Patent: February 28, 2023
    Inventor: Francesco E DeAngelis
  • Patent number: 11586907
    Abstract: Embodiments of a device include an integrated circuit, a reconfigurable stream switch formed in the integrated circuit, and an arithmetic unit coupled to the reconfigurable stream switch. The arithmetic unit has a plurality of inputs and at least one output, and the arithmetic unit is solely dedicated to performance of a plurality of parallel operations. Each one of the plurality of parallel operations carries out a portion of the formula: output=AX+BY+C.
    Type: Grant
    Filed: February 20, 2019
    Date of Patent: February 21, 2023
    Assignees: STMICROELECTRONICS S.r.l., STMICROELECTRONICS INTERNATIONAL N.V.
    Inventors: Surinder Pal Singh, Giuseppe Desoli, Thomas Boesch
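The formula carried out by the arithmetic unit, output = AX + BY + C, can be modeled lane-wise, with each parallel operation evaluating the formula on its own operands. Lane count and operand values below are illustrative.

```python
def axbyc(a, x, b, y, c):
    # One parallel operation per lane: output = A*X + B*Y + C.
    return [ai * xi + bi * yi + ci
            for ai, xi, bi, yi, ci in zip(a, x, b, y, c)]

out = axbyc(a=[1, 2], x=[10, 10], b=[3, 4], y=[5, 5], c=[7, 8])
```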
  • Patent number: 11586440
    Abstract: A computer-implemented method of performing a link stack based prefetch augmentation using a sequential prefetching includes observing a call instruction in a program being executed, and pushing a return address onto a link stack for processing the next instruction. A stream of instructions is prefetched starting from a cached line address of the next instruction and is stored in an instruction cache.
    Type: Grant
    Filed: June 1, 2021
    Date of Patent: February 21, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Naga P. Gorti, Mohit Karve
  • Patent number: 11579871
    Abstract: Embodiments of systems, apparatuses, and methods for performing vector-packed controllable sine and/or cosine operations in a processor are described. For example, execution circuitry executes a decoded instruction to compute at least a real output value and an imaginary output value based on at least a cosine calculation and a sine calculation, the cosine and sine calculations each based on an index value from a packed data source operand, add the index value with an index increment value from the packed data source operand to create an updated index value, and store the real output value, the imaginary output value, and the updated index value to a packed data destination operand.
    Type: Grant
    Filed: June 14, 2021
    Date of Patent: February 14, 2023
    Assignee: Intel Corporation
    Inventors: Venkateswara R. Madduri, Elmoustapha Ould-Ahmed-Vall, Robert Valentine, Jesus Corbal, Mark J. Charney, Carl Murray, Milind Girkar, Bret Toll
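The per-element behavior described in the abstract can be sketched as a scalar step: from an index and an index increment taken from the source operand, produce cos(index) and sin(index) as the real and imaginary outputs plus the updated index. The operand layout and function name are assumptions for illustration.

```python
import math

def sincos_step(index, index_increment):
    # Real/imaginary outputs from cosine and sine of the index value,
    # plus the updated index for the next iteration.
    real = math.cos(index)
    imag = math.sin(index)
    return real, imag, index + index_increment

real, imag, next_index = sincos_step(0.0, math.pi / 2)
```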
  • Patent number: 11580038
    Abstract: A high-capacity system memory may be built from quasi-volatile (QV) memory circuits, logic circuits, and static random-access memory (SRAM) circuits. Using the SRAM circuits as buffers or cache for the QV memory circuits, the system memory may achieve the access latency performance of the SRAM circuits and may be used as code memory. The system memory is also capable of direct memory access (DMA) operations and includes an arithmetic logic unit for performing computational memory tasks. The system memory may include one or more embedded processors. In addition, the system memory may be configured for multi-channel memory accesses by multiple host processors over multiple host ports. The system memory may be provided in the dual-in-line memory module (DIMM) format.
    Type: Grant
    Filed: February 5, 2021
    Date of Patent: February 14, 2023
    Assignee: SUNRISE MEMORY CORPORATION
    Inventors: Robert D. Norman, Eli Harari, Khandker Nazrul Quader, Frank Sai-keung Lee, Richard S. Chernicoff, Youn Cheul Kim, Mehrdad Mofidi
  • Patent number: 11574100
    Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, an integrated sensor device may be configured to execute instructions with matrix operands and configured with: a sensor to generate measurements of stimuli; random access memory to store instructions executable by the Deep Learning Accelerator and store matrices of an Artificial Neural Network; a host interface connectable to a host system; and a controller to store the measurements generated by the sensor into the random access memory as an input to the Artificial Neural Network. After the Deep Learning Accelerator generates in the random access memory an output of the Artificial Neural Network by executing the instructions to process the input, the controller may communicate the output to a host system through the host interface.
    Type: Grant
    Filed: June 19, 2020
    Date of Patent: February 7, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Poorna Kale
  • Patent number: 11567895
    Abstract: In an embodiment, a host controller includes a clock control circuit to cause the host controller to communicate a clock signal on a clock line of an interconnect, the clock control circuit to receive an indication that a first device is to send information to the host controller and to dynamically release control of the clock line of the interconnect to enable the first device to drive a second clock signal onto the clock line of the interconnect for communication with the information. Other embodiments are described and claimed.
    Type: Grant
    Filed: June 3, 2021
    Date of Patent: January 31, 2023
    Assignee: Intel Corporation
    Inventors: Kenneth P. Foust, Amit Kumar Srivastava, Nobuyuki Suzuki
  • Patent number: 11567778
    Abstract: Techniques are disclosed for reordering operations of a neural network to improve runtime efficiency. In some examples, a compiler receives a description of the neural network comprising a plurality of operations. The compiler may determine which execution engine of a plurality of execution engines is to perform each of the plurality of operations. The compiler may determine an order of performance associated with the plurality of operations. The compiler may identify a runtime inefficiency based on the order of performance and a hardware usage for each of the plurality of operations. An operation may be reordered to reduce the runtime inefficiency. Instructions may be compiled based on the plurality of operations, which include the reordered operation.
    Type: Grant
    Filed: April 28, 2021
    Date of Patent: January 31, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Jeffrey T. Huynh, Drazen Borkovic, Jindrich Zejda, Randy Renfu Huang, Ron Diamant
  • Patent number: 11569976
    Abstract: One example includes an isochronous receiver system. The system includes a pulse receiver configured to receive an input data signal from a transmission line and to convert the input data signal to a pulse signal. The system also includes a converter system comprising a phase converter system. The phase converter system includes a plurality of pulse converters associated with a respective plurality of sampling windows across a period of an AC clock signal. At least two of the sampling windows overlap at any given phase of the AC clock signal, such that the converter system is configured to generate an output pulse signal that is phase-aligned with at least one of a plurality of sampling phases of the AC clock signal based on associating the pulse signal with at least two of the sampling windows.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: January 31, 2023
    Assignee: NORTHROP GRUMMAN SYSTEMS CORPORATION
    Inventors: Brian Lee Koehler, Corey Arthur Kegerreis, Haitao O. Dai, Quentin P. Herr
  • Patent number: 11556769
    Abstract: In some embodiments, a superconducting parametric amplification neural network (SPANN) includes neurons that operate in the analog domain, and a fanout network coupling the neurons that operates in the digital domain. Each neuron is provided one or more input currents having a resolution of several bits. The neuron weights the currents, sums the weighted currents with an optional bias or threshold current, then applies a nonlinear activation function to the result. The nonlinear function is implemented using a quantum flux parametron (QFP), thereby simultaneously amplifying and digitizing the output current signal. The digitized output of some or all neurons in each layer is provided to the next layer using a fanout network that operates to preserve the digital information held in the current.
    Type: Grant
    Filed: April 28, 2020
    Date of Patent: January 17, 2023
    Assignee: Massachusetts Institute of Technology
    Inventor: Alexander Wynn
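One SPANN neuron's behavior can be sketched as weighting and summing its input currents with a bias, then applying a nonlinear activation; since the QFP stage both amplifies and digitizes, the output is modeled here as a binary level. Parameter values are illustrative, not from the patent.

```python
def spann_neuron(currents, weights, bias=0.0):
    # Weight the input currents, sum with the bias current, then apply a
    # nonlinear activation; the QFP-like stage digitizes the result.
    total = sum(w, ) if False else sum(w * i for w, i in zip(weights, currents)) + bias
    return 1 if total > 0 else 0

y = spann_neuron(currents=[0.3, -0.1, 0.4], weights=[1.0, 2.0, -0.5])
```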
  • Patent number: 11556381
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributing machine learning workloads, e.g., computations for training a neural network or computing an inference using a neural network, across multiple hardware accelerators.
    Type: Grant
    Filed: May 6, 2022
    Date of Patent: January 17, 2023
    Assignee: Google LLC
    Inventors: Jeffrey Adgate Dean, Sudip Roy, Michael Acheson Isard, Aakanksha Chowdhery, Brennan Saeta, Chandramohan Amyangot Thekkath, Daniel William Hurt, Hyeontaek Lim, Laurent El Shafey, Parker Edward Schuh, Paul Ronald Barham, Ruoming Pang, Ryan Sepassi, Sanjay Ghemawat, Yonghui Wu
  • Patent number: 11544062
    Abstract: An apparatus and method for pairing store operations. For example, one embodiment of a processor comprises: a grouping eligibility checker to evaluate a plurality of store instructions based on a set of grouping rules to determine whether two or more of the plurality of store instructions are eligible for grouping; and a dispatcher to simultaneously dispatch a first group of store instructions of the plurality of store instructions determined to be eligible for grouping by the grouping eligibility checker.
    Type: Grant
    Filed: March 28, 2020
    Date of Patent: January 3, 2023
    Assignee: Intel Corporation
    Inventors: Raanan Sade, Igor Yanover, Stanislav Shwartsman, Muhammad Taher, David Zysman, Liron Zur, Yiftach Gilad
  • Patent number: 11537927
    Abstract: A method of mitigating quantum readout errors by stochastic matrix inversion includes performing a plurality of quantum measurements on a plurality of qubits having predetermined plurality of states to obtain a plurality of measurement outputs; selecting a model for a matrix linking the predetermined plurality of states to the plurality of measurement outputs, the model having a plurality of model parameters, wherein a number of the plurality of model parameters grows less than exponentially with a number of the plurality of qubits; training the model parameters to minimize a loss function that compares predictions of the model with the matrix; computing an inverse of the model based on the trained model parameters; and providing the computed inverse of the model to a noise prone quantum readout of the plurality of qubits to obtain a substantially noise free quantum readout.
    Type: Grant
    Filed: February 14, 2020
    Date of Patent: December 27, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Sergey Bravyi, Jay M. Gambetta, David C. Mckay, Sarah E. Sheldon
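A much-simplified, single-qubit illustration of readout-error mitigation by matrix inversion: the patent trains a parameterized model whose size grows sub-exponentially in the qubit count, but for one qubit the full 2x2 response matrix can simply be inverted directly. The calibration numbers below are invented for the example.

```python
import numpy as np

# Columns: prepared state; rows: measured outcome. E.g. preparing |0> yields
# outcome 0 with probability 0.95 and outcome 1 with probability 0.05.
response = np.array([[0.95, 0.10],
                     [0.05, 0.90]])

true_dist = np.array([0.7, 0.3])        # ideal outcome distribution
measured = response @ true_dist          # what the noisy readout reports
mitigated = np.linalg.inv(response) @ measured   # recover the ideal readout
```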
  • Patent number: 11537856
    Abstract: The present invention relates to digital circuits for evaluating neural engineering framework style neural networks. The digital circuits comprise at least one on-chip memory, a plurality of non-linear components, an external system, a first spatially parallel matrix multiplication, a second spatially parallel matrix multiplication, an error signal, a plurality of sets of factorized network weights, and an input signal. The plurality of sets of factorized network weights comprises a first set of factorized network weights and a second set of factorized network weights. The first spatially parallel matrix multiplication combines the input signal with the first set of factorized network weights, called the encoder weight matrix, to produce an encoded value. The non-linear components are hardware-simulated neurons which accept said encoded value to produce a distributed neural activity.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: December 27, 2022
    Assignee: APPLIED BRAIN RESEARCH INC.
    Inventors: Benjamin Jacob Morcos, Christopher David Eliasmith, Nachiket Ganesh Kapre
  • Patent number: 11531631
    Abstract: The number of QoS group execution IO with respect to a logical device is retained in both an inter-controller shared memory and an in-controller shared memory. A processor of the storage device compares the number of in-controller execution IO with an update interval threshold value set by the following expression: Update Interval Threshold Value = ("QoS Group IO Upper Limit Value" − ("Number of QoS Group Execution IO" in the inter-controller shared memory + "Number of In-Controller Execution IO")) × "Margin Ratio Coefficient" ÷ "Number of Controllers in System". The processor then adds the number of in-controller execution IO to the number of QoS group execution IO in the inter-controller shared memory, and performs rewriting in the inter-controller shared memory, when the number of in-controller execution IO is greater than or equal to the update interval threshold value.
    Type: Grant
    Filed: March 12, 2021
    Date of Patent: December 20, 2022
    Assignee: HITACHI, LTD.
    Inventor: Chenqi Zhu
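The update-interval threshold expression in the abstract transcribes directly into code (variable names shortened; the sample counts below are invented for illustration):

```python
def update_interval_threshold(qos_upper_limit, qos_group_exec_io,
                              in_controller_exec_io, margin_ratio,
                              num_controllers):
    # ("QoS Group IO Upper Limit Value"
    #   - ("Number of QoS Group Execution IO" + "Number of In-Controller
    #      Execution IO")) * "Margin Ratio Coefficient" / "Number of
    # Controllers in System"
    return ((qos_upper_limit - (qos_group_exec_io + in_controller_exec_io))
            * margin_ratio / num_controllers)

threshold = update_interval_threshold(1000, 300, 100, 0.5, 2)
# The inter-controller shared memory is rewritten only when the in-controller
# execution IO count reaches the threshold, limiting cross-controller traffic.
should_update = 100 >= threshold
```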