Patents Examined by Cheng-Yuan Tseng
  • Patent number: 11966358
    Abstract: A processing device comprises a first set of processors comprising a first processor and a second processor, each of which comprises at least one controllable port, a first memory operably coupled to the first set of processors, at least one forward data line configured for one-way transmission of data in a forward direction between the first set of processors, and at least one backward data line configured for one-way transmission of data in a backward direction between the first set of processors, wherein the first set of processors are operably coupled in series via the at least one forward data line and the at least one backward data line.
    Type: Grant
    Filed: August 9, 2023
    Date of Patent: April 23, 2024
    Assignee: Rebellions Inc.
    Inventors: Wongyu Shin, Juyeong Yoon, Sangeun Je
  • Patent number: 11966633
    Abstract: An NVM algorithm generator that evaluates a Liberty file characterizing an NVM module and a memory view of the NVM module that identifies ports and associated operations of the NVM module to generate a control algorithm. The control algorithm includes a read algorithm that includes an order of operations for assigning values to ports of the NVM module to assert a read condition of a strobe port, executing a memory read on the NVM module and setting values to the ports on the NVM module to assert a complement of a program condition. The control algorithm also includes a program algorithm that includes an order of operations for assigning values to ports of the NVM module to assert the program condition of the strobe port, executing a memory write and setting values to the ports on the NVM module to assert the complement of the program condition.
    Type: Grant
    Filed: July 13, 2022
    Date of Patent: April 23, 2024
    Assignee: Cadence Design Systems, Inc.
    Inventors: Steven L. Gregor, Puneet Arora
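A hedged illustration of the entry above: the generated control algorithm is essentially an ordered list of port assignments around a memory access. The port names (CEB, WEB, STROBE) and values below are hypothetical stand-ins; the real generator derives them from the Liberty file and the memory view of the NVM module.

```python
# Sketch of the shape of a generated control algorithm: an ordered sequence of
# (port, value) assignments that asserts the strobe condition, performs the
# access, and then asserts the complement of the program condition.
# Port names and polarities are illustrative assumptions only.

read_algorithm = [
    ("CEB", 0),                        # enable the NVM module
    ("WEB", 1),                        # select read, not write
    ("STROBE", 1),                     # assert the read condition of the strobe port
    ("<execute memory read>", None),
    ("STROBE", 0),                     # assert the complement of the program condition
]

program_algorithm = [
    ("CEB", 0),
    ("WEB", 0),                        # select write
    ("STROBE", 1),                     # assert the program condition of the strobe port
    ("<execute memory write>", None),
    ("STROBE", 0),                     # complement of the program condition
]

for port, value in read_algorithm:
    print(port if value is None else f"set {port} = {value}")
```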
  • Patent number: 11966343
    Abstract: A storage device is disclosed. The storage device may include a storage for data and a controller to process an input/output (I/O) request from a host processor on the data in the storage. A computational storage unit may implement at least one service for execution on the data in the storage. A command router may route a command received from the host processor to the controller or the computational storage unit based at least in part on the command.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: April 23, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ramzi Ammari, Changho Choi
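The routing decision described in the entry above can be illustrated with a short sketch. This is not Samsung's implementation; the opcode names, the service set, and the two handler classes are hypothetical stand-ins for the controller and the computational storage unit.

```python
# Hypothetical sketch of a command router that forwards host commands either to
# the storage controller (ordinary I/O) or to a computational storage unit
# (compute offload), based at least in part on the command itself.

from dataclasses import dataclass

@dataclass
class Command:
    opcode: str          # e.g. "read", "write", "checksum"
    payload: bytes = b""

class StorageController:
    def handle(self, cmd: Command) -> str:
        return f"controller handled {cmd.opcode}"

class ComputationalStorageUnit:
    SERVICES = {"exec_program", "filter", "checksum"}   # assumed service names
    def handle(self, cmd: Command) -> str:
        return f"compute unit ran service {cmd.opcode}"

class CommandRouter:
    def __init__(self, controller, csu):
        self.controller = controller
        self.csu = csu
    def route(self, cmd: Command) -> str:
        # Compute-service opcodes go to the computational storage unit,
        # everything else to the controller.
        if cmd.opcode in ComputationalStorageUnit.SERVICES:
            return self.csu.handle(cmd)
        return self.controller.handle(cmd)

router = CommandRouter(StorageController(), ComputationalStorageUnit())
print(router.route(Command("read")))        # -> controller
print(router.route(Command("checksum")))    # -> compute unit
```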
  • Patent number: 11960971
    Abstract: A method of mitigating quantum readout errors by stochastic matrix inversion includes performing a plurality of quantum measurements on a plurality of qubits having a predetermined plurality of states to obtain a plurality of measurement outputs; selecting a model for a matrix linking the predetermined plurality of states to the plurality of measurement outputs, the model having a plurality of model parameters, wherein a number of the plurality of model parameters grows less than exponentially with a number of the plurality of qubits; training the model parameters to minimize a loss function that compares predictions of the model with the matrix; computing an inverse of the model based on the trained model parameters; and providing the computed inverse of the model to a noise-prone quantum readout of the plurality of qubits to obtain a substantially noise-free quantum readout.
    Type: Grant
    Filed: November 18, 2022
    Date of Patent: April 16, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Sergey Bravyi, Jay M. Gambetta, David C. Mckay, Sarah E. Sheldon
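The scaling idea in the entry above (model parameters growing less than exponentially with qubit count) can be illustrated with the common tensor-product readout model: one 2x2 confusion matrix per qubit, inverted per qubit and applied to the measured distribution. This is a generic sketch of that style of mitigation, not the specific parameterized model or loss-function training described in the patent.

```python
# Readout-error mitigation with a per-qubit (tensor-product) model: parameters
# grow linearly with qubit count (one 2x2 matrix per qubit), in contrast to the
# full 2^n x 2^n response matrix. Flip probabilities are made-up example values.

import numpy as np
from functools import reduce

n_qubits = 3
p0to1, p1to0 = 0.02, 0.05                      # hypothetical readout flip rates
A_single = np.array([[1 - p0to1, p1to0],       # A[measured, prepared]
                     [p0to1, 1 - p1to0]])
A_per_qubit = [A_single] * n_qubits

# The full model is the Kronecker product of the per-qubit matrices ...
A_full = reduce(np.kron, A_per_qubit)
# ... and its inverse factorizes, so only 2x2 blocks are ever inverted.
A_inv = reduce(np.kron, [np.linalg.inv(a) for a in A_per_qubit])

# Noisy measured distribution for a state that is ideally |000>.
ideal = np.zeros(2 ** n_qubits)
ideal[0] = 1.0
noisy = A_full @ ideal

mitigated = A_inv @ noisy
print(np.round(mitigated, 6))                  # recovers the ideal distribution
```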
  • Patent number: 11960436
    Abstract: A method of synchronizing system state data is provided. The method includes executing a first processor based on initial state data during an update cycle, wherein the initial state data represents a state of the system prior to initiation of the update cycle, detecting changes in state of the system by the first processor using sensors, the changes in state being added to a record of modified state data until a predefined progress position within the update cycle, designating the modified state data as next state data, based on reaching the predefined progress position within the update cycle, and transitioning from execution of the first processor based on the initial state data to execution of the first processor based on the next state data, based on completion of the update cycle.
    Type: Grant
    Filed: April 29, 2022
    Date of Patent: April 16, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Nadav Shlomo Ben-Amram, Netanel Hadad, Liran Biber
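The entry above describes a double-buffered state update: execution reads an immutable "initial" snapshot while sensor changes accumulate in a record, the record becomes the "next" state at a predefined progress position, and execution switches over when the cycle completes. Below is a minimal sketch of that bookkeeping; the dictionary state, the 80% cutoff, and the event list are illustrative assumptions, not Microsoft's implementation.

```python
# Double-buffered update cycle: accumulate sensor changes against an unchanging
# initial state, freeze them as "next" state at a progress cutoff, and transition
# to the next state once the cycle completes. All names/values are illustrative.

def run_update_cycle(initial_state: dict, sensor_events, progress_cutoff=0.8):
    modified = {}                                # record of modified state data
    next_state = None
    total = len(sensor_events)
    for i, (key, value) in enumerate(sensor_events):
        progress = (i + 1) / total
        if next_state is None and progress >= progress_cutoff:
            # Predefined progress position reached: designate the record as next state.
            next_state = {**initial_state, **modified}
        if next_state is None:
            modified[key] = value                # still within the accumulation window
        # else: later changes would roll into the following cycle (omitted here).
        # Execution during the cycle always reads initial_state, which never mutates.
    if next_state is None:                       # cutoff never reached (short cycle)
        next_state = {**initial_state, **modified}
    return next_state                            # cycle complete: transition to this state

state = {"door": "closed", "temp": 21}
events = [("temp", 22), ("door", "open"), ("temp", 23), ("temp", 24), ("temp", 25)]
print(run_update_cycle(state, events))
```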
  • Patent number: 11960417
    Abstract: Described are techniques including a method comprising detecting a deallocated Input/Output (I/O) queue associated with a first entity in a Non-Volatile Memory Express (NVMe) storage system. The method further comprises broadcasting an Asynchronous Event Request (AER) message indicating I/O queue availability based on the deallocated I/O queue. The method further comprises allocating, in response to the AER message, a new I/O queue to a second entity in the NVMe storage system.
    Type: Grant
    Filed: January 17, 2023
    Date of Patent: April 16, 2024
    Assignee: International Business Machines Corporation
    Inventors: Kushal S. Patel, Sarvesh S. Patel, Subhojit Roy
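A toy simulation of the flow in the entry above: when an I/O queue is deallocated, an availability notification (playing the role of the AER) is broadcast and a waiting entity allocates the freed queue. This models only the bookkeeping; real NVMe admin commands and AER semantics are not reproduced, and all class and method names are hypothetical.

```python
# Simplified model of queue reallocation driven by an AER-like broadcast.

class NvmeSubsystemModel:
    def __init__(self, max_io_queues: int):
        self.free_queues = list(range(max_io_queues))
        self.allocated = {}            # queue id -> entity name
        self.waiters = []              # entities waiting for a queue

    def allocate(self, entity: str):
        if self.free_queues:
            qid = self.free_queues.pop(0)
            self.allocated[qid] = entity
            return qid
        self.waiters.append(entity)    # no queue available; wait for a notification
        return None

    def deallocate(self, qid: int):
        del self.allocated[qid]
        self.free_queues.append(qid)
        self._broadcast_queue_available()   # AER-like availability broadcast

    def _broadcast_queue_available(self):
        if self.waiters and self.free_queues:
            entity = self.waiters.pop(0)
            qid = self.free_queues.pop(0)
            self.allocated[qid] = entity
            print(f"notification: queue {qid} reallocated to {entity}")

subsys = NvmeSubsystemModel(max_io_queues=1)
subsys.allocate("host-A")
subsys.allocate("host-B")      # must wait
subsys.deallocate(0)           # triggers the broadcast; host-B gets queue 0
```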
  • Patent number: 11960986
    Abstract: A neural network accelerator includes an operator that calculates a first operation result based on a first tiled input feature map and first tiled filter data, a quantizer that generates a quantization result by quantizing the first operation result based on a second bit width extended compared with a first bit width of the first tiled input feature map, a compressor that generates a partial sum by compressing the quantization result, and a decompressor that generates a second operation result by decompressing the partial sum, the operator calculates a third operation result based on a second tiled input feature map, second tiled filter data, and the second operation result, and an output feature map is generated based on the third operation result.
    Type: Grant
    Filed: September 14, 2022
    Date of Patent: April 16, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seokhyeong Kang, Yesung Kang, Sunghoon Kim, Yoonho Park
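The entry above describes accumulating tiled partial sums through a quantize/compress/decompress path. The sketch below is a simplified numerical analogue: a first tile's result is quantized at a wider bit width than the inputs, stored (compression/decompression reduced to identity here), and accumulated with the second tile's result. The bit widths and the uniform quantizer are assumptions, not the patented circuits.

```python
# Tiled partial-sum flow with an extended-bit-width quantizer (illustrative only).

import numpy as np

def quantize(x, bits):
    # Symmetric uniform quantizer to `bits` signed integer levels.
    scale = (2 ** (bits - 1) - 1) / max(np.max(np.abs(x)), 1e-12)
    return np.round(x * scale) / scale

rng = np.random.default_rng(0)
ifmap = rng.standard_normal(8).astype(np.float32)      # input feature map
weights = rng.standard_normal(8).astype(np.float32)    # filter data

# Tile the input feature map and the filter data into two halves.
x1, x2 = ifmap[:4], ifmap[4:]
w1, w2 = weights[:4], weights[4:]

partial = quantize(x1 @ w1, bits=16)   # partial sum at an extended bit width
stored = partial                        # "compress" then "decompress" (identity here)
out = quantize(x2 @ w2 + stored, bits=16)

print("tiled result  :", out)
print("direct result :", ifmap @ weights)
```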
  • Patent number: 11954562
    Abstract: In a general aspect, a quantum computing method is described. In some aspects, a control system in a quantum computing system assigns subsets of qubit devices in a quantum processor to respective cores. The control system identifies boundary qubit devices residing between the cores in the quantum processor and generates control sequences for each respective core. A signal delivery system in communication with the control system and the quantum processor receives control signals to execute the control sequences, and the control signals are applied to the respective cores in the quantum processor.
    Type: Grant
    Filed: December 13, 2021
    Date of Patent: April 9, 2024
    Assignee: Rigetti & Co, LLC
    Inventors: Matthew J. Reagor, William J. Zeng, Michael Justin Gerchick Scheer, Benjamin Jacob Bloom, Nikolas Anton Tezak, Nicolas Didier, Christopher Butler Osborn, Chad Tyler Rigetti
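A small sketch of the partitioning step in the entry above: qubit devices on a coupling graph are assigned to cores, and "boundary" qubit devices are those with a coupling to a qubit in a different core. The 2x4 lattice and the two-core split below are illustrative assumptions, not Rigetti's processor topology.

```python
# Assign qubit devices to cores and identify boundary devices between the cores.

couplings = [(0, 1), (1, 2), (2, 3),          # a 2x4 grid of 8 qubit devices
             (4, 5), (5, 6), (6, 7),
             (0, 4), (1, 5), (2, 6), (3, 7)]

core_of = {q: (0 if q in (0, 1, 4, 5) else 1) for q in range(8)}   # two cores

boundary = sorted({q for a, b in couplings
                   for q in (a, b)
                   if core_of[a] != core_of[b]})

print("core assignment:", core_of)
print("boundary qubit devices:", boundary)
# Control sequences would then be generated per core ({0,1,4,5} and {2,3,6,7}),
# with the boundary devices treated specially.
```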
  • Patent number: 11954494
    Abstract: A system for generating a cluster combination instruction set using machine learning, the system comprising a computing device configured to generate, as a function of a received cluster, a plurality of physical transfer paths from a distinct plurality of initiation points to a single locale, wherein the cluster comprises a cluster of a plurality of alimentary elements, determine, as a function of the plurality of physical transfer paths, a physical transfer pattern, generate an objective function of the plurality of physical transfer paths as a function of a plurality of constraints, select a physical transfer path that minimizes the objective function, determine a cluster combination instruction set for the physical transfer pattern to the single destination, and generate a representation of the cluster combination instruction set via a graphical user interface to at least a physical transfer apparatus and the plurality of alimentary element originators.
    Type: Grant
    Filed: February 3, 2022
    Date of Patent: April 9, 2024
    Assignee: KPN INNOVATIONS, LLC.
    Inventor: Kenneth Neumann
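The path-selection step in the entry above reduces to evaluating an objective function over candidate transfer paths and choosing the minimizer. The sketch below shows that step only; the weights, constraint terms, and distance/time figures are made up for illustration.

```python
# Select the physical transfer path that minimizes a constraint-weighted objective.

paths = [
    {"origin": "kitchen-A", "distance_km": 3.2, "minutes": 14, "batches": 2},
    {"origin": "kitchen-B", "distance_km": 5.1, "minutes": 11, "batches": 1},
    {"origin": "kitchen-C", "distance_km": 2.4, "minutes": 19, "batches": 3},
]

def objective(path, w_dist=1.0, w_time=0.5, w_batch=2.0):
    # Weighted sum of constraint terms; lower is better.
    return (w_dist * path["distance_km"]
            + w_time * path["minutes"]
            + w_batch * path["batches"])

best = min(paths, key=objective)
print("selected path:", best["origin"], "cost =", round(objective(best), 2))
```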
  • Patent number: 11947477
    Abstract: A system includes a display subsystem. The display subsystem includes a shared buffer having allocated portions, each allocated to one of a plurality of display threads, each display thread associated with a display peripheral. The display subsystem also includes a direct memory access (DMA) engine configured to receive a request from a main processor to deallocate an amount of space from a first allocated portion associated with a first display thread. In response to receiving the request, the DMA engine deallocates the amount of space from the first allocated portion and shifts the allocated portions of at least some of other display threads to maintain contiguity of the allocated portions and concatenate free space at an end of the shared buffer.
    Type: Grant
    Filed: August 15, 2022
    Date of Patent: April 2, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Anish Reghunath, Brian Chae, Jay Scott Salinger, Chunheng Luo
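A minimal model of the buffer management in the entry above: the shared buffer holds one contiguous allocation per display thread; deallocating space from one thread's portion shifts the later portions down so the allocations stay contiguous and all free space is concatenated at the end. The thread names and sizes are illustrative, and the DMA hardware is reduced to list manipulation.

```python
# Shrink one display thread's portion and repack the rest back-to-back so the
# freed space ends up concatenated at the end of the shared buffer.

def deallocate(portions, thread, amount):
    """portions: ordered (thread_name, size) pairs describing the shared buffer."""
    sizes = {name: size for name, size in portions}
    sizes[thread] -= amount
    repacked, offset = [], 0
    for name, _ in portions:
        repacked.append({"thread": name, "offset": offset, "size": sizes[name]})
        offset += sizes[name]
    return repacked, offset            # `offset` is where the free space begins

portions = [("display0", 4096), ("display1", 8192), ("display2", 2048)]
layout, free_start = deallocate(portions, "display0", 1024)
for entry in layout:
    print(entry)
print("free space starts at byte", free_start)
```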
  • Patent number: 11940934
    Abstract: An accelerator is disclosed. A circuit may process a data to produce a processed data. A first tier storage may include a first capacity and a first latency. A second tier storage may include a second capacity and a second latency. The second capacity may be larger than the first capacity, and the second latency may be slower than the first latency. A bus may be used to transfer at least one of the data or the processed data between the first tier storage and the second tier storage.
    Type: Grant
    Filed: January 27, 2022
    Date of Patent: March 26, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Marie Mai Nguyen, Rekha Pitchumani, Zongwang Li, Yang Seok Ki, Krishna Teja Malladi
  • Patent number: 11941405
    Abstract: An example device includes a plurality of computational memory banks. Each computational memory bank of the plurality of computational memory banks includes an array of memory units and a plurality of processing elements connected to the array of memory units. The device further includes a plurality of single instruction, multiple data (SIMD) controllers. Each SIMD controller of the plurality of SIMD controllers is contained within at least one computational memory bank of the plurality of computational memory banks. Each SIMD controller is to provide instructions to the at least one computational memory bank.
    Type: Grant
    Filed: July 27, 2023
    Date of Patent: March 26, 2024
    Assignee: UNTETHER AI CORPORATION
    Inventors: William Martin Snelgrove, Darrick John Wiebe
  • Patent number: 11929940
    Abstract: A circuit and corresponding method perform resource arbitration. The circuit comprises a pending arbiter (PA) that outputs a PA selection for accessing a resource. The PA selection is based on PA input. The PA input represents respective pending-state of requesters of the resource. The circuit further comprises a valid arbiter (VA) that outputs a VA selection for accessing the resource. The VA selection is based on VA input. The VA input represents respective valid-state of the requesters. The circuit performs a validity check on the PA selection output. The circuit outputs a final selection for accessing the resource by selecting, based on the validity check performed, the PA selection output or VA selection output. The circuit addresses arbitration fairness issues that may result when multiple requesters are arbitrating to be selected for access to a shared resource and such requesters require a credit (token) to be eligible for arbitration.
    Type: Grant
    Filed: September 14, 2022
    Date of Patent: March 12, 2024
    Assignee: Marvell Asia Pte Ltd
    Inventors: Joseph Featherston, Aadeetya Shreedhar
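A behavioral sketch of the two-arbiter scheme in the entry above: a pending arbiter picks among requesters with pending requests, the pick is validity-checked (does the requester hold a credit?), and on failure the valid arbiter's pick among credit-holding requesters is used instead. The round-robin order and the credit rule are illustrative assumptions about details the abstract does not specify.

```python
# Pending arbiter + valid arbiter with a validity check on the PA selection.

def round_robin_pick(mask, last):
    n = len(mask)
    for i in range(1, n + 1):
        idx = (last + i) % n
        if mask[idx]:
            return idx
    return None

def arbitrate(pending, valid, last_grant):
    pa_pick = round_robin_pick(pending, last_grant)            # pending arbiter
    va_pick = round_robin_pick([p and v for p, v in zip(pending, valid)],
                               last_grant)                     # valid arbiter
    # Validity check on the PA selection; fall back to the VA selection if it fails.
    if pa_pick is not None and valid[pa_pick]:
        return pa_pick
    return va_pick

pending = [True, True, False, True]    # requesters with a pending request
valid   = [False, True, False, True]   # requesters holding a credit (token)
print("final selection:", arbitrate(pending, valid, last_grant=0))
```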
  • Patent number: 11915742
    Abstract: A wafer-on-wafer formed memory and logic device can enable high bandwidth transmission of data directly between a memory die and a logic die. A logic die that is bonded to a memory die via a wafer-on-wafer bonding process can receive signals indicative of a genetic sequence from the memory die and through a wafer-on-wafer bond. The logic die can also perform a genome annotation logic operation to attach biological information to the genetic sequence. An annotated genetic sequence can be provided as an output.
    Type: Grant
    Filed: August 10, 2022
    Date of Patent: February 27, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Sean S. Eilert, Kunal R. Parekh, Aliasger T. Zaidy, Glen E. Hush
  • Patent number: 11915147
    Abstract: Techniques that facilitate model support in deep learning are provided. In one example, a system includes a graphics processing unit and a central processing unit memory. The graphics processing unit processes data to train a deep neural network. The central processing unit memory stores a portion of the data to train the deep neural network. The graphics processing unit provides, during a forward pass process of the deep neural network that traverses through a set of layers for the deep neural network from a first layer of the set of layers to a last layer of the set of layers that provides a set of outputs for the deep neural network, input data for a layer from the set of layers for the deep neural network to the central processing unit memory.
    Type: Grant
    Filed: October 20, 2022
    Date of Patent: February 27, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Minsik Cho, Ulrich Alfons Finkler, Vladimir Zolotov, David S. Kung
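The entry above centers on moving each layer's input out of GPU memory into CPU memory during the forward pass so the GPU holds only what the current layer needs. The sketch below mimics that idea with plain Python containers standing in for device memory; real frameworks would use actual device-to-host transfers, and the layer functions here are arbitrary examples.

```python
# Toy analogue of offloading per-layer inputs to CPU memory during a forward pass.

gpu_memory, cpu_memory = {}, {}

def forward(layers, x):
    for i, layer in enumerate(layers):
        cpu_memory[i] = list(x)                 # offload this layer's input to CPU memory
        gpu_memory["activation"] = [layer(v) for v in x]   # only the current activation stays "on GPU"
        x = gpu_memory["activation"]
    return x

layers = [lambda v: v * 2, lambda v: v + 1, lambda v: v ** 2]
print("output:", forward(layers, [1.0, 2.0]))
print("inputs kept in CPU memory:", cpu_memory)
```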
  • Patent number: 11902706
    Abstract: A method for transmitting high bandwidth camera data through SerDes links is provided. The method includes steps of: calculating transmission bandwidth required for transmitting image data, wherein the image data is obtained by a high bandwidth camera; determining a maximum bandwidth capacity of each SerDes link of a plurality of SerDes links; cutting the image data into a plurality of sub images according to the transmission bandwidth and the maximum bandwidth capacity of each SerDes link; assigning each sub image to a sub image transmission area in a corresponding SerDes link, each SerDes link containing the sub image transmission area and a sub image reception area; acquiring the plurality of sub images transmitted in the plurality of the SerDes links from the corresponding sub image reception area; and splicing the plurality of sub images into the image data.
    Type: Grant
    Filed: March 30, 2022
    Date of Patent: February 13, 2024
    Assignee: SHENZHEN ANTU AUTONOMOUS DRIVING TECHNOLOGIES LTD.
    Inventor: Jianxiong Xiao
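The cutting step in the entry above is essentially bandwidth arithmetic: compare the required transmission bandwidth against the per-link capacity to decide how many SerDes links (and sub-images) are needed, then cut the image into that many bands. The resolution, frame rate, and 6 Gbit/s link capacity below are example numbers, not values from the patent.

```python
# Decide how many SerDes links are needed and cut the image into row bands.

import math

width, height, bits_per_pixel, fps = 3840, 2160, 24, 60
required_bw = width * height * bits_per_pixel * fps          # bits per second
link_capacity = 6_000_000_000                                # assumed 6 Gbit/s per link

n_links = math.ceil(required_bw / link_capacity)
rows_per_sub = math.ceil(height / n_links)

sub_images = []
for i in range(n_links):
    top = i * rows_per_sub
    bottom = min(height, top + rows_per_sub)
    sub_images.append({"link": i, "rows": (top, bottom)})

print(f"required {required_bw / 1e9:.2f} Gbit/s over {n_links} links")
for s in sub_images:
    print(s)   # each sub-image goes to one link's transmission area
# The receiver reads each link's reception area and splices the bands back together.
```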
  • Patent number: 11900122
    Abstract: Methods and parallel processing units for avoiding inter-pipeline data hazards identified at compile time. For each identified inter-pipeline data hazard the primary instruction and secondary instruction(s) thereof are identified as such and are linked by a counter which is used to track that inter-pipeline data hazard. When a primary instruction is output by the instruction decoder for execution, the value of the counter associated therewith is adjusted to indicate that there is a hazard related to the primary instruction, and when the primary instruction has been resolved by one of multiple parallel processing pipelines, the value of the counter associated therewith is adjusted to indicate that the hazard related to the primary instruction has been resolved.
    Type: Grant
    Filed: July 10, 2023
    Date of Patent: February 13, 2024
    Assignee: Imagination Technologies Limited
    Inventors: Luca Iuliano, Simon Nield, Yoong-Chert Foo, Ollie Mower
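The counter mechanism in the entry above maps naturally onto a small software analogue: issuing a primary instruction bumps the shared counter, resolution decrements it, and a dependent secondary instruction may proceed only when the counter shows no outstanding hazard. The patent implements this in hardware; the function names below are illustrative.

```python
# Counter-based tracking of compile-time-identified inter-pipeline hazards.

from collections import defaultdict

counters = defaultdict(int)    # one counter per identified hazard

def issue_primary(hazard_id):
    counters[hazard_id] += 1            # hazard now outstanding

def resolve_primary(hazard_id):
    counters[hazard_id] -= 1            # a pipeline finished the primary instruction

def can_issue_secondary(hazard_id):
    return counters[hazard_id] == 0     # safe only when no hazard is outstanding

issue_primary("h0")
print(can_issue_secondary("h0"))   # False: primary still in flight
resolve_primary("h0")
print(can_issue_secondary("h0"))   # True: hazard resolved
```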
  • Patent number: 11893476
    Abstract: According to an embodiment, an inference system includes a recurrent neural network circuit, an inference neural network circuit, and a control circuit. The recurrent neural network circuit receives M input signals and outputs N intermediate signals, where M is an integer of 2 or more and N is an integer of 2 or more. The inference neural network circuit receives the N intermediate signals and outputs L output signals, where L is an integer of 2 or more. The control circuit adjusts a plurality of coefficients that are set to the recurrent neural network circuit and adjusts a plurality of coefficients that are set to the inference neural network circuit. The control circuit adjusts the coefficients set to the recurrent neural network circuit according to a total delay time period from timing for applying the M input signals until timing for firing the L output signals.
    Type: Grant
    Filed: November 2, 2022
    Date of Patent: February 6, 2024
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Takao Marukame, Kumiko Nomura, Yoshifumi Nishi, Koichi Mizushima
  • Patent number: 11886379
    Abstract: A computing system includes a quantum processor with qubits, a classical memory including a quantum program defining a plurality of instructions in a source language, and a classical processor configured to: (i) receive a circuit of gates representing a quantum program for a variational algorithm in which computation is interleaved with compilation; (ii) identify a plurality of blocks, each block includes a subcircuit of gates, leaving one or more remainder subcircuits of the circuit of gates outside of the plurality of blocks; (iii) pre-compile each block of the plurality of blocks with a pulse generation program to generate a plurality of pre-compiled blocks including control pulses configured to perform the associated block on the quantum processor; and (iv) iteratively execute the quantum program using the pre-compiled blocks as static during runtime and recompiling the one or more remainder subcircuits on the classical processor at each iteration of execution.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: January 30, 2024
    Assignee: THE UNIVERSITY OF CHICAGO
    Inventors: Pranav Gokhale, Yongshan Ding, Thomas Propson, Frederic T. Chong
  • Patent number: 11871146
    Abstract: A video processor is configured to perform the following steps: receiving a series of input frames; calculating a buffer stage value according to the series of input frames, wherein the buffer stage value corresponds to a status of the input frames stored in a frame buffer of the video processor; and selecting a frame set from the input frames stored in the frame buffer for generating an interpolated frame as an output frame to be output by the video processor according to the buffer stage value.
    Type: Grant
    Filed: August 16, 2022
    Date of Patent: January 9, 2024
    Assignee: NOVATEK Microelectronics Corp.
    Inventors: Chih Chang, I-Feng Lin, Hsiao-En Chang
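A small sketch of the selection logic in the entry above: a buffer stage value summarizing how many input frames are queued decides which stored frames feed the interpolated output frame. The three-level policy and the midpoint blend below are illustrative assumptions, not Novatek's interpolation algorithm.

```python
# Pick the frame set for interpolation according to the buffer stage value.

from collections import deque

def output_frame(frame_buffer: deque):
    stage = len(frame_buffer)                        # buffer stage value
    if stage >= 3:
        a, b = frame_buffer[-3], frame_buffer[-2]    # interpolate deeper in the queue
    elif stage == 2:
        a, b = frame_buffer[-2], frame_buffer[-1]
    else:
        return frame_buffer[-1]                      # too few frames: repeat the latest
    return [(x + y) / 2 for x, y in zip(a, b)]       # simple midpoint "interpolation"

buf = deque(maxlen=4)
for frame in ([0, 0], [2, 4], [4, 8]):               # tiny two-pixel frames
    buf.append(frame)
    print("stage", len(buf), "->", output_frame(buf))
```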