Patents Examined by Cheng-Yuan Tseng
  • Patent number: 11452208
    Abstract: Example implementations relate to an electronic device packaged on a wing board. For example, an implementation includes a base board having a planar signal interface to couple in parallel to a signal interface segment of a system board. The example implementation also includes a plurality of wing boards to scale in a direction perpendicular to a plane of the base board. An electronic device is packaged on each of the wing boards. A flexible circuit flexibly links at least one of the wing boards to the base board and has a signal path to communicatively couple the planar signal interface and an electronic device packaged on the wing board.
    Type: Grant
    Filed: February 24, 2017
    Date of Patent: September 20, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Kevin Leigh, John Norton, George D. Megason
  • Patent number: 11449440
    Abstract: An apparatus includes at least one processing device, with the at least one processing device comprising a processor and a memory coupled to the processor. The at least one processing device is configured to generate a data copy offload command to offload a data copy operation from a host device to a storage system, the command comprising a multi-protocol indicator that specifies that data is to be copied from a source logical storage device that utilizes a first access protocol to a destination logical storage device that utilizes a second access protocol different than the first access protocol, and to send the data copy offload command from the host device to the storage system over a network for performance of the offloaded data copy operation in the storage system in accordance with the command. The first and second access protocols illustratively comprise respective SCSI and NVMe access protocols.
    Type: Grant
    Filed: January 19, 2021
    Date of Patent: September 20, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Amit Pundalik Anchi, Rimpesh Patel
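    Illustrative sketch (not from the patent): a minimal Python model of a multi-protocol copy-offload command. The CopyOffloadCommand fields and the Protocol enum are invented names for illustration; they are not the patent's or any SCSI/NVMe specification's actual structures.
      # Hypothetical command structure; the multi-protocol indicator is set
      # when source and destination use different access protocols.
      from dataclasses import dataclass
      from enum import Enum

      class Protocol(Enum):
          SCSI = "scsi"
          NVME = "nvme"

      @dataclass
      class CopyOffloadCommand:
          src_device: str            # source logical storage device
          dst_device: str            # destination logical storage device
          src_protocol: Protocol    # access protocol of the source
          dst_protocol: Protocol    # access protocol of the destination
          length: int                # bytes to copy

          @property
          def multi_protocol(self) -> bool:
              # The multi-protocol indicator described in the abstract.
              return self.src_protocol != self.dst_protocol

      # The host builds the command and ships it to the storage system,
      # which performs the copy itself instead of routing data via the host.
      cmd = CopyOffloadCommand("lun0", "ns1", Protocol.SCSI, Protocol.NVME, 4096)
      assert cmd.multi_protocol   # SCSI -> NVMe copy offloaded as one command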
  • Patent number: 11449737
    Abstract: A model calculation unit for calculating a multilayer perceptron model, the model calculation unit being designed in hardware and being hardwired, including: a processor core; a memory; and a DMA unit, which is designed to successively instruct the processor core to calculate a neuron layer, in each case based on input variables of an assigned input variable vector, and to store the respectively resulting output variables of an output variable vector in an assigned data memory section, the data memory section for the input variable vector assigned to at least one of the neuron layers at least partially including in each case the data memory sections of at least two of the output variable vectors of two different neuron layers.
    Type: Grant
    Filed: September 4, 2017
    Date of Patent: September 20, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Andre Guntoro, Heiner Markert, Martin Schiegg
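    Illustrative sketch (assumed layout, invented names): a flat data memory in which the input section of the third layer overlaps the output sections of the two preceding neuron layers, so their outputs are consumed in place, as the abstract describes.
      import numpy as np

      mem = np.zeros(16)                      # the shared data memory
      sections = {                            # (start, end) offsets per vector
          "in0":  (0, 4),                     # input vector of layer 0
          "out0": (4, 8),                     # output vector of layer 0
          "out1": (8, 12),                    # output vector of layer 1
          "in2":  (4, 12),                    # layer 2 input spans out0 + out1
          "out2": (12, 16),
      }

      def layer(x, w, b):
          # One perceptron layer: weighted sum plus bias, ReLU activation.
          return np.maximum(w @ x + b, 0.0)

      rng = np.random.default_rng(0)
      mem[slice(*sections["in0"])] = rng.normal(size=4)

      # A DMA-like sequencer instructs the core layer by layer, pointing it
      # at the input and output sections for each step.
      for src, dst, n_out in [("in0", "out0", 4), ("out0", "out1", 4), ("in2", "out2", 4)]:
          x = mem[slice(*sections[src])]
          w = rng.normal(size=(n_out, x.size))
          b = rng.normal(size=n_out)
          mem[slice(*sections[dst])] = layer(x, w, b)

      print(mem[slice(*sections["out2"])])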
  • Patent number: 11442727
    Abstract: An electronic device includes a processor, a branch predictor in the processor, and a predictor controller in the processor. The branch predictor includes multiple prediction functional blocks, each prediction functional block configured for generating predictions for control transfer instructions (CTIs) in program code based on respective prediction information, the branch predictor configured to select, from among predictions generated by the prediction functional blocks for each CTI, a selected prediction to be used for that CTI. The predictor controller keeps a record of prediction functional blocks from which the branch predictor previously selected predictions for CTIs. The predictor controller uses information from the record for controlling which prediction functional blocks are used by the branch predictor for generating predictions for CTIs.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: September 13, 2022
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Varun Agrawal, John Kalamatianos
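    Hedged sketch of the selection-record idea: the controller remembers which prediction block previously won for a branch and consults only that block next time. The two blocks here (a bimodal table and an always-taken block) and all names are illustrative stand-ins, not the patent's actual predictors.
      class BimodalBlock:
          def __init__(self):
              self.counters = {}                       # pc -> 2-bit counter
          def predict(self, pc):
              return self.counters.get(pc, 1) >= 2     # taken if counter >= 2
          def update(self, pc, taken):
              c = self.counters.get(pc, 1)
              self.counters[pc] = min(c + 1, 3) if taken else max(c - 1, 0)

      class AlwaysTakenBlock:
          def predict(self, pc):
              return True
          def update(self, pc, taken):
              pass

      blocks = {"bimodal": BimodalBlock(), "always": AlwaysTakenBlock()}
      record = {}    # controller's record: pc -> block selected last time

      def predict(pc):
          # If the record remembers a block for this pc, consult only that
          # block, saving the work of running every prediction block.
          if pc in record:
              name = record[pc]
              return name, blocks[name].predict(pc)
          # Otherwise run all blocks, let the selector pick one (bimodal
          # wins by convention here), and record the choice.
          preds = {name: b.predict(pc) for name, b in blocks.items()}
          chosen = "bimodal"
          record[pc] = chosen
          return chosen, preds[chosen]

      name, taken = predict(0x400123)
      blocks[name].update(0x400123, taken=True)   # train on the real outcome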
  • Patent number: 11436171
    Abstract: A system includes a display subsystem. The display subsystem includes a shared buffer having allocated portions, each allocated to one of a plurality of display threads, each display thread associated with a display peripheral. The display subsystem also includes a direct memory access (DMA) engine configured to receive a request from a main processor to deallocate an amount of space from a first allocated portion associated with a first display thread. In response to receiving the request, the DMA engine deallocates the amount of space from the first allocated portion and shifts the allocated portions of at least some of the other display threads to maintain contiguity of the allocated portions and concatenate free space at an end of the shared buffer.
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: September 6, 2022
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Anish Reghunath, Brian Chae, Jay Scott Salinger, Chunheng Luo
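    Illustrative sketch of the shift-on-deallocate scheme: the shared buffer is modeled as a bytearray and the allocation table as an ordered list. All names are invented for illustration.
      BUF_SIZE = 64
      buf = bytearray(BUF_SIZE)
      # Ordered allocation table: one contiguous region per display thread,
      # stored as [name, start, size].
      allocs = [["thread0", 0, 24], ["thread1", 24, 16], ["thread2", 40, 16]]

      def deallocate(thread, amount):
          # Shrink `thread`'s region by `amount` bytes, then slide every
          # later region down so the regions stay contiguous and all free
          # space is concatenated at the end of the shared buffer.
          for i, (name, start, size) in enumerate(allocs):
              if name == thread:
                  allocs[i][2] = size - amount
                  shift_from = start + size - amount
                  break
          else:
              raise KeyError(thread)
          end_used = allocs[-1][1] + allocs[-1][2]
          # Move the payload of the later regions down by `amount`.
          buf[shift_from:end_used - amount] = buf[shift_from + amount:end_used]
          for rec in allocs[i + 1:]:
              rec[1] -= amount

      deallocate("thread0", 8)
      print(allocs)   # [['thread0', 0, 16], ['thread1', 16, 16], ['thread2', 32, 16]]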
  • Patent number: 11436016
    Abstract: A technique for determining whether a register value should be written to an operand cache or whether the register value should remain in and not be evicted from the operand cache is provided. The technique includes executing an instruction that accesses an operand that comprises the register value, performing one or both of a lookahead technique and a prediction technique to determine whether the register value should be written to an operand cache or whether the register value should remain in and not be evicted from the operand cache, and based on the determining, updating the operand cache.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: September 6, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Anthony T. Gutierrez, Bradford M. Beckmann, Marcus Nathaniel Chow
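    Minimal sketch of the lookahead technique under a simplifying assumption: a destination register is kept in the operand cache only if a peek at the next few instructions shows it will be read again. The instruction encoding and window size are invented for illustration.
      LOOKAHEAD = 4

      def should_cache(reg, pos, instrs):
          # Scan the lookahead window for another read of `reg`.
          window = instrs[pos + 1 : pos + 1 + LOOKAHEAD]
          return any(reg in ins["reads"] for ins in window)

      # Each toy instruction names the registers it reads and writes.
      program = [
          {"op": "mul", "reads": ["r1", "r2"], "writes": ["r3"]},
          {"op": "add", "reads": ["r3", "r4"], "writes": ["r5"]},
          {"op": "mov", "reads": ["r0"],       "writes": ["r7"]},
          {"op": "st",  "reads": ["r5"],       "writes": []},
      ]

      operand_cache = set()
      for pos, ins in enumerate(program):
          for reg in ins["writes"]:
              if should_cache(reg, pos, program):
                  operand_cache.add(reg)      # reused soon: keep it close
              else:
                  operand_cache.discard(reg)  # no near reuse: allow eviction

      print(operand_cache)   # {'r3', 'r5'}; r7 sees no reuse in the window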
  • Patent number: 11436486
    Abstract: Systems, apparatuses, and methods for optimizing neural network training with a first-in, last-out (FILO) buffer are disclosed. A processor executes a training run of a neural network implementation by performing multiple passes and adjusting weights of the neural network layers on each pass. Each training phase includes a forward pass and a backward pass. During the forward pass, each layer, in order from first layer to last layer, stores its weights in the FILO buffer. An error is calculated for the neural network at the end of the forward pass. Then, during the backward pass, each layer, in order from last layer to first layer, retrieves the corresponding weights from the FILO buffer. Gradients are calculated based on the error so as to update the weights of the layer for the next pass through the neural network.
    Type: Grant
    Filed: August 19, 2019
    Date of Patent: September 6, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Greg Sadowski
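    Sketch of the FILO (stack) flow on a toy linear network: the forward pass pushes each layer's weights first-to-last, and the backward pass pops them last-to-first. The SGD math is standard and illustrative, not taken from the patent.
      import numpy as np

      rng = np.random.default_rng(0)
      weights = [rng.normal(size=(4, 4)) * 0.1 for _ in range(3)]

      def train_step(x, target, lr=0.01):
          filo = []                       # first-in, last-out weight buffer
          acts = [x]
          # Forward pass: each layer, first to last, pushes its weights
          # (paired here with its input activation, which backprop needs).
          for w in weights:
              filo.append((w, acts[-1]))
              acts.append(w @ acts[-1])   # linear layers keep gradients simple
          err = acts[-1] - target         # error computed at end of forward
          # Backward pass: layers pop in last-to-first order (FILO), so each
          # layer's weights come back exactly when its gradient is computed.
          grad = err
          for i in reversed(range(len(weights))):
              w, a = filo.pop()
              weights[i] = w - lr * np.outer(grad, a)   # dL/dW, linear layer
              grad = w.T @ grad                          # propagate backward
          return float((err ** 2).sum())

      x, t = rng.normal(size=4), rng.normal(size=4)
      for _ in range(5):
          print(train_step(x, t))         # loss decreases across passes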
  • Patent number: 11423290
    Abstract: A semiconductor device includes an operation control signal generation circuit and a neural network circuit. The operation control signal generation circuit generates an arithmetic signal and a core read signal based on a command. The neural network circuit outputs first core data and second core data from a core region based on the core read signal, a cell block selection signal, and a cell selection signal. The neural network circuit also performs an arithmetic operation of the first and second core data based on the arithmetic signal to generate arithmetic result data.
    Type: Grant
    Filed: December 13, 2019
    Date of Patent: August 23, 2022
    Assignee: SK hynix Inc.
    Inventor: Choung Ki Song
  • Patent number: 11422812
    Abstract: Systems, apparatuses, and methods for implementing as part of a processor pipeline a reprogrammable execution unit capable of executing specialized instructions are disclosed. A processor includes one or more reprogrammable execution units which can be programmed to execute different types of customized instructions. When the processor loads a program for execution, the processor loads a bitfile associated with the program. The processor programs a reprogrammable execution unit with the bitfile so that the reprogrammable execution unit is capable of executing specialized instructions associated with the program. During execution, a dispatch unit dispatches the specialized instructions to the reprogrammable execution unit for execution.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: August 23, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Andrew G. Kegel
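    Hypothetical sketch of the load-then-dispatch flow. The "bitfile" is modeled as a plain mapping from specialized opcodes to Python callables, purely to illustrate the control flow; all names are invented.
      class ReprogrammableUnit:
          def __init__(self):
              self.ops = {}
          def program(self, bitfile):
              # Programming the unit teaches it the program's custom opcodes.
              self.ops = dict(bitfile)
          def execute(self, opcode, *args):
              return self.ops[opcode](*args)

      class Dispatcher:
          def __init__(self, unit):
              self.unit = unit
              self.native = {"add": lambda a, b: a + b}
          def dispatch(self, opcode, *args):
              if opcode in self.native:
                  return self.native[opcode](*args)       # ordinary pipeline
              return self.unit.execute(opcode, *args)     # specialized unit

      # Loading a program also loads its associated bitfile into the unit.
      unit = ReprogrammableUnit()
      unit.program({"fused_madd": lambda a, b, c: a * b + c})
      cpu = Dispatcher(unit)
      print(cpu.dispatch("add", 2, 3))            # 5, regular execution unit
      print(cpu.dispatch("fused_madd", 2, 3, 4))  # 10, reprogrammable unit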
  • Patent number: 11423292
    Abstract: A convolutional neural-network calculating apparatus including a bidirectional-output operation module and a data scheduler is provided. The bidirectional-output operation module includes a number of bidirectional-output operators, a number of row-output accumulators, and a number of column-output accumulators. Each bidirectional-output operator has a row-output port and a column-output port. The row-output accumulators are coupled to the row-output ports, and the column-output accumulators are coupled to the corresponding column-output ports. The data scheduler is configured to provide a number of values of an input data and a number of convolution values of the convolution kernels to the bidirectional-output operators. In a first operation mode, the bidirectional-output operators output operation results to the corresponding column-output accumulators through the column-output ports.
    Type: Grant
    Filed: April 28, 2020
    Date of Patent: August 23, 2022
    Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Kuo-Chiang Chang, Shien-Chun Luo
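    Sketch (invented structure) of a single bidirectional-output operator: one multiply-accumulate cell that emits its result on a row port or a column port depending on the operation mode, feeding the matching accumulator.
      class BidirectionalOperator:
          def __init__(self):
              self.row_acc = 0    # row-output accumulator
              self.col_acc = 0    # column-output accumulator
          def operate(self, value, kernel, mode):
              product = value * kernel
              if mode == "column":           # the abstract's first mode
                  self.col_acc += product    # result leaves via column port
              else:
                  self.row_acc += product    # result leaves via row port

      op = BidirectionalOperator()
      for v, k in [(1, 2), (3, 4)]:          # scheduler supplies data/kernels
          op.operate(v, k, mode="column")
      print(op.col_acc)   # 14, accumulated by the column-output accumulator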
  • Patent number: 11410017
    Abstract: Embodiments of the invention provide a neural network comprising multiple functional neural core circuits, and a dynamically reconfigurable switch interconnect between the functional neural core circuits. The interconnect comprises multiple connectivity neural core circuits. Each functional neural core circuit comprises a first and a second core module. Each core module comprises a plurality of electronic neurons, a plurality of incoming electronic axons, and multiple electronic synapses interconnecting the incoming axons to the neurons. Each neuron has a corresponding outgoing electronic axon. In one embodiment, zero or more sets of connectivity neural core circuits interconnect outgoing axons in a functional neural core circuit to incoming axons in the same functional neural core circuit.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: August 9, 2022
    Assignee: International Business Machines Corporation
    Inventor: Dharmendra S. Modha
  • Patent number: 11409540
    Abstract: A device architecture includes a spatially reconfigurable array of processors, such as configurable units of a CGRA, having spare elements, and a parameter store on the device which stores parameters that tag one or more elements as unusable. Technologies are described which change the pattern of placement of configuration data, in dependence on the tagged elements. As a result, a spatially reconfigurable array having unusable elements can be repaired.
    Type: Grant
    Filed: July 16, 2021
    Date of Patent: August 9, 2022
    Assignee: SambaNova Systems, Inc.
    Inventors: Gregory F. Grohoski, Manish K. Shah, Kin Hing Leung
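    Illustrative sketch: a 1-D array of physical compute elements with one spare; the parameter store tags broken elements, and placement skips them so the configuration still fits. Names and the 1-D layout are invented simplifications.
      PHYSICAL = 8            # physical elements, including spares
      LOGICAL = 7             # elements the configuration actually needs

      param_store = {"unusable": {3}}    # element 3 is tagged as unusable

      def place(n_logical):
          # Map each logical element to the next usable physical element,
          # changing the placement pattern in dependence on the tags.
          mapping, phys = {}, 0
          for log in range(n_logical):
              while phys in param_store["unusable"]:
                  phys += 1              # skip tagged elements
              if phys >= PHYSICAL:
                  raise RuntimeError("not enough usable elements")
              mapping[log] = phys
              phys += 1
          return mapping

      print(place(LOGICAL))   # {0: 0, 1: 1, 2: 2, 3: 4, 4: 5, 5: 6, 6: 7}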
  • Patent number: 11409673
    Abstract: Examples include a method of managing storage for triggered operations. The method includes receiving a request to allocate a triggered operation; if there is a free triggered operation, allocating the free triggered operation; if there is no free triggered operation, recovering one or more fired triggered operations, freeing one or more of the recovered triggered operations, and allocating one of the freed triggered operations; configuring the allocated triggered operation; and storing the configured triggered operation in a cache on an input/output (I/O) device for subsequent asynchronous execution of the configured triggered operation.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: August 9, 2022
    Assignee: Intel Corporation
    Inventors: Andrew Friedley, Sayantan Sur, Ravindra Babu Ganapathi, Travis Hamilton, Keith D. Underwood
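    Direct-from-abstract sketch of the allocate path, with invented names; "fired" triggered operations are ones that have already executed and can be recovered and reused.
      free_ops = []                 # pool of free triggered operations
      fired_ops = ["op_a", "op_b"]  # fired operations that are reclaimable

      def allocate_triggered_op():
          if not free_ops:
              if not fired_ops:
                  raise RuntimeError("no triggered operations available")
              # Recover fired operations and return them to the free pool.
              free_ops.extend(fired_ops)
              fired_ops.clear()
          return free_ops.pop()

      op = allocate_triggered_op()         # recovers op_a/op_b, hands one out
      config = {"op": op, "threshold": 4}  # configure the allocated operation
      io_device_cache = [config]           # stored for later async execution
      print(io_device_cache)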
  • Patent number: 11405051
    Abstract: An exemplary artificial intelligence/machine learning hardware computing environment having an exemplary DNN module cooperating with one or more memory components can perform data sharing and distribution as well as reuse of buffer data to reduce the number of memory component reads/writes, thereby enhancing overall hardware performance and reducing power consumption. Illustratively, data from a cooperating memory component is read according to a selected operation of the exemplary hardware and written to a corresponding other memory component for use by one or more processing elements (e.g., neurons). The data is read in such a manner to optimize the engagement of the one or more processing elements for each processing cycle as well as to reuse data previously stored in the one or more cooperating memory components. Operatively, the written data is copied to a shadow memory buffer prior to being consumed by the processing elements.
    Type: Grant
    Filed: April 13, 2018
    Date of Patent: August 2, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Chad Balling McBride, Amol Ashok Ambardekar, Kent D. Cedola, Boris Bobrov, George Petre, Larry Marvin Wall
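    Illustrative sketch of the shadow-buffer step: data written to a working line buffer is copied to a shadow buffer before the processing elements consume it, so the line buffer can be refilled while the old data is still being reused. The buffer shapes and names are invented.
      line_buffer = [0] * 4
      shadow = [0] * 4

      def load_row(src_row):
          global shadow
          line_buffer[:] = src_row
          shadow = line_buffer.copy()    # snapshot taken before consumption

      def neurons_consume():
          # Processing elements read the stable shadow copy; a concurrent
          # refill of line_buffer cannot disturb them.
          return sum(shadow)

      load_row([1, 2, 3, 4])
      line_buffer[:] = [9, 9, 9, 9]      # refill begins early, reusing buffer
      print(neurons_consume())           # 10: neurons still see the snapshot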
  • Patent number: 11403104
    Abstract: The embodiments of the disclosure provide a neural network processor, a chip and an electronic device. The neural network processor includes a convolution processing unit, a vector processing unit, and an instruction issue module. The convolution processing unit and the vector processing unit are both connected to the instruction issue module. The instruction issue module is configured to issue a plurality of instructions to the convolution processing unit and the vector processing unit in parallel. The embodiments of the disclosure can improve the efficiency with which the neural network processor processes data.
    Type: Grant
    Filed: December 5, 2020
    Date of Patent: August 2, 2022
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventor: Shengguang Yuan
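    Thin sketch of parallel issue, assuming one queue per execution unit and invented instruction tags: the issue module routes to both queues in the same step rather than serializing them.
      from collections import deque

      conv_queue, vector_queue = deque(), deque()
      ROUTE = {"conv": conv_queue, "vec": vector_queue}

      def issue_parallel(instrs):
          # One issue slot per unit per cycle: route each instruction to
          # its unit's queue so a convolution op and a vector op can be
          # issued together.
          for ins in instrs:
              ROUTE[ins["unit"]].append(ins)

      issue_parallel([{"unit": "conv", "op": "conv3x3"},
                      {"unit": "vec",  "op": "relu"}])
      print(len(conv_queue), len(vector_queue))   # 1 1: issued in parallel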
  • Patent number: 11397579
    Abstract: The present disclosure provides a computation device. The computation device is configured to perform a machine learning computation, and includes a storage unit, a controller unit, and an operation unit. The storage unit is configured to obtain input data and a computation instruction. The controller unit is configured to extract and parse the computation instruction from the storage unit to obtain one or more operation instructions, and to send the one or more operation instructions and the input data to the operation unit. The operation unit is configured to perform operations on the input data according to the one or more operation instructions to obtain a computation result of the computation instruction. In the examples of the present disclosure, the input data involved in machine learning computations is represented by fixed-point data, thereby improving the processing speed and efficiency of training operations.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: July 26, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Yao Zhang, Bingrui Wang
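    Worked sketch of the fixed-point representation the abstract refers to, using an assumed format with 11 fractional bits; the scale choice and function names are illustrative, not the patent's.
      FRAC_BITS = 11
      SCALE = 1 << FRAC_BITS

      def to_fixed(x):
          return round(x * SCALE)           # real -> integer fixed-point code

      def from_fixed(q):
          return q / SCALE                  # fixed-point code -> real

      def fixed_dot(qa, qb):
          # Integer multiply-accumulate; a single rescale at the end keeps
          # the intermediate sums in plain integer arithmetic.
          return sum(a * b for a, b in zip(qa, qb)) >> FRAC_BITS

      w = [0.5, -1.25, 0.125]
      x = [2.0, 0.5, -4.0]
      qw, qx = [to_fixed(v) for v in w], [to_fixed(v) for v in x]
      print(from_fixed(fixed_dot(qw, qx)))          # -0.125 in fixed point
      print(sum(a * b for a, b in zip(w, x)))       # exact: -0.125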
  • Patent number: 11385792
    Abstract: In one implementation, a system resource is added to a storage system for a resource-preserving upgrade. An upgrade component is coupled to the storage system as a temporary storage system shelf. Storage drives are moved from the storage system to the upgrade component. One or more storage controllers of the upgrade component are promoted to take over data services from the storage system.
    Type: Grant
    Filed: March 12, 2020
    Date of Patent: July 12, 2022
    Assignee: Pure Storage, Inc.
    Inventors: Anthony Niven, Andrew R. Bernat, Eric Kelly Blanchard, Ashish Karkare, Peter E. Kirkpatrick
  • Patent number: 11379390
    Abstract: In-line data packet transformations. A transformation engine obtains data to be transformed and determines a transformation to be applied to the data. The determining uses an input/output control block that includes at least one field to be used in determining the transformation to be applied. Based on determining the transformation to be applied, the transformation is performed.
    Type: Grant
    Filed: December 14, 2020
    Date of Patent: July 5, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael James Becht, Christopher J. Colonna, Stephen Robert Guendert, Pasquale A. Catalano, Edward W. Chencinski
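    Hedged sketch: the input/output control block is modeled as a dict and the transformation table maps its selector field to a callable; the two sample transforms (zlib compression, XOR masking) are stand-ins for whatever transformations the engine supports.
      import zlib

      TRANSFORMS = {
          "compress": lambda data: zlib.compress(data),
          "mask":     lambda data: bytes(b ^ 0xFF for b in data),
      }

      def transform_engine(packet, iocb):
          # The engine reads the control-block field to determine which
          # transformation applies, then performs it in-line on the payload.
          kind = iocb["transform"]
          return TRANSFORMS[kind](packet)

      iocb = {"transform": "compress"}
      out = transform_engine(b"hello hello hello", iocb)
      print(zlib.decompress(out))   # round-trips to the original payload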
  • Patent number: 11366774
    Abstract: A method of controlling a read request can include: receiving, in a host device, the read request from a bus master, where the host device is coupled to a memory device by an interface; determining a configuration state of the read request; comparing an attribute of the read request against a predetermined attribute stored in the host device; adjusting the configuration state of the read request when the attribute of the read request matches the predetermined attribute; and sending the read request with the adjusted configuration state from the host device to the memory device via the interface.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: June 21, 2022
    Assignee: Adesto Technologies Corporation
    Inventors: Gideon Intrater, Bard Pedersen
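    Minimal sketch of the attribute check with invented field names: the host compares a request attribute against a stored predetermined attribute and adjusts the request's configuration state on a match before forwarding it over the interface.
      PREDETERMINED_ATTR = {"kind": "sequential"}

      def forward_read(request):
          if request["attr"] == PREDETERMINED_ATTR:
              # e.g. switch the request into a faster burst configuration
              request["config_state"] = "burst"
          return request                      # sent on to the memory device

      req = {"addr": 0x1000, "attr": {"kind": "sequential"},
             "config_state": "single"}
      print(forward_read(req))                # config_state adjusted to 'burst'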
  • Patent number: 11366691
    Abstract: A method of scheduling instructions within a parallel processing unit is described. The method comprises decoding, in an instruction decoder, an instruction in a scheduled task in an active state, and checking, by an instruction controller, if an ALU targeted by the decoded instruction is a primary instruction pipeline. If the targeted ALU is a primary instruction pipeline, a list associated with the primary instruction pipeline is checked to determine whether the scheduled task is already included in the list. If the scheduled task is already included in the list, the decoded instruction is sent to the primary instruction pipeline.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: June 21, 2022
    Assignee: Imagination Technologies Limited
    Inventors: Simon Nield, Yoong-Chert Foo, Adam de Grasse, Luca Iuliano
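    Sketch of the scheduling check described above, with invented names; each primary pipeline keeps a list of scheduled tasks that already have instructions in flight on it, and the not-yet-listed path is elided here.
      primary_pipelines = {"alu0": {"queue": [], "tasks": []}}

      def issue(task, instr):
          target = instr["alu"]
          if target not in primary_pipelines:
              return "sent to secondary pipeline"
          pipe = primary_pipelines[target]
          # Send the decoded instruction only if the task is already on the
          # primary pipeline's list; otherwise list the task first.
          if task in pipe["tasks"]:
              pipe["queue"].append(instr)
              return "sent to primary pipeline"
          pipe["tasks"].append(task)
          return "task newly listed; instruction deferred"

      print(issue("taskA", {"alu": "alu0", "op": "fmul"}))  # newly listed
      print(issue("taskA", {"alu": "alu0", "op": "fadd"}))  # sent to primary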