Patents Examined by Hyun Nam
  • Patent number: 12038855
    Abstract: A memory system with adaptive refresh commands is disclosed. In one aspect, a memory system or device that has multiple banks within a channel may receive a per bank command that indicates a first bank to be refreshed and provides additional information about a second bank to be refreshed. In a further exemplary aspect, a quad bank refresh command may be sent that indicates a first bank to be refreshed and provides additional information about second through fourth banks to be refreshed. In a further exemplary aspect, an octa bank refresh command may be sent that indicates a first bank to be refreshed and provides additional information about second through eighth banks to be refreshed. The three new refresh commands allow adjacent or spaced banks to be refreshed.
    Type: Grant
    Filed: February 9, 2022
    Date of Patent: July 16, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Pankaj Deshmukh, Shyamkumar Thoziyoor, Vishakh Balakuntalam Visweswara, Jungwon Suh, Subbarao Palacharla
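The multi-bank refresh idea in this abstract can be sketched in software (a hypothetical model: the command kinds, the `stride` parameter, and the bank count are illustrative assumptions, not the patent's actual encoding):

```python
# Hypothetical model of the three refresh commands described above: each
# command names a first bank and carries extra information identifying one,
# three, or seven additional banks, which may be adjacent or spaced.

REFRESH_KINDS = {"per_bank": 2, "quad": 4, "octa": 8}

def encode_refresh(kind: str, first_bank: int, stride: int = 1, num_banks: int = 16) -> dict:
    """Return the set of banks one refresh command would cover.

    `stride` models the 'adjacent or spaced' choice: stride 1 refreshes
    neighbouring banks, larger strides refresh spaced banks.
    """
    count = REFRESH_KINDS[kind]
    banks = [(first_bank + i * stride) % num_banks for i in range(count)]
    return {"kind": kind, "first_bank": first_bank, "banks": banks}

# A quad refresh starting at bank 0 with stride 4 covers banks 0, 4, 8, 12.
cmd = encode_refresh("quad", first_bank=0, stride=4)
```

A single command covering several banks replaces several single-bank refresh commands, which is the bandwidth saving the abstract is after.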
  • Patent number: 12020035
    Abstract: This specification describes a programmatic multicast technique enabling one thread (for example, in a cooperative group array (CGA) on a GPU) to request data on behalf of one or more other threads (for example, executing on respective processor cores of the GPU). The multicast is supported by tracking circuitry that interfaces between multicast requests received from processor cores and the available memory. The multicast is designed to reduce cache (for example, layer 2 cache) bandwidth utilization enabling strong scaling and smaller tile sizes.
    Type: Grant
    Filed: March 10, 2022
    Date of Patent: June 25, 2024
    Assignee: NVIDIA CORPORATION
    Inventors: Apoorv Parle, Ronny Krashinsky, John Edmondson, Jack Choquette, Shirish Gadre, Steve Heinrich, Manan Patel, Prakash Bangalore Prabhakar, Jr., Ravi Manyam, Wish Gandhi, Lacky Shah, Alexander L. Minkin
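The bandwidth argument in this abstract can be illustrated with a toy model (all names here are assumptions for illustration; the real mechanism is tracking circuitry in hardware, not a Python class):

```python
# Illustrative model of programmatic multicast: one thread issues a single
# request on behalf of a group of cores, the tracking logic fetches the
# data once, and the result is delivered to every listed core, instead of
# each core issuing its own load against the L2 cache.

class MulticastTracker:
    def __init__(self, memory):
        self.memory = memory
        self.fetches = 0          # stands in for cache bandwidth consumed

    def multicast_load(self, addr, target_cores):
        self.fetches += 1         # one fetch serves the whole group
        value = self.memory[addr]
        return {core: value for core in target_cores}

mem = {0x100: 42}
tracker = MulticastTracker(mem)
delivered = tracker.multicast_load(0x100, target_cores=[0, 1, 2, 3])
# One fetch delivered to four cores, versus four separate fetches.
```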
  • Patent number: 12019572
    Abstract: A bridging module, a data transmission system, and a data transmission method are provided. The bridging module obtains a first read request, and allocates a first location storage space for first return data corresponding to the first read request. The bridging module combines a first master transaction identifier and an address of the first location storage space as a first slave transaction identifier of the first read request, and sends the first read request to a slave device. The bridging module obtains a second read request, and allocates a second location storage space for second return data corresponding to the second read request. The bridging module combines a second master transaction identifier and an address of the second location storage space as a second slave transaction identifier of the second read request, and sends the second read request to the slave device.
    Type: Grant
    Filed: October 30, 2022
    Date of Patent: June 25, 2024
    Assignee: Shanghai Zhaoxin Semiconductor Co., Ltd.
    Inventors: Jingyang Wang, Guangyun Wang, Zhiqiang Hui
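The identifier scheme in this abstract lends itself to a short sketch (field widths and function names are assumptions; the patent does not specify the packing):

```python
# Minimal sketch of the bridging module's ID scheme: a slave transaction
# identifier is formed by combining the master transaction identifier with
# the address of the location storage space allocated for the return data.
# Here the master id occupies the high bits and the slot address the low bits.

SLOT_BITS = 8  # assumed width of the location-storage-space address

def make_slave_id(master_id: int, slot_addr: int) -> int:
    """Combine master transaction id and storage-slot address."""
    return (master_id << SLOT_BITS) | (slot_addr & ((1 << SLOT_BITS) - 1))

def split_slave_id(slave_id: int) -> tuple[int, int]:
    """Recover (master_id, slot_addr) when the slave's response returns."""
    return slave_id >> SLOT_BITS, slave_id & ((1 << SLOT_BITS) - 1)

sid = make_slave_id(master_id=0x3, slot_addr=0x2A)
```

Because the slot address rides along in the slave transaction identifier, the bridge can route each piece of return data straight to its reserved storage space without a separate lookup.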
  • Patent number: 12019567
    Abstract: A method includes enabling a manufacturing mode at least partially based on a first signal provided via one of a number of reserved pins of an interface connector. The method can further include providing, in response to enabling the manufacturing mode, a second signal to a memory component coupled to the interface connector via a number of other pins of the interface connector.
    Type: Grant
    Filed: November 12, 2021
    Date of Patent: June 25, 2024
    Assignee: Micron Technology, Inc.
    Inventor: Adam J. Hieb
  • Patent number: 12019902
    Abstract: Various embodiments comprise systems and methods to monitor operations of a data pipeline. In some examples, a data pipeline receives data inputs, processes the data inputs, and responsively generates and transfers data outputs. Data monitoring circuitry monitors the operations of the data pipeline circuitry, identifies an input change between an initial one of the data inputs and a subsequent one of the data inputs, and identifies an output change between an initial one of the data outputs and a subsequent one of the data outputs. The data monitoring circuitry correlates the input change to the output change, determines a quality threshold for the output change based on the correlation, and determines when the output change falls below the quality threshold. When the output change falls below the quality threshold, the data monitoring circuitry generates and transfers an alert that indicates the input change and the output change.
    Type: Grant
    Filed: July 27, 2022
    Date of Patent: June 25, 2024
    Assignee: Data Culpa, Inc.
    Inventor: J. Mitchell Haile
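The monitoring flow in this abstract can be sketched as follows (the ratio-based correlation and the `tolerance` parameter are assumptions made for illustration; the patent leaves the correlation method open):

```python
# Illustrative sketch of the monitoring loop: compare an input change to
# the output change it produced, derive a quality threshold from the
# correlation, and raise an alert when the output change falls below it.

def monitor(initial_in, later_in, initial_out, later_out, tolerance=0.5):
    input_change = later_in - initial_in
    output_change = later_out - initial_out
    # Correlate: assume the output change should scale with the input change.
    expected = input_change * (initial_out / initial_in)
    threshold = expected * tolerance  # quality threshold for the output change
    if output_change < threshold:
        return {"alert": True, "input_change": input_change,
                "output_change": output_change}
    return {"alert": False}

# Input rows doubled but the output barely moved: the alert fires, carrying
# both changes so an operator can see what went wrong in the pipeline.
result = monitor(initial_in=100, later_in=200, initial_out=100, later_out=110)
```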
  • Patent number: 12013801
    Abstract: The present disclosure provides a Peripheral Component Interconnect Express (PCIe) controller for a PCIe endpoint device. The PCIe controller includes: a PCIe link interface configured to receive an interrupt request message, wherein the interrupt request message is a message write transaction PCIe transport layer packet (TLP) including an address associated with the PCIe endpoint device and a data value including interrupt information; an interrupt request trigger register configured to receive the data value; a plurality of interrupt lines; and a decode logic circuit connected to the interrupt request trigger register and the plurality of interrupt lines, the decode logic circuit configured to automatically decode a plurality of data bits of the data value when received in the interrupt request trigger register and generate an interrupt signal and provide the interrupt signal on one of the plurality of interrupt lines to an interrupt handling circuit.
    Type: Grant
    Filed: March 23, 2022
    Date of Patent: June 18, 2024
    Assignee: Infineon Technologies AG
    Inventors: Lin Li, Uli Kretzschmar
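The decode step in this abstract can be modelled briefly (the 4-bit line-select field and line count are assumptions for illustration; the patent only says data bits of the register value are decoded):

```python
# Hedged sketch of the decode logic: the data value of the message-write
# TLP lands in the interrupt request trigger register, and the decode
# circuit turns a field of its bits into exactly one asserted interrupt
# line toward the interrupt handling circuit.

NUM_IRQ_LINES = 16

def decode_trigger(data_value: int) -> list[int]:
    """Return the interrupt-line levels after decoding the register value."""
    line = data_value & 0xF      # assumed: low 4 bits select the line
    signals = [0] * NUM_IRQ_LINES
    signals[line] = 1            # assert exactly one interrupt line
    return signals

lines = decode_trigger(0x0005)   # selects interrupt line 5
```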
  • Patent number: 12014179
    Abstract: In some aspects, data collection functions are interposed to generate input data for an observability pipeline system. In some aspects, a data collection function is made available to an application running on a computer system, with the data collection function having the same name as an original function referenced by the application. In response to a call to the original function, the data collection function is executed and data is extracted from the application. The original function is then executed. A reporting thread of the application is executed; executing the reporting thread generates observability pipeline input data by formatting the extracted data and sends the observability pipeline input data from the computer system to an observability pipeline system.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: June 18, 2024
    Assignee: Cribl, Inc.
    Inventors: Donn Rochette, John Chelikowsky, Ledion Bitincka, Clint Sharp
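A rough user-space analogue of the interposition described in this abstract can be shown with Python monkey-patching standing in for symbol interposition (the `extracted` buffer is a stand-in for the reporting thread's data; this is not Cribl's implementation):

```python
# The replacement function carries the same name as the original, extracts
# data from the call, then invokes the original so application behaviour
# is unchanged.

import time

extracted = []  # data a reporting thread would later format and ship

def interpose(module, name):
    original = getattr(module, name)
    def wrapper(*args, **kwargs):
        extracted.append({"fn": name, "args": args})  # collect first
        return original(*args, **kwargs)              # then run the original
    wrapper.__name__ = name  # keep the original function's name
    setattr(module, name, wrapper)

interpose(time, "sleep")
time.sleep(0)  # goes through the wrapper, then the real sleep
```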
  • Patent number: 12001384
    Abstract: A processing element array includes N processing elements (PE) arranged linearly, N≥2, and an operating method of the PE array includes: performing a first data transmission procedure, where an initial value of I is 1 and the first data transmission procedure includes: operating, by an Ith PE, according to a first datum stored in itself, and sending the first datum to other PEs for their operations, adding 1 to I when I<N, and performing the first data transmission procedure again; performing a second data transmission procedure when I is equal to N, which includes: operating, by the Jth PE, according to a second datum stored in itself, and sending the second datum to other PEs for their operations, reducing J by 1 when J>1 and the (J−1)th PE has the second datum, and performing the second data transmission procedure again.
    Type: Grant
    Filed: November 17, 2022
    Date of Patent: June 4, 2024
    Assignees: INVENTEC (PUDONG) TECHNOLOGY CORPORATION, INVENTEC CORPORATION
    Inventors: Yu-Sheng Lin, Trista Pei-Chun Chen, Wei-Chao Chen
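The two transmission procedures in this abstract amount to a forward pass over the first data followed by a backward pass over the second data; a toy simulation (with "operating" reduced to recording which datum each PE saw, an assumption for illustration) makes the ordering concrete:

```python
# Toy simulation of the PE array: each PE stores one first datum and one
# second datum. The first procedure walks I = 1..N forward, broadcasting
# each PE's first datum; the second procedure walks J = N..1 backward.

def run(first, second):
    n = len(first)                            # N processing elements, N >= 2
    seen = [[] for _ in range(n)]             # data each PE operated on
    for i in range(n):                        # first procedure: I = 1..N
        for pe in range(n):
            seen[pe].append(first[i])         # Ith PE's datum sent to all PEs
    for j in range(n - 1, -1, -1):            # second procedure: J = N..1
        for pe in range(n):
            seen[pe].append(second[j])
    return seen

seen = run(first=[1, 2], second=[30, 40])
# Every PE sees the first data in order, then the second data in reverse.
```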
  • Patent number: 11989554
    Abstract: Techniques are disclosed for reducing or eliminating loop overhead caused by function calls in processors that form part of a pipeline architecture. The processors in the pipeline process data blocks in an iterative fashion, with each processor in the pipeline completing one of several iterations associated with a processing loop for a commonly-executed function. The described techniques leverage the use of message passing for pipelined processors to enable an upstream processor to signal to a downstream processor when processing has been completed, and thus a data block is ready for further processing in accordance with the next loop processing iteration. The described techniques facilitate a zero loop overhead architecture, enable continuous data block processing, and allow the processing pipeline to function indefinitely within the main body of the processing loop associated with the commonly-executed function where efficiency is greatest.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: May 21, 2024
    Assignee: Intel Corporation
    Inventors: Kameran Azadet, Jeroen Leijten, Joseph Williams
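A loose software analogue of this scheme can be sketched with queues standing in for the hardware message channels (stage and function names are assumptions; real stages run concurrently rather than in this sequential drain):

```python
# Each pipeline stage performs one iteration of the commonly-executed
# loop body, then passes the block downstream as a message signalling the
# block is ready, so no stage pays per-iteration loop overhead.

from queue import Queue

def make_pipeline(num_stages, step):
    """Chain queues so each stage's output message feeds the next stage."""
    queues = [Queue() for _ in range(num_stages + 1)]
    def run(blocks):
        for b in blocks:
            queues[0].put(b)
        for s in range(num_stages):            # each processor: one iteration
            while not queues[s].empty():
                block = queues[s].get()
                queues[s + 1].put(step(block)) # message: block ready downstream
        return [queues[-1].get() for _ in range(len(blocks))]
    return run

run = make_pipeline(num_stages=3, step=lambda x: x + 1)
out = run([0, 10])   # three loop iterations applied to each block: [3, 13]
```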
  • Patent number: 11983129
    Abstract: A Baseboard Management Controller (BMC) that may configure itself is disclosed. The BMC may include an access logic to determine a configuration of a chassis that includes the BMC. The BMC may also include a built-in self-configuration logic to configure the BMC responsive to the configuration of the chassis. The BMC may self-configure without using any BIOS, device drivers, or operating systems.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: May 14, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sompong Paul Olarig, Son T. Pham
  • Patent number: 11983576
    Abstract: A method, computer program product, and system are provided in which a processor issues an instruction that includes processing core information, namely locations of processing cores of the computing system (logical cores and/or physical cores), and an operator selection. The processor sets security parameters for the information returned by the instruction, which is topological information for mapping the logical cores to the physical cores. The processor obtains the topological information and utilizes an operating system to map the logical cores to the physical cores.
    Type: Grant
    Filed: August 4, 2021
    Date of Patent: May 14, 2024
    Inventors: Omkar Ulhas Javeri, Peter Dana Driever, Brian Keith Wade, Seth E. Lederer
  • Patent number: 11977509
    Abstract: A representative reconfigurable processing circuit and a reconfigurable arithmetic circuit are disclosed, each of which may include input reordering queues; a multiplier shifter and combiner network coupled to the input reordering queues; an accumulator circuit; and a control logic circuit, along with a processor and various interconnection networks. A representative reconfigurable arithmetic circuit has a plurality of operating modes, such as floating point and integer arithmetic modes, logical manipulation modes, Boolean logic, shift, rotate, conditional operations, and format conversion, and is configurable for a wide variety of multiplication modes. Dedicated routing connecting multiplier adder trees allows multiple reconfigurable arithmetic circuits to be reconfigurably combined, in pair or quad configurations, for larger adders, complex multiplies and general sum of products use, for example.
    Type: Grant
    Filed: October 17, 2022
    Date of Patent: May 7, 2024
    Assignee: Cornami, Inc.
    Inventors: Paul L. Master, Steven K. Knapp, Raymond J. Andraka, Alexei Beliaev, Martin A. Franz, Rene Meessen, Frederick Curtis Furtek
  • Patent number: 11971836
    Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: April 30, 2024
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Yao Zhang, Shaoli Liu, Jun Liang, Yu Chen
  • Patent number: 11971830
    Abstract: An example method may include determining whether a preemption flag associated with a first input/output (I/O) queue handling thread is equal to a first value indicating that preemption of the first I/O queue handling thread is forthcoming, wherein the first I/O queue handling thread is executing on a first processor, the first I/O queue handling thread is associated with a first set of one or more queue identifiers, and each queue identifier identifies a queue being handled by the first I/O queue handling thread, and, responsive to determining that the preemption flag is equal to the first value, transferring the first set of one or more queue identifiers to a second I/O queue handling thread executing on a second processor. Transferring the first set of queue identifiers may include removing the one or more queue identifiers from the first set.
    Type: Grant
    Filed: March 22, 2022
    Date of Patent: April 30, 2024
    Assignee: Red Hat, Inc.
    Inventor: Michael Tsirkin
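The handoff in this abstract can be modelled compactly (the classes and function are illustrative; thread scheduling and flag-setting plumbing are elided):

```python
# Simplified model: when a handler thread's preemption flag is set to the
# value meaning preemption is forthcoming, its queue identifiers are
# transferred to a handler thread on another processor and removed from
# the first thread's set.

PREEMPTION_FORTHCOMING = 1

class QueueHandler:
    def __init__(self, cpu):
        self.cpu = cpu
        self.queue_ids = set()   # queues this thread is handling
        self.preempt_flag = 0

def maybe_transfer(src: QueueHandler, dst: QueueHandler):
    """Move src's queue ids to dst if src is about to be preempted."""
    if src.preempt_flag == PREEMPTION_FORTHCOMING:
        dst.queue_ids |= src.queue_ids
        src.queue_ids.clear()    # remove ids from the first set, per the claim

h1, h2 = QueueHandler(cpu=0), QueueHandler(cpu=1)
h1.queue_ids = {10, 11}
h1.preempt_flag = PREEMPTION_FORTHCOMING
maybe_transfer(h1, h2)
```

The point of checking the flag before preemption actually occurs is that the queues keep being serviced, on the second processor, with no window in which no thread owns them.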
  • Patent number: 11966351
    Abstract: A network interface device comprises a streaming data processing path comprising a first data processing engine and hubs. A first scheduler associated with a first hub controls an output of data by the first hub to the first data processing engine, and a second scheduler associated with a second hub controls an output of data by the second hub. The first hub is arranged upstream of the first data processing engine on the data processing path and is configured to receive data from a first upstream data path entity and from a first data processing entity implemented in programmable circuitry via a data ingress interface of the first hub. The first data processing engine is configured to receive data from the first hub, process the received data, and output the processed data to the second hub arranged downstream of the first data processing engine.
    Type: Grant
    Filed: March 11, 2021
    Date of Patent: April 23, 2024
    Assignee: XILINX, INC.
    Inventors: Steven Leslie Pope, Derek Edward Roberts, Dmitri Kitariev, Neil Duncan Turton, David James Riddoch, Ripduman Sohan
  • Patent number: 11960895
    Abstract: A method and a control device for returning of command response information, and an electronic device, are provided. The method includes: receiving response information for a command request, the response information carrying a status identification and a level identification of the command request; storing the response information in a corresponding level of a data queue in accordance with the level identification, where the data queue includes multiple levels, and each level of the data queue is used to store one or more pieces of response information; scanning all levels of the data queue, and determining a level in which all parts of response information are collected as a candidate level; determining a first piece of response information in accordance with a status identification of the response information stored in the candidate level; and outputting the first piece of response information.
    Type: Grant
    Filed: May 14, 2021
    Date of Patent: April 16, 2024
    Assignees: Haining ESWIN IC Design Co., Ltd., Beijing ESWIN Computing Technology Co., Ltd.
    Inventor: Zhe Chen
  • Patent number: 11960431
    Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system, and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: April 16, 2024
    Assignee: GUANGZHOU UNIVERSITY
    Inventors: Shaoli Liu, Zhen Li, Yao Zhang
  • Patent number: 11954490
    Abstract: Disclosed embodiments relate to systems and methods for performing instructions to transform matrices into a row-interleaved format. In one example, a processor includes fetch and decode circuitry to fetch and decode an instruction having fields to specify an opcode and locations of source and destination matrices, wherein the opcode indicates that the processor is to transform the specified source matrix into the specified destination matrix having the row-interleaved format; and execution circuitry to respond to the decoded instruction by transforming the specified source matrix into the specified RowInt-formatted destination matrix by interleaving J elements of each J-element sub-column of the specified source matrix in either row-major or column-major order into a K-wide submatrix of the specified destination matrix, the K-wide submatrix having K columns and enough rows to hold the J elements.
    Type: Grant
    Filed: April 28, 2023
    Date of Patent: April 9, 2024
    Assignee: Intel Corporation
    Inventors: Raanan Sade, Robert Valentine, Bret Toll, Christopher J. Hughes, Alexander F. Heinecke, Elmoustapha Ould-Ahmed-Vall, Mark J. Charney
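The repack this abstract describes can be sketched for the J = 2 case (the function is an illustration of the layout, not the instruction's semantics in full; element order within a sub-column follows the row-major reading described):

```python
# Row-interleaved (VNNI-style) transform with J = 2: each 2-element
# sub-column of the source matrix is laid out contiguously in a row of
# the destination, so a destination row holds J source rows' worth of
# elements for each source column.

def row_interleave(src, j=2):
    rows, cols = len(src), len(src[0])
    assert rows % j == 0
    dst = [[0] * (cols * j) for _ in range(rows // j)]
    for r in range(rows):
        for c in range(cols):
            # element (r, c) lands in row r//j at interleaved slot c*j + r%j
            dst[r // j][c * j + r % j] = src[r][c]
    return dst

src = [[1, 2],
       [3, 4],
       [5, 6],
       [7, 8]]
dst = row_interleave(src)  # [[1, 3, 2, 4], [5, 7, 6, 8]]
```

This layout pairs each element with its vertical neighbour, which is what lets dot-product hardware consume J elements per column from a single destination row.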
  • Patent number: 11934829
    Abstract: In various examples, a VPU and associated components may be optimized to improve VPU performance and throughput. For example, the VPU may include a min/max collector, automatic store predication functionality, a SIMD data path organization that allows for inter-lane sharing, a transposed load/store with stride parameter functionality, a load with permute and zero insertion functionality, hardware, logic, and memory layout functionality to allow for two point and two by two point lookups, and per memory bank load caching capabilities. In addition, decoupled accelerators may be used to offload VPU processing tasks to increase throughput and performance, and a hardware sequencer may be included in a DMA system to reduce programming complexity of the VPU and the DMA system. The DMA and VPU may execute a VPU configuration mode that allows the VPU and DMA to operate without a processing controller for performing dynamic region based data movement operations.
    Type: Grant
    Filed: December 9, 2022
    Date of Patent: March 19, 2024
    Assignee: NVIDIA Corporation
    Inventors: Ahmad Itani, Yen-Te Shih, Jagadeesh Sankaran, Ravi P Singh, Ching-Yu Hung
  • Patent number: 11928473
    Abstract: An instruction scheduling method and an instruction scheduling system for a reconfigurable array processor. The method includes: determining whether a fan-out of a vertex in a data flow graph (DFG) is less than an actual interconnection number of a processing unit in a reconfigurable array; establishing a corresponding relationship between the vertex and a correlation operator of the processing unit; introducing a register to a directed edge, acquiring a retiming value of each vertex; arranging instructions in such a manner that retiming values of the instruction vertexes are in ascending order, and acquiring transmission time and scheduling order of the instructions; folding the DFG, placing an instruction to an instruction vertex; inserting a register and acquiring a current DFG; and acquiring a common maximum subset of the current DFG and the reconfigurable array by a maximum clique algorithm, and distributing the instructions.
    Type: Grant
    Filed: March 22, 2022
    Date of Patent: March 12, 2024
    Assignee: BEIJING TSINGMICRO INTELLIGENT TECHNOLOGY CO., LTD.
    Inventors: Kejia Zhu, Zhen Zhang, Peng Ouyang