Patents Examined by Steven G Snyder
-
Patent number: 12130757
Abstract: A memory module includes a substrate, plural memory devices, and a buffer. The plural memory devices are organized into at least one rank, each memory device having plural banks. The buffer includes a primary interface for communicating with a memory controller and a secondary interface coupled to the plural memory devices. For each bank of each rank of memory devices, the buffer includes data buffer circuitry and address buffer circuitry. The data buffer circuitry includes first storage to store write data transferred during a bank cycle interval (tRR). The address buffer circuitry includes second storage to store address information corresponding to the data stored in the first storage.
Type: Grant
Filed: September 30, 2022
Date of Patent: October 29, 2024
Assignee: Rambus Inc.
Inventors: Frederick A. Ware, Craig E. Hampel
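The per-bank pairing of data and address storage can be sketched in Python. This is a minimal behavioral model, not the patented circuit; the class and method names (`BankBuffer`, `capture_write`, `drain`) are illustrative:

```python
class BankBuffer:
    """One data-buffer/address-buffer pair for a single bank of one rank."""
    def __init__(self):
        self.data_storage = None   # first storage: buffered write data
        self.addr_storage = None   # second storage: matching address info

    def capture_write(self, address, data):
        self.data_storage = data
        self.addr_storage = address

    def drain(self):
        """Return the buffered (address, data) pair for the secondary interface."""
        return self.addr_storage, self.data_storage


class ModuleBuffer:
    def __init__(self, ranks, banks):
        # One buffer pair per bank of each rank, as the abstract describes.
        self.banks = {(r, b): BankBuffer()
                      for r in range(ranks) for b in range(banks)}

    def write(self, rank, bank, address, data):
        self.banks[(rank, bank)].capture_write(address, data)


buf = ModuleBuffer(ranks=1, banks=4)
buf.write(0, 2, address=0x1A0, data=0xBEEF)
print(buf.banks[(0, 2)].drain())  # (0x1A0, 0xBEEF)
```

Because each bank has its own storage, a write to bank 2 never disturbs what is buffered for the other banks.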
-
Patent number: 12124816
Abstract: A carry-lookahead adder is provided. A first mask unit performs a first mask operation on first input data with the first mask value to obtain first masked data. A second mask unit performs a second mask operation on second input data with the second mask value to obtain second masked data. A first XOR gate receives the first and second mask values to provide a variable value. A half adder receives the first and second masked data to generate a propagation value and an intermediate generation value. A third mask unit performs a third mask operation on the propagation value with the third mask value to obtain the third masked data. A carry-lookahead generator provides the carry output and the carry value according to the carry input, the generation value, and the propagation value. A second XOR gate receives the third masked data and the carry value to provide the sum output.
Type: Grant
Filed: December 28, 2022
Date of Patent: October 22, 2024
Assignee: NUVOTON TECHNOLOGY CORPORATION
Inventors: Kun-Yi Wu, Yu-Shan Li
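The core idea, Boolean masking (XOR with random mask values) woven through an adder so intermediate values are hidden, can be checked bit by bit in Python. This is a simplified sketch: it unmasks the propagate value for clarity, whereas a real side-channel-hardened design keeps values masked throughout, and the function names are illustrative:

```python
def masked_add_bit(a, b, cin, m1, m2, m3):
    # Boolean masking: XOR each input with its random mask value.
    a_m = a ^ m1
    b_m = b ^ m2
    v = m1 ^ m2            # "variable value" from the first XOR gate
    # Half adder on the masked data.
    p_m = a_m ^ b_m        # masked propagate: equals (a ^ b) ^ v
    p = p_m ^ v            # recover the true propagate (unmasked here for clarity)
    g = a & b              # generation value
    # Third mask on the propagate, then the final XOR with the carry.
    p3 = p ^ m3
    s = p3 ^ m3 ^ cin      # sum output: the masks cancel
    cout = g | (p & cin)   # carry-lookahead: generate OR propagate-with-carry-in
    return s, cout

# Exhaustively check against plain addition for every input/mask combination.
for bits in range(64):
    a, b, cin, m1, m2, m3 = [(bits >> i) & 1 for i in range(6)]
    s, cout = masked_add_bit(a, b, cin, m1, m2, m3)
    assert (cout << 1) | s == a + b + cin
print("masked adder matches plain addition for all 64 cases")
```

The exhaustive loop demonstrates the key property: the result is independent of the mask values, because every mask is XORed in an even number of times.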
-
Patent number: 12124853
Abstract: A data loading and storage system includes a storage module, a buffering module, a control module, a plurality of data loading modules, a plurality of data storage modules and a multi-core processor array module. The data is continuously stored in a DDR, and the data computed by the multi-core processor may be arranged continuously or be arranged according to a certain rule. After the DMA reads the data into the DATA_BUF module in a BURST mode, in order to support fast loading of the data into the multi-core processor array, the data loading modules (i.e., load modules) are designed. In order to quickly store the computed result of the multi-core processor array into the DATA_BUF module according to a certain rule, the data storage modules (i.e., store modules) are designed.
Type: Grant
Filed: September 24, 2021
Date of Patent: October 22, 2024
Assignee: BEIJING TSINGMICRO INTELLIGENT TECHNOLOGY CO., LTD.
Inventors: Pengpeng Zhang, Peng Ouyang
-
Patent number: 12127366
Abstract: A fan management system includes a chassis housing a storage fan system, a storage system cooled by the storage fan system, computing fan subsystems, and computing devices cooled by respective ones of the computing fan subsystems. Each of the computing devices detects a multi-computing-device configuration that includes the computing devices and, in response, determines a computing device chassis location for that computing device. Each computing device then receives fan inventory information that describes the storage fan system and the computing fan subsystems, distinguishes between the storage fan system and the computing fan subsystems based on the fan inventory information, identifies the computing fan subsystem that is configured to cool that computing device based on the computing device chassis location for that computing device, manages the computing fan subsystem that is configured to cool that computing device, and ignores the others of the computing fan subsystems.
Type: Grant
Filed: June 15, 2021
Date of Patent: October 22, 2024
Assignee: Dell Products L.P.
Inventors: Shivabasava Karibasappa Komaranalli, Chandrasekhar Mugunda, Rui An
-
Patent number: 12118356
Abstract: A multi-threading processor is provided, which includes a cache including a memory and a controller, and a core electrically connected to the cache and configured to simultaneously execute and manage a plurality of threads, in which the core is configured to determine an occurrence of a data hazard for the plurality of threads and stall operations of the plurality of threads, receive, from the cache, hint information instructing a first thread of the plurality of threads to operate, and initiate an operation of the first thread based on the hint information while the data hazard for the plurality of threads is maintained.
Type: Grant
Filed: April 23, 2024
Date of Patent: October 15, 2024
Assignee: MetisX CO., Ltd.
Inventors: Kwang Sun Lee, Do Hun Kim, Kee Bum Shin
-
Patent number: 12118360
Abstract: A microprocessor that includes a prediction unit (PRU) comprising a branch target buffer (BTB). Each BTB entry is associated with a fetch block (FBlk) (a sequential set of instructions starting at a fetch address (FA)) having a length (no longer than a predetermined maximum length) and a termination type. The termination type is from a list comprising: a sequential termination type indicating that a FA of a next FBlk in program order is sequential to a last instruction of the FBlk, and one or more non-sequential termination types. The PRU uses the FA of a current FBlk to generate a current BTB lookup value, looks up the current BTB lookup value, and in response to a miss, predicts the current FBlk has the predetermined maximum length and sequential termination type. An instruction fetch unit uses the current FA and predicted predetermined maximum length to fetch the current FBlk from an instruction cache.
Type: Grant
Filed: January 5, 2023
Date of Patent: October 15, 2024
Assignee: Ventana Micro Systems Inc.
Inventors: John G. Favor, Michael N. Michael
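The miss-handling policy, assume a maximum-length, sequentially terminated fetch block when the BTB has no entry, is easy to model. A hedged sketch (the constant `MAX_FBLK_LEN` and the identity lookup function are assumptions for illustration; a real design would hash or truncate the fetch address):

```python
MAX_FBLK_LEN = 64  # assumed maximum fetch-block length in bytes (illustrative)

class BTB:
    def __init__(self):
        self.entries = {}  # lookup value -> (length, termination_type)

    def lookup_value(self, fetch_addr):
        # A real PRU derives a smaller lookup value from the FA; identity here.
        return fetch_addr

    def predict(self, fetch_addr):
        key = self.lookup_value(fetch_addr)
        if key in self.entries:
            return self.entries[key]           # hit: use the stored length/type
        # Miss: predict a maximum-length block with sequential termination,
        # so fetch can proceed without waiting for decode.
        return (MAX_FBLK_LEN, "sequential")


btb = BTB()
btb.entries[0x1000] = (16, "conditional-branch")
print(btb.predict(0x1000))  # (16, 'conditional-branch')
print(btb.predict(0x2000))  # (64, 'sequential')
```

The fallback keeps the fetch pipeline moving on a miss; the entry is corrected later once the block's true length and termination are known.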
-
Patent number: 12112163
Abstract: A memory interface circuit includes an instruction decoder configured to receive an instruction from a processor to generate a corresponding control code. An execution circuit is configured to receive the control code from the instruction decoder and access a memory and generate an arithmetic result according to the control code.
Type: Grant
Filed: April 21, 2022
Date of Patent: October 8, 2024
Assignee: TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY, LTD.
Inventors: Hiroki Noguchi, Yih Wang
-
Patent number: 12111938
Abstract: The described technology is generally directed towards secure collaborative processing of private inputs. A secure execution engine can process encrypted data contributed by multiple parties, without revealing the encrypted data to any of the parties. The encrypted data can be processed according to any program written in a high-level programming language, while the secure execution engine handles cryptographic processing.
Type: Grant
Filed: April 11, 2022
Date of Patent: October 8, 2024
Assignee: CipherMode Labs, Inc.
Inventors: Mohammad Sadegh Riazi, Ilya Razenshteyn
-
Patent number: 12111788
Abstract: A central processing unit which achieves increased processing speed is provided. In a CPU constituted of a RISC architecture, a program counter which indicates an address in an instruction memory and a general-purpose register which is designated as an operand in an instruction to be decoded by an instruction decoder are constituted of asynchronous storage elements.
Type: Grant
Filed: February 6, 2020
Date of Patent: October 8, 2024
Assignee: UNO Laboratories, Ltd.
Inventors: Hideki Ishihara, Masami Fukushima, Koichi Kitagishi, Seijin Nakayama
-
Patent number: 12112204
Abstract: A system comprising an accelerator circuit comprising an accelerator function unit to implement a first function, and one or more device feature header (DFH) circuits to provide attributes associated with the accelerator function unit, and a processor to retrieve the attributes of the accelerator function unit by traversing a device feature list (DFL) referencing the one or more DFH circuits, and to execute, based on the attributes, an application encoding the first function to cause the accelerator function unit to perform the first function.
Type: Grant
Filed: August 9, 2022
Date of Patent: October 8, 2024
Assignee: Intel Corporation
Inventors: Pratik M. Marolia, Aaron J. Grier, Henry M. Mitchel, Joseph Grecco, Michael C. Adler, Utkarsh Y. Kakaiya, Joshua D. Fender, Sundar Nadathur, Nagabhushan Chitlur
-
Patent number: 12112166
Abstract: The present disclosure provides a data processing method and an apparatus and a related product for increased efficiency of tensor processing. The products include a control module including an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions to obtain a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue includes a plurality of operation instructions or computation instructions to be executed in the sequence of the queue. By adopting the above-mentioned method, the present disclosure can improve the operation efficiency of related products when performing operations of a neural network model.
Type: Grant
Filed: September 18, 2023
Date of Patent: October 8, 2024
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Bingrui Wang, Jun Liang
-
Patent number: 12105625
Abstract: A programmable address generator has an iteration variable generator for generation of an ordered set of iteration variables, which are re-ordered by an iteration variable selection fabric, which delivers the re-ordered iteration variables to one or more address generators. A configurator receives an instruction containing fields which provide configuration constants to the address generator, iteration variable selection fabric, and address generators. After configuration, the address generators provide addresses coupled to a memory. In one example of the invention, the address generators generate an input address, a coefficient address, and an output address for performing convolutional neural network inferences.
Type: Grant
Filed: January 29, 2022
Date of Patent: October 1, 2024
Assignee: Ceremorphic, Inc.
Inventors: Lizy Kurian John, Venkat Mattela, Heonchul Park
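The role of the iteration-variable selection fabric, reordering which loop variable advances fastest without changing which addresses are generated, can be illustrated with a 1-D convolution address generator. This is a software analogy under stated assumptions (unit strides, a two-deep loop nest), not the patented hardware:

```python
def conv1d_addresses(n_out, k, order=("i", "j")):
    """Return (input_addr, coeff_addr, output_addr) triples for a 1-D convolution.

    `order` models the iteration-variable selection fabric: it chooses which
    iteration variable runs innermost. Reordering changes the address
    *sequence* but not the *set* of addresses generated.
    """
    addrs = []
    if order == ("i", "j"):
        for i in range(n_out):      # output position outer
            for j in range(k):      # kernel tap inner
                addrs.append((i + j, j, i))
    else:                            # ("j", "i"): kernel tap outer
        for j in range(k):
            for i in range(n_out):
                addrs.append((i + j, j, i))
    return addrs


a = conv1d_addresses(3, 2)
b = conv1d_addresses(3, 2, order=("j", "i"))
print(a[:3])
assert sorted(a) == sorted(b)  # same address set, different issue order
```

In hardware, choosing the traversal order this way lets the same generator favor input reuse or coefficient reuse depending on how the configurator programs the fabric.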
-
Patent number: 12086600
Abstract: Embodiments of the present disclosure include techniques for branch prediction. A branch predictor may be included in a front end of a processor. The branch predictor may store branch targets in a branch target buffer. The branch target buffer includes shared bits, which may be combined with branch target bits to specify branch target destination addresses. Shared bits may result in more efficient memory usage in the processor, for example.
Type: Grant
Filed: December 5, 2022
Date of Patent: September 10, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Somasundaram Arunachalam, Daren Eugene Streett, Richard William Doing
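The storage saving from shared bits comes from a simple observation: branches usually land near their source, so many BTB entries share the same high address bits, which can be stored once per group. A hedged sketch (the 16/16 bit split and the class shape are assumptions, not the patented layout):

```python
LOW_BITS = 16  # assumed split: each entry keeps only the low 16 target bits

class SharedBitBTB:
    """BTB group whose entries store low target bits; high bits are shared."""
    def __init__(self, shared_high):
        self.shared_high = shared_high   # shared bits, stored once for the group
        self.low = {}                    # branch pc -> per-entry low target bits

    def insert(self, pc, target):
        assert target >> LOW_BITS == self.shared_high, "target outside shared region"
        self.low[pc] = target & ((1 << LOW_BITS) - 1)

    def predict(self, pc):
        # Combine the shared bits with the entry's branch target bits.
        return (self.shared_high << LOW_BITS) | self.low[pc]


btb = SharedBitBTB(shared_high=0x0040)
btb.insert(0x00401000, 0x00402A00)
print(hex(btb.predict(0x00401000)))  # 0x402a00
```

Each entry here spends 16 bits instead of 32 on its target, at the cost of only being able to describe branches that stay within the shared region.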
-
Patent number: 12079144
Abstract: An apparatus includes a communication bus circuit, a memory circuit, a queue manager circuit, and an arbitration circuit. The communication bus circuit includes a command bus and a data bus separate from the command bus. The queue manager circuit may be configured to receive a first memory request and a second memory request, each request including a respective address value to be sent via the command bus. The first memory request may include a corresponding data operand to be sent via the data bus. The queue manager circuit may also be configured to distribute the first memory request and the second memory request among a plurality of bus queues. Distribution of the first and second memory requests may be based on the respective address values. The arbitration circuit may be configured to select a particular memory request from a particular one of the plurality of bus queues.
Type: Grant
Filed: November 10, 2022
Date of Patent: September 3, 2024
Assignee: Apple Inc.
Inventors: Sebastian Werner, Amir Kleen, Jeonghee Shin, Peter A. Lisherness
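The queue-manager/arbiter split can be sketched as two small classes. The address-to-queue mapping (a low-bit hash at 64-byte granularity) and the round-robin policy are assumptions for illustration; the abstract only says distribution is based on the address values:

```python
from collections import deque

NUM_QUEUES = 4

class QueueManager:
    def __init__(self):
        self.queues = [deque() for _ in range(NUM_QUEUES)]

    def enqueue(self, request):
        # Distribute by address value: a low-bit hash picks the bus queue
        # (64-byte granule assumed).
        idx = (request["addr"] >> 6) % NUM_QUEUES
        self.queues[idx].append(request)


class Arbiter:
    """Round-robin arbiter selecting the next request across the bus queues."""
    def __init__(self, queues):
        self.queues = queues
        self.last = 0

    def select(self):
        for i in range(1, NUM_QUEUES + 1):
            idx = (self.last + i) % NUM_QUEUES
            if self.queues[idx]:
                self.last = idx
                return self.queues[idx].popleft()
        return None


qm = QueueManager()
qm.enqueue({"addr": 0x0000, "data": b"\x01"})  # write: address + data operand
qm.enqueue({"addr": 0x0040})                   # read: address only
arb = Arbiter(qm.queues)
first = arb.select()
second = arb.select()
print(first["addr"], second["addr"])
```

Spreading requests across queues by address lets independent address regions make progress in parallel while the arbiter keeps the single command bus fairly shared.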
-
Patent number: 12079632
Abstract: Sequence partition based schedule optimization is performed by generating a sequence and a schedule based on the sequence, dividing the sequence into a plurality of sequence partitions based on the schedule and the data dependency graph, each sequence partition including a portion of the plurality of instructions and a portion of the plurality of buffers, performing, for each sequence partition, a plurality of partition optimizing iterations, and merging the plurality of sequence partitions to produce a merged schedule.
Type: Grant
Filed: December 16, 2022
Date of Patent: September 3, 2024
Assignee: EDGECORTIX INC.
Inventors: Jens Huthmann, Sakyasingha Dasgupta, Nikolay Nez
-
Patent number: 12072799
Abstract: A programmable address generator has an iteration variable generator for generation of an ordered set of iteration variables, which are re-ordered by an iteration variable selection fabric, which delivers the re-ordered iteration variables to one or more address generators. A configurator receives an instruction containing fields which provide configuration constants to the address generator, iteration variable selection fabric, and address generators. After configuration, the address generators provide addresses coupled to a memory. In one example of the invention, the address generators generate an input address, a coefficient address, and an output address for performing convolutional neural network inferences.
Type: Grant
Filed: March 14, 2023
Date of Patent: August 27, 2024
Assignee: Ceremorphic, Inc.
Inventors: Lizy Kurian John, Venkat Mattela, Heonchul Park
-
Patent number: 12072824
Abstract: This invention is a bus communication protocol. A master device stores bus credits. The master device may transmit a bus transaction only if it holds a sufficient number and type of bus credits. Upon transmission, the master device decrements the number of stored bus credits. The bus credits correspond to resources on a slave device for receiving bus transactions. The slave device must receive the bus transaction if accompanied by the proper credits. The slave device services the transaction. The slave device then transmits a credit return. The master device adds the corresponding number and types of credits to the stored amount. The slave device is ready to accept another bus transaction and the master device is re-enabled to initiate the bus transaction. In many types of interactions a bus agent may act as both master and slave depending upon the state of the process.
Type: Grant
Filed: September 24, 2020
Date of Patent: August 27, 2024
Assignee: Texas Instruments Incorporated
Inventors: David M. Thompson, Timothy D. Anderson, Joseph R. M. Zbiciak, Abhijeet A Chachad, Kai Chirca, Matthew D. Pierson
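The credit loop described above, decrement on transmit, guaranteed acceptance, credit return after servicing, can be sketched with one credit type. A minimal model, assuming a single credit class and in-order servicing (the real protocol distinguishes multiple credit types):

```python
class Slave:
    def __init__(self):
        self.rx = []                   # receive resources backing the credits
        self.credits_to_return = 0

    def receive(self, txn):
        self.rx.append(txn)            # must accept: a credit guarantees a slot

    def service_one(self):
        txn = self.rx.pop(0)
        self.credits_to_return += 1    # resource freed: queue a credit return
        return txn


class Master:
    def __init__(self, slave, initial_credits):
        self.slave = slave
        self.credits = initial_credits

    def send(self, txn):
        if self.credits == 0:
            return False               # insufficient credits: cannot transmit
        self.credits -= 1              # decrement on transmission
        self.slave.receive(txn)
        return True

    def collect_credits(self):
        self.credits += self.slave.credits_to_return
        self.slave.credits_to_return = 0


slave = Slave()
master = Master(slave, initial_credits=1)
assert master.send("txn0")         # spends the only credit
assert not master.send("txn1")     # blocked until a credit returns
slave.service_one()
master.collect_credits()
assert master.send("txn1")         # re-enabled by the credit return
print("credit flow ok")
```

Because the master only transmits when it holds a credit, the slave never needs backpressure wiring on the transaction path itself, which is the main attraction of credit-based flow control.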
-
Patent number: 12067396
Abstract: Techniques related to executing instructions by a processor comprising receiving a first instruction for execution, determining a first latency value based on an expected amount of time needed for the first instruction to be executed, storing the first latency value in a writeback queue, beginning execution of the first instruction on the instruction execution pipeline, adjusting the latency value based on an amount of time passed since beginning execution of the first instruction, outputting a first result of the first instruction based on the latency value, receiving a second instruction, determining that the second instruction is a variable latency instruction, storing a ready value indicating that a second result of the second instruction is not ready in the writeback queue, beginning execution of the second instruction on the instruction execution pipeline, updating the ready value to indicate that the second result is ready, and outputting the second result.
Type: Grant
Filed: December 21, 2021
Date of Patent: August 20, 2024
Assignee: Texas Instruments Incorporated
Inventor: Timothy D. Anderson
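The two tracking mechanisms, a counted-down latency value for fixed-latency instructions and a ready flag for variable-latency ones, can be modeled in a few lines. A behavioral sketch with illustrative names (`issue_fixed`, `issue_variable`, `tick`), not the patented circuit:

```python
FIXED, VARIABLE = "fixed", "variable"

class WritebackQueue:
    def __init__(self):
        self.entries = []   # dicts holding either a latency countdown or a ready flag

    def issue_fixed(self, insn, expected_latency):
        self.entries.append({"insn": insn, "kind": FIXED,
                             "latency": expected_latency})

    def issue_variable(self, insn):
        # Latency unknown in advance: store a ready flag instead of a countdown.
        self.entries.append({"insn": insn, "kind": VARIABLE, "ready": False})

    def tick(self):
        """One cycle: adjust fixed latencies, output anything that finished."""
        done = []
        for e in self.entries:
            if e["kind"] == FIXED:
                e["latency"] -= 1          # adjust for one more elapsed cycle
                if e["latency"] == 0:
                    done.append(e["insn"])
            elif e["ready"]:
                done.append(e["insn"])
        self.entries = [e for e in self.entries if e["insn"] not in done]
        return done


wb = WritebackQueue()
wb.issue_fixed("mul", expected_latency=2)
wb.issue_variable("load")
first = wb.tick()                # mul has a cycle left; load not ready
wb.entries[1]["ready"] = True    # load completes (e.g., cache hit returns)
second = wb.tick()
print(first, second)
```

Keeping both representations in one queue lets a single writeback arbiter handle fixed and variable latency instructions uniformly.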
-
Patent number: 12061908
Abstract: A streaming engine employed in a digital data processor specifies fixed first and second read-only data streams. Corresponding stream address generators produce addresses of data elements of the two streams. Corresponding stream head registers store data elements next to be supplied to functional units for use as operands. The two streams share two memory ports. A toggling preference of stream to port ensures fair allocation. The arbiters permit one stream to borrow the other's interface when the other interface is idle. Thus one stream may issue two memory requests, one from each memory port, if the other stream is idle. This spreads the bandwidth demand for each stream across both interfaces, ensuring neither interface becomes a bottleneck.
Type: Grant
Filed: September 13, 2021
Date of Patent: August 13, 2024
Assignee: Texas Instruments Incorporated
Inventors: Joseph Zbiciak, Timothy Anderson
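The toggling-preference-with-borrowing policy can be captured in a single arbitration function. A per-cycle sketch under simplifying assumptions (requests are just counts, and the toggle is passed in as `pref` rather than held in arbiter state):

```python
def arbitrate(pref, req_a, req_b):
    """Assign up to two memory ports to stream requests for one cycle.

    `pref` toggles each cycle so neither stream monopolizes its favored port.
    An idle stream's port may be borrowed, letting the busy stream issue two
    memory requests in the same cycle, one from each port.
    """
    grants = {0: None, 1: None}
    first, second = ("A", "B") if pref == 0 else ("B", "A")
    pending = {"A": req_a, "B": req_b}
    for port, stream in ((0, first), (1, second)):
        if pending[stream] > 0:
            grants[port] = stream
            pending[stream] -= 1
        else:
            # Borrow: give the idle stream's port to the other stream.
            other = "B" if stream == "A" else "A"
            if pending[other] > 0:
                grants[port] = other
                pending[other] -= 1
    return grants


print(arbitrate(pref=0, req_a=2, req_b=0))  # {0: 'A', 1: 'A'} — A borrows B's port
print(arbitrate(pref=0, req_a=1, req_b=1))  # {0: 'A', 1: 'B'}
```

With both streams busy the toggle alternates which stream gets first pick; with one stream idle, the other gets the full bandwidth of both interfaces.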
-
Patent number: 12014206
Abstract: A method includes receiving, by a first stage in a pipeline, a first transaction from a previous stage in the pipeline; in response to the first transaction comprising a high priority transaction, processing the high priority transaction by sending it to a buffer; receiving a second transaction from the previous stage; in response to the second transaction comprising a low priority transaction, processing the low priority transaction by monitoring a full signal from the buffer while sending the low priority transaction to the buffer; in response to the full signal being asserted and no high priority transaction being available from the previous stage, pausing processing of the low priority transaction; in response to the full signal being asserted and a high priority transaction being available from the previous stage, stopping processing of the low priority transaction and processing the high priority transaction; and in response to the full signal being de-asserted, processing the low priority transaction by sending it to the buffer.
Type: Grant
Filed: October 3, 2022
Date of Patent: June 18, 2024
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, David Matthew Thompson
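The pause/preempt/resume behavior around the full signal can be sketched as a small stage model. A behavioral sketch with assumed details: the buffer is presumed to reserve room for high-priority transactions (so they always enter), and a single parked slot stands in for the paused low-priority transaction:

```python
from collections import deque

class Stage:
    def __init__(self, buffer_capacity):
        self.buffer = deque()
        self.capacity = buffer_capacity
        self.paused_low = None   # low-priority transaction paused on the full signal

    def full(self):
        return len(self.buffer) >= self.capacity

    def accept(self, txn, high_priority):
        if high_priority:
            # High priority preempts: it enters even while a low-priority
            # transaction is paused (buffer assumed to reserve space for it).
            self.buffer.append(txn)
        elif self.full():
            self.paused_low = txn     # full asserted: pause the low-priority txn
        else:
            self.buffer.append(txn)

    def drain_one(self):
        out = self.buffer.popleft()
        if self.paused_low is not None and not self.full():
            # Full signal de-asserted: resume the paused low-priority txn.
            self.buffer.append(self.paused_low)
            self.paused_low = None
        return out


stage = Stage(buffer_capacity=1)
stage.accept("low0", high_priority=False)   # fills the buffer
stage.accept("low1", high_priority=False)   # full asserted: low1 pauses
stage.accept("hi0", high_priority=True)     # high priority still enters
order = [stage.drain_one() for _ in range(3)]
print(order)
```

The resulting drain order shows the policy: the high-priority transaction overtakes the paused low-priority one, which only proceeds once the full signal de-asserts.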