Patents Examined by Courtney P Spann
  • Patent number: 12265827
    Abstract: In a very long instruction word (VLIW) central processing unit, instructions are grouped into execute packets that execute in parallel. A constant may be specified or extended by bits in a constant extension instruction in the same execute packet. If an instruction includes an indication of constant extension, the decoder employs bits of a constant extension instruction to extend the constant of an immediate field. Two or more constant extension slots are permitted in each execute packet, each extending constants for a different predetermined subset of functional unit instructions. In an alternative embodiment, more than one functional unit may have constants extended from the same constant extension instruction employing the same extended bits. A long extended constant may be formed using the extension bits of two constant extension instructions.
    Type: Grant
    Filed: June 12, 2023
    Date of Patent: April 1, 2025
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Timothy David Anderson, Duc Quang Bui, Joseph Raymond Michael Zbiciak
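The extension mechanism above amounts to concatenating bits from the constant-extension slot onto a short immediate. A minimal sketch, with hypothetical field widths (the patent does not fix them here):

```python
# Sketch of VLIW constant extension. Field widths are assumptions for
# illustration: a 5-bit immediate widened by 27 extension bits from a
# constant-extension slot in the same execute packet.
IMM_BITS = 5   # immediate field width in the target instruction
EXT_BITS = 27  # bits carried by the constant-extension instruction

def extend_constant(imm: int, ext: int) -> int:
    """Form the full constant: extension bits high, immediate bits low."""
    assert 0 <= imm < (1 << IMM_BITS), "immediate out of range"
    assert 0 <= ext < (1 << EXT_BITS), "extension out of range"
    return (ext << IMM_BITS) | imm

# A long extended constant could combine the extension bits of two
# constant-extension instructions in the same layered fashion.
```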
  • Patent number: 12260220
    Abstract: Accelerating fetch target queue (FTQ) processing is disclosed herein. In some aspects, a processor comprises an FTQ and an FTQ acceleration cache (FAC), and is configured to generate a FAC entry corresponding to an FTQ entry of a plurality of FTQ entries of the FTQ, wherein the FTQ entry comprises a fetch address bundle comprising a plurality of sequential virtual addresses (VAs), and the FAC entry comprises metadata for the FTQ entry. The processor is further configured to receive, using the FTQ, a request to access the FTQ entry. The processor is also configured to, responsive to receiving the request to access the FTQ entry, locate, using the FAC, the FAC entry corresponding to the FTQ entry among a plurality of FAC entries of the FAC. The processor is additionally configured to perform accelerated processing of the request to access the FTQ entry using the metadata of the FAC entry.
    Type: Grant
    Filed: December 16, 2022
    Date of Patent: March 25, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Saransh Jain, Rami Mohammad Al Sheikh, Daren Eugene Streett, Michael Scott McIlvaine, Somasundaram Arunachalam
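The FAC described above lets a request against an FTQ entry be serviced from precomputed metadata rather than from the fetch address bundle itself. A minimal sketch, with the entry layout and metadata fields invented for illustration:

```python
# Sketch of an FTQ paired with an FTQ acceleration cache (FAC).
# The metadata fields ("first_va", "count") are assumptions, not the
# patent's actual design.
class FTQ:
    def __init__(self):
        self.entries = []   # each entry: a bundle of sequential virtual addresses
        self.fac = {}       # FAC: FTQ entry index -> metadata for that entry

    def push(self, fetch_bundle):
        index = len(self.entries)
        self.entries.append(fetch_bundle)
        # Generate the FAC entry alongside the FTQ entry.
        self.fac[index] = {"first_va": fetch_bundle[0], "count": len(fetch_bundle)}
        return index

    def access(self, index):
        # Fast path: locate the FAC entry and answer from its metadata,
        # avoiding a scan of the fetch address bundle.
        meta = self.fac.get(index)
        if meta is not None:
            return meta
        bundle = self.entries[index]   # slow path
        return {"first_va": bundle[0], "count": len(bundle)}
```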
  • Patent number: 12248784
    Abstract: An electronic circuit (4000) includes a bias value generator circuit (3900) operable to supply a varying bias value in a programmable range, and an instruction circuit (3625, 4010) responsive to a first instruction to program the range of the bias value generator circuit (3900) and further responsive to a second instruction having an operand to repeatedly issue the second instruction with the operand varied in an operand value range determined as a function of the varying bias value.
    Type: Grant
    Filed: February 18, 2020
    Date of Patent: March 11, 2025
    Assignee: Texas Instruments Incorporated
    Inventors: Kenichi Tashiro, Hiroyuki Mizuno, Yuji Umemoto
  • Patent number: 12236244
    Abstract: A multi-degree branch predictor is disclosed. A processing circuit includes an instruction fetch circuit configured to fetch branch instructions, and a branch prediction circuit having a plurality of prediction subcircuits. Each prediction subcircuit is configured to store a different amount of branch history data than the other subcircuits, and to receive an indication of a given branch instruction in a particular clock cycle. The prediction subcircuits implement a common branch prediction scheme to output, in different clock cycles, corresponding predictions for the given branch instruction using the different amounts of branch history data, and to cause instruction fetches to be performed by the instruction fetch circuit.
    Type: Grant
    Filed: June 30, 2022
    Date of Patent: February 25, 2025
    Assignee: Apple Inc.
    Inventors: Wei-Han Lien, Muawya M. Al-Otoom, Ian D. Kountanis, Niket K. Choudhary, Pruthivi Vuyyuru
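The multi-degree idea above can be sketched as several subpredictors that share one prediction scheme but keep different history depths, so a short-history prediction is available early and longer-history ones can refine it in later cycles. Table size, history lengths, and the hash are illustrative assumptions:

```python
# Sketch of a multi-degree branch predictor: one common scheme (2-bit
# saturating counters indexed by PC XOR global history), instantiated
# with different history lengths. All sizes are illustrative.
class SubPredictor:
    def __init__(self, history_bits, table_size=256):
        self.history_bits = history_bits
        self.history = 0
        self.counters = [1] * table_size   # 2-bit counters, weakly not-taken

    def _index(self, pc):
        mask = (1 << self.history_bits) - 1
        return (pc ^ (self.history & mask)) % len(self.counters)

    def predict(self, pc):
        return self.counters[self._index(pc)] >= 2   # True = taken

    def update(self, pc, taken):
        i = self._index(pc)
        self.counters[i] = min(3, self.counters[i] + 1) if taken \
                           else max(0, self.counters[i] - 1)
        self.history = (self.history << 1) | int(taken)

# Degrees 1..3: same scheme, increasing amounts of branch history.
subpredictors = [SubPredictor(h) for h in (4, 8, 16)]
```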
  • Patent number: 12223324
    Abstract: A data processing system includes a vector data processing unit that includes a shared scheduler queue configured to store, in a same queue, at least one entry that includes at least a mask type instruction and another entry that includes at least a vector type instruction. Shared pipeline control logic controls a vector data path or a mask data path, based on the type of instruction picked from the same queue. In some examples, the at least one mask type instruction and the at least one vector type instruction each include a source operand having a corresponding shared source register bit field that indexes into both a mask register file and a vector register file. The shared pipeline control logic uses the mask register file or the vector register file depending on whether bits of the shared source register bit field identify a mask source register or a vector source register.
    Type: Grant
    Filed: September 30, 2022
    Date of Patent: February 11, 2025
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Michael Estlick, Eric Dixon, Theodore Carlson, Erik D. Swanson
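The shared register bit field above can be sketched as a single encoding whose bits select which register file to index. The field width and the use of the top bit as the selector are assumptions for illustration, not the patent's encoding:

```python
# Sketch of decoding a shared source register bit field that indexes
# into either a mask register file or a vector register file.
# Assumption: a 6-bit field whose high bit selects the mask file.
MASK_FILE = [f"k{i}" for i in range(8)]      # mask registers
VECTOR_FILE = [f"v{i}" for i in range(32)]   # vector registers

def resolve_source(reg_field: int) -> str:
    """Return the register named by a 6-bit shared register field."""
    if reg_field & 0b100000:                 # selector bit: mask file
        return MASK_FILE[reg_field & 0b000111]
    return VECTOR_FILE[reg_field & 0b011111]
```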
  • Patent number: 12216610
    Abstract: A microprocessor system comprises a computational array and a hardware data formatter. The computational array includes a plurality of computation units that each operates on a corresponding value addressed from memory. The values operated on by the computation units are synchronously provided together to the computational array as a group of values to be processed in parallel. The hardware data formatter is configured to gather the group of values, wherein the group of values includes a first subset of values located consecutively in memory and a second subset of values located consecutively in memory. The first subset of values is not required to be located in memory consecutively with the second subset of values.
    Type: Grant
    Filed: June 15, 2023
    Date of Patent: February 4, 2025
    Assignee: Tesla, Inc.
    Inventors: Emil Talpes, William McGee, Peter Joseph Bannon
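The gather behavior above reduces to combining two runs that are each consecutive in memory but need not be adjacent to one another. A minimal sketch with memory modeled as a flat list:

```python
# Sketch of the hardware data formatter's gather: one group of values is
# built from two consecutive-in-memory subsets at independent base
# addresses, then provided to the computational array together.
def gather_group(memory, first_base, first_len, second_base, second_len):
    """Combine two consecutive runs of memory into one parallel group."""
    first = memory[first_base:first_base + first_len]
    second = memory[second_base:second_base + second_len]
    return first + second
```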
  • Patent number: 12210879
    Abstract: Implementations are directed to methods, systems, and computer-readable media for data hazard generation for instruction sequence generation. In one aspect, a computer-implemented method includes: obtaining data hazard information defining a data hazard to be generated during computer instruction generation, the data hazard specifying a data dependency between a first instruction and a second instruction occurring after the first instruction, and generating, based on the data hazard information and register usage data of a plurality of registers, an instruction for execution in a current processing cycle that satisfies the data dependency specified by the data hazard. The register usage data specifies, for each register of the plurality of registers, whether data was read from or written into the register in a plurality of processing cycles preceding the current processing cycle.
    Type: Grant
    Filed: December 2, 2022
    Date of Patent: January 28, 2025
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jingliang Wang, Michael Brothers
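The hazard-generation step above can be sketched as scanning the register usage data for a write the specified number of cycles back and emitting an instruction that reads that register. The instruction mnemonic and register names are invented for illustration:

```python
# Sketch of data-hazard-driven instruction generation: emit an instruction
# whose source register was written `distance` cycles before the current
# cycle, creating a read-after-write dependency. Names are hypothetical.
def generate_raw_hazard(register_usage, distance):
    """register_usage: list, oldest to newest, of (cycle, reg, 'R'|'W')."""
    writes = [(cycle, reg) for cycle, reg, kind in register_usage
              if kind == "W"]
    if not writes:
        return None
    current_cycle = register_usage[-1][0] + 1
    for cycle, reg in reversed(writes):
        if current_cycle - cycle == distance:
            # Reading `reg` satisfies the specified data dependency.
            return f"add r9, {reg}, {reg}"
    return None
```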
  • Patent number: 12197378
    Abstract: An apparatus configured for offloading system service tasks to a processing-in-memory (“PIM”) device includes an agent configured to: receive, from a host processor, a request to offload a memory task associated with a system service to the PIM device; determine at least one PIM command and at least one memory page associated with the host processor based upon the request; and issue the at least one PIM command to the PIM device for execution by the PIM device to perform the memory task upon the at least one memory page.
    Type: Grant
    Filed: June 1, 2022
    Date of Patent: January 14, 2025
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Jagadish B. Kotra, Kishore Punniyamurthy
  • Patent number: 12175250
    Abstract: The embodiments of the present application provide a processing unit. The processing unit comprises: an instruction fetch unit configured to fuse an adjacent vector configuration instruction and vector operation instruction into a fused instruction; an instruction decoding unit configured to decode the fused instruction to obtain first execution information and second execution information; a vector configuration unit configured to execute the vector configuration instruction according to the first execution information, modify a vector control register, and bypass the value of the modified vector control register to the vector operation unit; and the vector operation unit, configured to execute the vector operation instruction according to the second execution information and the value of the modified vector control register.
    Type: Grant
    Filed: December 30, 2022
    Date of Patent: December 24, 2024
    Assignee: C-SKY MICROSYSTEMS CO., LTD.
    Inventors: Dongqi Liu, Haowen Chen, Zhao Jiang, Chang Liu, Dingyan Wei, Wenjian Xu, Tao Jiang
  • Patent number: 12112173
    Abstract: An embodiment of an integrated circuit may comprise a branch predictor to predict whether a conditional branch is taken for one or more instructions, the branch predictor including circuitry to identify a loop branch instruction in the one or more instructions, and provide a branch prediction for the loop branch instruction based on a context of the loop branch instruction. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: October 8, 2024
    Assignee: Intel Corporation
    Inventors: Ke Sun, Rodrigo Branco, Kekai Hu
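One common way to use a loop branch's context, sketched below, is to learn the branch's trip count and predict not-taken on the final iteration. This is a minimal illustration of context-based loop prediction, not the disclosed circuitry:

```python
# Sketch of context-aware loop branch prediction: learn each loop
# branch's trip count, count iterations, and predict the loop exit.
class LoopPredictor:
    def __init__(self):
        self.trip_count = {}   # branch PC -> learned iteration count
        self.progress = {}     # branch PC -> iterations seen this run

    def predict(self, pc):
        learned = self.trip_count.get(pc)
        if learned is None:
            return True        # default: loop back-edges are usually taken
        return self.progress.get(pc, 0) + 1 < learned

    def update(self, pc, taken):
        self.progress[pc] = self.progress.get(pc, 0) + 1
        if not taken:          # loop exit: record the trip count, reset
            self.trip_count[pc] = self.progress[pc]
            self.progress[pc] = 0
```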
  • Patent number: 12106114
    Abstract: A processor includes a time counter and a time-resource matrix and statically dispatches baseline and extended instructions. The processor includes a plurality of register sets of a register file and a plurality of sets of functional units which are coupled by sets of dedicated read and write buses to allow parallel execution of baseline and extended instructions.
    Type: Grant
    Filed: April 29, 2022
    Date of Patent: October 1, 2024
    Assignee: Simplex Micro, Inc.
    Inventor: Thang Minh Tran
  • Patent number: 12086592
    Abstract: This application discloses a processor, a processing method, and a related device. The processor includes a processor core. The processor core includes an instruction dispatching unit and a graph flow unit and at least one general-purpose operation unit that are connected to the instruction dispatching unit. The instruction dispatching unit is configured to: allocate a general-purpose calculation instruction in a decoded to-be-executed instruction to the at least one general-purpose calculation unit, and allocate a graph calculation control instruction in the decoded to-be-executed instruction to the graph calculation unit, where the general-purpose calculation instruction is used to instruct to execute a general-purpose calculation task, and the graph calculation control instruction is used to instruct to execute a graph calculation task. The at least one general-purpose operation unit is configured to execute the general-purpose calculation instruction.
    Type: Grant
    Filed: November 29, 2022
    Date of Patent: September 10, 2024
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Xiping Zhou, Ruoyu Zhou, Fan Zhu, Wenbo Sun
  • Patent number: 12079628
    Abstract: An apparatus and method for loop flattening and reduction in a SIMD pipeline including broadcast, move, and reduction instructions.
    Type: Grant
    Filed: October 4, 2021
    Date of Patent: September 3, 2024
    Assignee: Intel Corporation
    Inventors: William M. Brown, Roland Schulz, Karthik Raman
  • Patent number: 12067399
    Abstract: A processor may include a bias prediction circuit and an instruction prediction circuit to provide respective predictions for a conditional instruction. The bias prediction circuit may provide a bias prediction whether a condition of the conditional instruction is biased true or biased false. The instruction prediction circuit may provide an instruction prediction whether the condition of the conditional instruction is true or false. Responsive to a bias prediction that the condition of the conditional instruction is biased true or biased false, the processor may use the bias prediction from the bias prediction circuit to speculatively process the conditional instruction. Otherwise, the processor may use the instruction prediction from the instruction prediction circuit to speculatively process the conditional instruction.
    Type: Grant
    Filed: February 1, 2022
    Date of Patent: August 20, 2024
    Assignee: Apple Inc.
    Inventors: Ian D Kountanis, Douglas C Holman, Wei-Han Lien, Pruthivi Vuyyuru, Ethan R Schuchman, Niket K Choudhary, Kulin N Kothari, Haoyan Jia
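The selection between the two predictors above can be sketched as a bias tracker that answers only when a conditional's observed outcomes have been uniform, with everything else deferred to the regular instruction predictor. Data structures are simplified for illustration:

```python
# Sketch of bias-then-instruction prediction: the bias predictor answers
# only for conditionals whose outcomes have so far been uniform.
class BiasPredictor:
    def __init__(self):
        self.seen = {}   # pc -> set of observed outcomes (True/False)

    def predict(self, pc):
        outcomes = self.seen.get(pc)
        if outcomes is not None and len(outcomes) == 1:
            (only,) = outcomes
            return only          # biased true or biased false
        return None              # not biased: defer

    def update(self, pc, outcome):
        self.seen.setdefault(pc, set()).add(outcome)

def speculate(pc, bias_predictor, instruction_predictor):
    # Use the bias prediction when one exists; otherwise fall back to
    # the instruction prediction circuit.
    bias = bias_predictor.predict(pc)
    return bias if bias is not None else instruction_predictor(pc)
```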
  • Patent number: 12056493
    Abstract: A processor and an operating method thereof for renaming a destination logical register of a move instruction are provided. The processor comprises a plurality of physical registers and a renaming circuit. The renaming circuit is coupled to the plurality of physical registers and is configured to receive an instruction sequence and check the instruction sequence. When a current instruction of the instruction sequence comprises the move instruction, the renaming circuit assigns a first physical register, which was previously assigned to a source logical register of the current instruction, to the destination logical register of the current instruction. The first physical register is one of the plurality of physical registers.
    Type: Grant
    Filed: October 31, 2021
    Date of Patent: August 6, 2024
    Assignee: Shanghai Zhaoxin Semiconductor Co., Ltd.
    Inventors: Chenchen Song, Yu Zhang, Mengchen Yang, Jianbin Wang
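The renaming step above is the core of move elimination: the destination logical register is simply remapped to the physical register that already holds the source value, so no copy is executed. A minimal sketch with an invented rename-table shape:

```python
# Sketch of move elimination via register renaming. The table layout
# and register naming are assumptions for illustration.
class RenameTable:
    def __init__(self):
        self.map = {}              # logical register -> physical register
        self.next_physical = 0

    def allocate(self, logical):
        """Assign a fresh physical register to a logical register."""
        phys = f"p{self.next_physical}"
        self.next_physical += 1
        self.map[logical] = phys
        return phys

    def rename_move(self, dest, src):
        # For a move, point the destination at the source's physical
        # register instead of allocating a new one and copying.
        self.map[dest] = self.map[src]
        return self.map[dest]
```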
  • Patent number: 12045193
    Abstract: Embodiments of the present invention provide a method for incorporating a dynamic memory block and a configurable processor controller to enable computational processing and memory storage. The method includes storing data elements, with each data element stored in a corresponding memory cell. The method also includes executing a computation operation when the storage of the data elements is adjusted, thereby triggering the computation operation. The method also includes transitioning the memory cells from the storage device to the computation device by adjusting the storage of data elements by the memory cells to execute the computation operation. The method also includes transitioning the memory cells from the computation device to the storage device by maintaining the storage of data elements by the memory cells in a static state, thereby preventing the storage of data elements by the memory cells from being adjusted.
    Type: Grant
    Filed: July 7, 2021
    Date of Patent: July 23, 2024
    Inventor: Atif Zafar
  • Patent number: 12038865
    Abstract: Embodiments of the present invention provide a method for incorporating a dynamic processing memory core into a single memory chip to enable computational processing and memory storage from the single memory chip. The method includes storing data elements by memory storage devices positioned on the single memory chip. The method also includes executing, by a processing device positioned on the single memory chip, memory instructions. The method also includes transitioning the dynamic processing memory core from a memory storage device to a processing device by instructing the processing device to execute the memory instructions. The method also includes transitioning the dynamic processing memory core from the processing device to the memory storage device by instructing the processing device to not execute the memory instructions, thereby terminating the computational processing of the dynamic processing memory core and maintaining the memory storage provided by the memory storage device.
    Type: Grant
    Filed: May 24, 2023
    Date of Patent: July 16, 2024
    Assignee: X-Silicon, Inc.
    Inventor: Atif Zafar
  • Patent number: 11989561
    Abstract: The disclosure provides a method and an apparatus for scheduling an out-of-order execution queue in an out-of-order processor. The method includes: constructing a sequence maintenance queue with the same number of items as the out-of-order execution queue, and allocating an empty item for instructions and data entering the out-of-order execution queue, in which the sequence maintenance queue comprises at least one identity (id) field; numbering each item of the out-of-order execution queue sequentially, and recording the id number of each item of the out-of-order execution queue in the id field of the sequence maintenance queue; enabling the instructions to enter the item of the out-of-order execution queue corresponding to the id number pointed to by the tail of the sequence maintenance queue; and selecting instructions in ready items for execution from the out-of-order execution queue according to the id number information indicated by the sequence maintenance queue.
    Type: Grant
    Filed: May 21, 2021
    Date of Patent: May 21, 2024
    Assignee: BEIJING VCORE TECHNOLOGY CO., LTD.
    Inventor: Dandan Huan
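The scheme above can be sketched as an execution queue paired with a side queue of item ids: new instructions take the item whose id sits at the tail, and freed ids are recycled. A simplified illustration, not the disclosed apparatus:

```python
# Sketch of an out-of-order execution queue with a sequence maintenance
# queue of item ids. The selection policy here (first ready item) is a
# simplifying assumption.
from collections import deque

class OutOfOrderQueue:
    def __init__(self, size):
        self.items = [None] * size           # the out-of-order execution queue
        self.free_ids = deque(range(size))   # sequence maintenance queue of ids

    def insert(self, instruction):
        item_id = self.free_ids.pop()        # id pointed to by the tail
        self.items[item_id] = instruction
        return item_id

    def pick_ready(self, is_ready):
        # Select an instruction in a ready item for execution.
        for item_id, instr in enumerate(self.items):
            if instr is not None and is_ready(instr):
                self.items[item_id] = None
                self.free_ids.append(item_id)   # recycle the id
                return instr
        return None
```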
  • Patent number: 11977894
    Abstract: The disclosure provides a method for distributing instructions in a reconfigurable processor. The reconfigurable processor includes an instruction fetch module, an instruction sync control module and an instruction queue module. The method includes: configuring a format of a Memory Sync ID Table for each instruction type, obtaining a first memory identification field and a second memory identification field of each instruction, obtaining one-hot encodings of the first and second memory identification fields, obtaining a sync table, and executing each instruction of a plurality of to-be-run instructions.
    Type: Grant
    Filed: May 7, 2021
    Date of Patent: May 7, 2024
    Assignee: BEIJING TSINGMICRO INTELLIGENT TECHNOLOGY CO., LTD.
    Inventors: Baochuan Fei, Peng Ouyang, Shibin Tang, Liwei Deng