Patents Examined by Chun-Kuan Lee
  • Patent number: 11467998
    Abstract: Techniques for low-latency packet processing are disclosed. A network device receives a first set of write transactions including a first set of data segments corresponding to a first DMA descriptor from a host. The network device receives a second set of write transactions including a second set of data segments corresponding to a second DMA descriptor from the host. The network device detects that the first set of data segments have been written. In response to detecting that the first set of data segments have been written, and prior to the second set of data segments being completely written and prior to receiving a packet notifier from the host, the network device processes the first DMA descriptor.
    Type: Grant
    Filed: March 16, 2021
    Date of Patent: October 11, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Erez Izenberg, Said Bshara, Jonathan Cohen, Avigdor Segal
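The mechanism in this entry is essentially a per-descriptor completion check: track how many bytes of each descriptor's segments have arrived, and process a descriptor as soon as its own segments are complete rather than waiting for later descriptors or the packet notifier. Below is a minimal software model of that check; the struct layout, byte counters, and `process_descriptor` hook are illustrative assumptions, not the device's actual interface.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative descriptor state: announced length and bytes written so far. */
struct dma_desc {
    size_t total_len;     /* payload length announced by the descriptor       */
    size_t bytes_written; /* bytes that have arrived via write transactions   */
    bool   processed;     /* set once the device has consumed the descriptor  */
};

/* Hypothetical processing hook standing in for the device's packet pipeline. */
static void process_descriptor(int idx, struct dma_desc *d)
{
    d->processed = true;
    printf("descriptor %d processed (%zu bytes)\n", idx, d->total_len);
}

/* Called for each incoming write transaction carrying a data segment.
 * As soon as a descriptor's own segments are complete it is processed,
 * even if later descriptors are still partially written and no packet
 * notifier has been received yet. */
static void on_write_transaction(struct dma_desc *descs, int idx, size_t seg_len)
{
    descs[idx].bytes_written += seg_len;
    if (!descs[idx].processed && descs[idx].bytes_written >= descs[idx].total_len)
        process_descriptor(idx, &descs[idx]);
}

int main(void)
{
    struct dma_desc descs[2] = { { .total_len = 256 }, { .total_len = 512 } };

    on_write_transaction(descs, 0, 128);  /* first descriptor, partial          */
    on_write_transaction(descs, 1, 256);  /* second descriptor, partial         */
    on_write_transaction(descs, 0, 128);  /* first descriptor complete:
                                             processed immediately              */
    return 0;
}
```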
  • Patent number: 11455261
    Abstract: An embodiment of a semiconductor package apparatus may include technology to identify a partial set of populated memory channels from a full set of populated memory channels of a multi-channel memory system, and complete a first boot of an operating system with only the identified partial set of memory channels of the multi-channel memory system. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: September 27, 2022
    Assignee: Intel Corporation
    Inventors: Kevin Yufu Li, Donggui Yin, Zijian You, Shihui Li, Dujian Wu
  • Patent number: 11442877
    Abstract: An electrical circuit device includes a signal bus comprising a plurality of parallel signal paths and a calibration circuit, operatively coupled with the signal bus. The calibration circuit can perform operations including determining a representative duty cycle for a plurality of signals transferred via the plurality of parallel signal paths, the plurality of signals comprising a plurality of duty cycles and comparing the representative duty cycle for the plurality of signals transferred via the plurality of parallel signal paths to a reference value to determine a comparison result. The calibration circuit can perform further operations including adjusting, based on the comparison result, a trim value associated with the plurality of duty cycles of the plurality of signals to compensate for distortion in the plurality of duty cycles and calibrating the plurality of duty cycles of the plurality of signals using the adjusted trim value.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: September 13, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Guan Wang, Ali Feiz Zarrin Ghalam, Chin-Yu Chen, Jongin Kim
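The calibration loop in this entry reduces to: aggregate the per-lane duty cycles into one representative value, compare it against a reference (nominally 50%), and step a shared trim code in the direction that closes the error. The sketch below models that loop with an averaged representative value and a one-step trim adjustment; the lane count, step size, linear trim response, and 50% reference are assumptions for illustration.

```c
#include <math.h>
#include <stdio.h>

#define NUM_LANES 4
#define DUTY_REF  50.0   /* reference duty cycle in percent (assumed nominal)  */
#define TRIM_STEP  0.5   /* assumed duty change per trim code, in percent      */

/* Raw (untrimmed) duty cycles of the parallel lanes, in percent. */
static const double raw_duty[NUM_LANES] = { 46.8, 47.5, 47.1, 46.9 };

/* Duty cycle of a lane after the shared trim code is applied (linear model). */
static double trimmed_duty(int lane, int trim)
{
    return raw_duty[lane] + TRIM_STEP * trim;
}

/* Representative duty cycle: here the mean across all parallel lanes. */
static double representative_duty(int trim)
{
    double sum = 0.0;
    for (int i = 0; i < NUM_LANES; i++)
        sum += trimmed_duty(i, trim);
    return sum / NUM_LANES;
}

int main(void)
{
    int trim = 0;

    /* Compare the representative value to the reference and step the trim
     * code toward it; the adjusted trim then calibrates every lane at once. */
    for (int iter = 0; iter < 32; iter++) {
        double rep = representative_duty(trim);
        if (fabs(rep - DUTY_REF) < TRIM_STEP / 2)
            break;
        trim += (rep < DUTY_REF) ? 1 : -1;
    }

    printf("final trim code %d, representative duty %.2f%%\n",
           trim, representative_duty(trim));
    return 0;
}
```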
  • Patent number: 11436166
    Abstract: A processor comprises an execution unit operable to execute programs to perform processing operations, and one or more slave accelerators each operable to perform respective processing operations under the control of the execution unit. The execution unit includes a message generation circuit that generates messages to cause a slave accelerator to perform a processing operation. The message generation circuit fetches data values intended for inclusion in a message or messages to be sent to a slave accelerator into its local storage, holds them there pending their inclusion in a message, then retrieves the data value or values from the local storage and sends a message including the retrieved data value or values to the slave accelerator.
    Type: Grant
    Filed: February 5, 2019
    Date of Patent: September 6, 2022
    Assignee: Arm Limited
    Inventor: Emil Lambrache
  • Patent number: 11429555
    Abstract: In an embodiment, a coprocessor may include a bypass indication which identifies execution circuitry that is not used by a given processor instruction, and thus may be bypassed. The corresponding circuitry may be disabled during execution, preventing evaluation when the output of the circuitry will not be used for the instruction. In another embodiment, the coprocessor may implement a grid of processing elements in rows and columns, where a given coprocessor instruction may specify an operation that causes up to all of the processing elements to operate on vectors of input operands to produce results. Implementations of the coprocessor may implement only a portion of the processing elements. The coprocessor control circuitry may be designed to operate with the full grid or partial grid, reissuing instructions in the partial grid case to perform the requested operation. In still another embodiment, the coprocessor may be able to fuse vector mode operations.
    Type: Grant
    Filed: February 26, 2019
    Date of Patent: August 30, 2022
    Assignee: Apple Inc.
    Inventors: Aditya Kesiraju, Andrew J. Beaumont-Smith, Boris S. Alvarez-Heredia, Srikanth Balasubramanian
  • Patent number: 11424952
    Abstract: The present invention relates to a data bus node integrated circuit comprising at least one static address selection terminal and a detecting circuit for detecting a state of the address selection terminal. The IC also comprises a communication circuit for data communication over a data bus. This circuit is adapted for determining a node address identifier taking the detected state of the at least one static address selection terminal into account. The detecting circuit is adapted for detecting the state of the address selection terminal by determining whether the address selection terminal is in a floating state, a power supply voltage state or a ground voltage state.
    Type: Grant
    Filed: March 4, 2016
    Date of Patent: August 23, 2022
    Assignee: MELEXIS TECHNOLOGIES NV
    Inventor: Peter Vandersteegen
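Detecting whether an address pin is floating, tied to supply, or tied to ground gives three usable states per pin instead of two, so n pins can encode 3^n node addresses rather than 2^n. The sketch below shows that base-3 encoding; the pin-state enum and the example strapping are stand-ins for the patent's detection circuit, not its actual interface.

```c
#include <stdio.h>

/* Three detectable states of a static address selection terminal. */
enum pin_state { PIN_GND = 0, PIN_VDD = 1, PIN_FLOATING = 2 };

/* Node address identifier from a set of tri-state pins: each pin contributes
 * one base-3 digit, so n pins select among 3^n addresses. */
static unsigned node_address(const enum pin_state pins[], int n)
{
    unsigned addr = 0;
    for (int i = 0; i < n; i++)
        addr = addr * 3 + (unsigned)pins[i];
    return addr;
}

int main(void)
{
    /* Example: two address pins, one strapped to VDD and one left floating. */
    enum pin_state pins[] = { PIN_VDD, PIN_FLOATING };
    printf("node address identifier: %u\n", node_address(pins, 2));  /* 1*3 + 2 = 5 */
    return 0;
}
```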
  • Patent number: 11416256
    Abstract: A set of entries in a branch prediction structure for a set of second blocks are accessed based on a first address of a first block. The set of second blocks correspond to outcomes of one or more first branch instructions in the first block. Speculative prediction of outcomes of second branch instructions in the second blocks is initiated based on the entries in the branch prediction structure. In some cases, the branch predictor can be accessed using an address of either a previous block or the current block. Based on the types of the one or more branch instructions, state associated with the speculative (ahead) prediction is selectively flushed, and prediction of outcomes of branch instructions in one of the second blocks is selectively initiated using non-ahead accessing.
    Type: Grant
    Filed: July 31, 2020
    Date of Patent: August 16, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Marius Evers, Aparna Thyagarajan, Ashok T. Venkatachar
  • Patent number: 11416421
    Abstract: A context-based protection system uses tiered protection structures, including master protection units, shared memory protection units, and peripheral protection units, to provide security for bus transfer operations between central processing units (CPUs), memory arrays or portions of arrays, and peripherals.
    Type: Grant
    Filed: July 18, 2017
    Date of Patent: August 16, 2022
    Assignee: Cypress Semiconductor Corporation
    Inventors: Jan-Willem Van de Waerdt, Kai Dieffenbach, Uwe Moslehner, Jens Wagner, Mathias Sedner, Venkat Natarajan
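A tiered arrangement like the one above can be pictured as a chain of checks keyed by the bus master's context: a master protection unit decides what the context may do at all, and shared-memory and peripheral protection units then gate the specific address region or peripheral. The sketch below is a software model of such a chain with assumed region tables and context identifiers; it is not the Cypress hardware interface.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* One protection region: an address window plus a bitmask of contexts
 * (e.g. CPUs or execution contexts) allowed to access it. */
struct prot_region {
    uint32_t base;
    uint32_t size;
    uint32_t allowed_ctx_mask;
};

/* Assumed tables standing in for shared-memory and peripheral protection units. */
static const struct prot_region mem_regions[] = {
    { 0x20000000, 0x8000, 0x1 },   /* private SRAM: context 0 only      */
    { 0x20008000, 0x8000, 0x3 },   /* shared SRAM: contexts 0 and 1     */
};
static const struct prot_region periph_regions[] = {
    { 0x40000000, 0x1000, 0x2 },   /* peripheral block: context 1 only  */
};

static bool region_allows(const struct prot_region *r, int n,
                          uint32_t addr, unsigned ctx)
{
    for (int i = 0; i < n; i++)
        if (addr >= r[i].base && addr < r[i].base + r[i].size)
            return (r[i].allowed_ctx_mask >> ctx) & 1u;
    return false;  /* no matching region: deny by default */
}

/* Master protection unit first, then the memory or peripheral tier. */
static bool transfer_allowed(unsigned ctx, uint32_t addr, uint32_t master_ctx_mask)
{
    if (!((master_ctx_mask >> ctx) & 1u))
        return false;                      /* context blocked outright */
    if (addr >= 0x40000000)
        return region_allows(periph_regions, 1, addr, ctx);
    return region_allows(mem_regions, 2, addr, ctx);
}

int main(void)
{
    printf("%d\n", transfer_allowed(1, 0x40000010, 0x3));  /* 1: allowed */
    printf("%d\n", transfer_allowed(0, 0x40000010, 0x3));  /* 0: denied  */
    return 0;
}
```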
  • Patent number: 11416438
    Abstract: A circuit device includes a first physical layer circuit to which a first bus is connected, a second physical layer circuit to which a second bus is connected, and a processing circuit that performs transfer processing in which a packet received from the first bus via the first physical layer circuit is transmitted to the second bus via the second physical layer circuit. The processing circuit includes a SYNC generation circuit that generates an m-bit SYNC, and when the packet is received from the first bus, the processing circuit outputs the m-bit SYNC to the second physical layer circuit.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: August 16, 2022
    Assignee: SEIKO EPSON CORPORATION
    Inventor: Chihiro Fukumoto
  • Patent number: 11416427
    Abstract: Information processing techniques are disclosed. For instance, a first polling interval between a current polling operation and a previous polling operation is obtained, the first polling interval indicating a time period from an end of the previous polling operation to a start of the current polling operation. An execution status of the current polling operation is obtained, the execution status indicating whether an object to be polled for the current polling operation is obtained. Further, based on the first polling interval and the execution status, a second polling interval is determined between the current polling operation and the next polling operation, the second polling interval indicating a time period from an end of the current polling operation to a start of the next polling operation. In this way, the solution provides stable and efficient adaptive polling.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: August 16, 2022
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Chaoqian Cai, Frank Yifan Huang
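The idea in this entry is a feedback rule: the gap before the next poll is computed from the gap before the current poll together with whether the current poll actually found an object. One common concrete rule is to shrink the interval when an object was found and grow it (up to a cap) when the poll came back empty; the sketch below uses that rule, with the multiply/divide factors and bounds as assumptions rather than the patent's exact formula.

```c
#include <stdbool.h>
#include <stdio.h>

#define MIN_INTERVAL_US     10
#define MAX_INTERVAL_US  10000

/* Determine the second polling interval (before the next poll) from the
 * first polling interval (before the current poll) and whether the current
 * poll obtained an object. Factors and bounds are illustrative. */
static unsigned next_interval_us(unsigned first_interval_us, bool got_object)
{
    unsigned next = got_object ? first_interval_us / 2   /* busy: poll sooner */
                               : first_interval_us * 2;  /* idle: back off    */
    if (next < MIN_INTERVAL_US) next = MIN_INTERVAL_US;
    if (next > MAX_INTERVAL_US) next = MAX_INTERVAL_US;
    return next;
}

int main(void)
{
    unsigned interval = 1000;
    bool outcomes[] = { false, false, true, true, false };

    for (int i = 0; i < 5; i++) {
        interval = next_interval_us(interval, outcomes[i]);
        printf("poll %d: %s, next interval %u us\n",
               i, outcomes[i] ? "hit" : "miss", interval);
    }
    return 0;
}
```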
  • Patent number: 11409530
    Abstract: A system, apparatus and method for ordering a sequence of processing transactions. The method includes accessing, from a memory, a program sequence of operations that are to be executed. Instructions are received, some of which carry an identifier, or mnemonic, that distinguishes the identified operations from operations that carry no such identifier. The mnemonic indicates a distribution of the execution of the program sequence of operations. The program sequence of operations is grouped based on the mnemonic such that certain operations are separated from other operations.
    Type: Grant
    Filed: August 16, 2018
    Date of Patent: August 9, 2022
    Assignee: Arm Limited
    Inventors: Curtis Glenn Dunham, Pavel Shamis, Jamshed Jalal, Michael Filippo
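Stripped of patent language, the grouping step walks the program sequence and uses the presence of the mnemonic on certain operations to decide where one group ends and the next begins, which in turn determines how execution is distributed. A minimal interpretation, in which a tagged operation starts a new group, is sketched below; the structures and the "tag starts a group" rule are assumptions made for the example.

```c
#include <stdbool.h>
#include <stdio.h>

/* One operation in the program sequence; `tagged` marks the identifier/mnemonic. */
struct op {
    const char *name;
    bool tagged;
};

int main(void)
{
    /* Example program sequence; which ops carry the mnemonic is illustrative. */
    struct op seq[] = {
        { "load",  false }, { "add",   false },
        { "send",  true  }, { "mul",   false },
        { "store", true  }, { "sub",   false },
    };
    int n = sizeof seq / sizeof seq[0];

    /* Group the sequence: a tagged operation opens a new group, separating it
     * and the operations that follow from the untagged operations before it. */
    int group = 0;
    for (int i = 0; i < n; i++) {
        if (seq[i].tagged)
            group++;
        printf("group %d: %s%s\n", group, seq[i].name,
               seq[i].tagged ? " (mnemonic)" : "");
    }
    return 0;
}
```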
  • Patent number: 11403241
    Abstract: Methods, systems, and devices for communicating data with stacked memory dies are described. A first semiconductor die may communicate with an external computing device using a binary-symbol signal including two signal levels representing one bit of data. Semiconductor dies may be stacked on one another and include internal interconnects (e.g., through-silicon vias) to relay an internal signal generated based on the binary-symbol signal. The internal signal may be a multi-level signal modulated using a modulation scheme that includes three or more levels to represent more than one bit of data per symbol. The multi-level signal may simplify the internal interconnects. A second semiconductor die may be configured to receive and re-transmit the multi-level signal to semiconductor dies positioned above the second semiconductor die.
    Type: Grant
    Filed: May 11, 2018
    Date of Patent: August 2, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Robert Nasry Hasbun, Timothy M. Hollis, Jeffrey P. Wright, Dean D. Gans
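The external link in this entry uses two levels (one bit per symbol) while the internal through-silicon-via link uses three or more levels, so several binary bits collapse into one internal symbol and each internal interconnect carries more data per symbol. The sketch below shows the simplest such mapping, two bits to one of four PAM-4 levels; the choice of four levels and the level numbering are assumptions for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Map pairs of binary-symbol bits onto one 4-level (PAM-4) internal symbol.
 * Returns the number of internal symbols produced. */
static int binary_to_pam4(const uint8_t *bits, int nbits, uint8_t *levels)
{
    int nsym = 0;
    for (int i = 0; i + 1 < nbits; i += 2)
        levels[nsym++] = (uint8_t)((bits[i] << 1) | bits[i + 1]);  /* 0..3 */
    return nsym;
}

int main(void)
{
    /* One byte received on the two-level external interface, MSB first. */
    uint8_t bits[8] = { 1, 0, 1, 1, 0, 0, 1, 1 };
    uint8_t levels[4];

    int n = binary_to_pam4(bits, 8, levels);
    for (int i = 0; i < n; i++)
        printf("internal symbol %d: level %u\n", i, levels[i]);
    /* 8 external bit-times become 4 internal symbols, halving the number of
     * transfers the internal interconnect has to relay. */
    return 0;
}
```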
  • Patent number: 11392525
    Abstract: Disaggregated computing architectures, platforms, and systems are provided herein. In one example, a method of operating a data processing system is provided. The method includes instructing a PCIe fabric communicatively coupling a plurality of physical computing components including PCIe devices and one or more PCIe switches to establish a first PCIe communication path between the management processor and a target PCIe device. The method also includes directing at least configuration data to the target PCIe device using the first PCIe communication path and instructing the PCIe fabric to remove the first PCIe communication path between the management processor and the target PCIe device. The method also includes instructing the PCIe fabric to establish a second PCIe communication path between a selected PCIe device and the target PCIe device configured according to the configuration data.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: July 19, 2022
    Assignee: Liqid Inc.
    Inventors: James Scott Cannata, Christopher R. Long, Phillip Clark, Sumit Puri
  • Patent number: 11379239
    Abstract: An apparatus and method are provided for making predictions for instruction flow changing instructions. The apparatus has a fetch queue that identifies a sequence of instructions to be fetched for execution by execution circuitry, and prediction circuitry for making predictions in respect of instruction flow changing instructions, and for controlling which instructions are identified in the fetch queue in dependence on the predictions. The prediction circuitry has a target prediction storage used to identify target addresses for instruction flow changing instructions that are predicted as taken.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: July 5, 2022
    Assignee: Arm Limited
    Inventors: Yasuo Ishii, Muhammad Umar Farooq
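The target prediction storage described above is, in essence, a small table indexed by instruction address that returns the predicted target of a taken instruction-flow-changing instruction, and the fetch queue is steered to that target. A direct-mapped version of such a table is sketched below; the table size, tagging scheme, and update policy are generic branch-target-buffer assumptions rather than the specific Arm design.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BTB_ENTRIES 256   /* assumed table size */

/* One target prediction entry: tag of the branch address plus its target. */
struct btb_entry {
    bool     valid;
    uint64_t tag;
    uint64_t target;
};

static struct btb_entry btb[BTB_ENTRIES];

static unsigned btb_index(uint64_t pc) { return (pc >> 2) % BTB_ENTRIES; }

/* Record the taken target of an instruction flow changing instruction. */
static void btb_update(uint64_t pc, uint64_t target)
{
    struct btb_entry *e = &btb[btb_index(pc)];
    e->valid  = true;
    e->tag    = pc;
    e->target = target;
}

/* Predict the next fetch address: the stored target if the instruction is
 * predicted taken and present in the table, otherwise the sequential address. */
static uint64_t predict_next_fetch(uint64_t pc, bool predicted_taken)
{
    const struct btb_entry *e = &btb[btb_index(pc)];
    if (predicted_taken && e->valid && e->tag == pc)
        return e->target;
    return pc + 4;
}

int main(void)
{
    btb_update(0x1000, 0x2000);
    printf("0x%llx\n", (unsigned long long)predict_next_fetch(0x1000, true));  /* 0x2000 */
    printf("0x%llx\n", (unsigned long long)predict_next_fetch(0x1004, true));  /* 0x1008 */
    return 0;
}
```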
  • Patent number: 11372804
    Abstract: A processor includes a vector register configured to load data responsive to a special purpose load instruction. The processor also includes circuitry configured to replicate a selected sub-vector value from the vector register.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: June 28, 2022
    Assignee: Qualcomm Incorporated
    Inventors: Eric Mahurin, Erich Plondke, David Hoyle
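The operation in this entry is a "splat": pick one sub-vector (for example one lane or one pair of lanes) out of a vector register and copy it across the whole destination. A portable C model of that replication follows; the vector width, sub-vector size, and function name are illustrative, not the Qualcomm instruction encoding.

```c
#include <stdint.h>
#include <stdio.h>

#define VLEN 8   /* assumed number of 16-bit lanes in the vector register */

/* Replicate the selected sub-vector (of `sub_len` lanes, starting at lane
 * `sel * sub_len`) across the entire destination vector. */
static void replicate_subvector(const int16_t src[VLEN], int16_t dst[VLEN],
                                int sel, int sub_len)
{
    for (int i = 0; i < VLEN; i++)
        dst[i] = src[sel * sub_len + (i % sub_len)];
}

int main(void)
{
    int16_t v[VLEN] = { 10, 11, 20, 21, 30, 31, 40, 41 };
    int16_t out[VLEN];

    replicate_subvector(v, out, 2, 2);   /* select sub-vector {30, 31} */
    for (int i = 0; i < VLEN; i++)
        printf("%d ", out[i]);           /* 30 31 30 31 30 31 30 31 */
    printf("\n");
    return 0;
}
```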
  • Patent number: 11372681
    Abstract: Embodiments are described for allocating and reclaiming memory using dynamic buffer allocation for a slab memory allocator. The method keeps track of a count of the total number of worker threads and a count of the total number of quiesced threads, and determines whether there is any free slab memory. If there is no free slab memory, the method triggers an out-of-memory event and increments the count of the total number of quiesced threads. It reclaims all objects currently allocated in an object pool, and allocates a buffer of the next smaller size than the original buffer until a sufficient amount of slab memory is freed.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: June 28, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Tony Wong, Abhinav Duggal, Hemanth Satyanarayana
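The reclamation path in this entry can be read as: when a worker finds no free slab memory, it raises an out-of-memory event, counts itself as quiesced, returns the objects in its pool, and retries the allocation at the next smaller buffer size until enough slab memory is free. The sketch below models that control flow with counters and a size ladder; the size classes, counters, and helper names are assumptions, not the EMC implementation.

```c
#include <stddef.h>
#include <stdio.h>

static size_t free_slab_bytes  = 4096;   /* assumed free slab memory        */
static int    total_workers    = 8;      /* count of worker threads         */
static int    quiesced_workers = 0;      /* count of quiesced threads       */

/* Assumed size ladder for buffers, largest first. */
static const size_t size_classes[] = { 65536, 32768, 16384, 8192, 4096 };

/* Stand-in for reclaiming all objects currently allocated in an object pool. */
static void reclaim_object_pool(void)
{
    free_slab_bytes += 8192;   /* illustrative amount returned to the slab */
}

/* Try to allocate a buffer, falling back to the next smaller size class
 * whenever there is no free slab memory for the requested size. */
static size_t allocate_buffer(size_t requested)
{
    for (size_t i = 0; i < sizeof size_classes / sizeof size_classes[0]; i++) {
        size_t sz = size_classes[i];
        if (sz > requested)
            continue;
        if (free_slab_bytes >= sz) {
            free_slab_bytes -= sz;
            return sz;                     /* allocation satisfied */
        }
        /* No free slab memory: out-of-memory event, quiesce, reclaim the
         * object pool, then retry with the next smaller size class. */
        printf("OOM event at size %zu\n", sz);
        quiesced_workers++;
        reclaim_object_pool();
        if (free_slab_bytes >= sz) {
            free_slab_bytes -= sz;
            return sz;
        }
    }
    return 0;   /* nothing small enough could be allocated */
}

int main(void)
{
    size_t got = allocate_buffer(65536);
    printf("allocated %zu bytes, %d of %d workers quiesced\n",
           got, quiesced_workers, total_workers);
    return 0;
}
```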
  • Patent number: 11366771
    Abstract: An apparatus comprises a host device configured to communicate over a network with a storage system. The host device comprises a plurality of host bus adaptors, and a multi-path input-output driver configured to control delivery of input-output operations from the host device to the storage system over selected ones of a plurality of paths through the network. The paths are associated with respective initiator-target pairs wherein each of the initiators comprises a corresponding one of the host bus adaptors and each of the targets comprises a corresponding one of a plurality of ports of the storage system. The host device monitors performance of the ports in processing input-output operations delivered thereto, detects an initiator-related condition based at least in part on the monitored performance, and automatically adjusts an assignment of one or more of the initiators to one or more of the targets based at least in part on the detected initiator-related condition.
    Type: Grant
    Filed: May 2, 2019
    Date of Patent: June 21, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Erik P. Smith, Ron Arnan, Arieh Don
  • Patent number: 11340901
    Abstract: An apparatus and method are provided for controlling allocation of instructions into an instruction cache storage. The apparatus comprises processing circuitry to execute instructions, fetch circuitry to fetch instructions from memory for execution by the processing circuitry, and an instruction cache storage to store instructions fetched from the memory by the fetch circuitry. Cache control circuitry is responsive to the fetch circuitry fetching a target instruction from a memory address determined as a target address of an instruction flow changing instruction, at least when the memory address is within a specific address range, to prevent allocation of the fetched target instruction into the instruction cache storage unless the fetched target instruction is at least one specific type of instruction. It has been found that such an approach can inhibit the performance of speculation-based caching timing side-channel attacks.
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: May 24, 2022
    Assignee: Arm Limited
    Inventors: Frederic Claude Marie Piry, Peter Richard Greenhalgh, Ian Michael Caulfield, Albin Pierrick Tonnerre
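The protection in this entry boils down to a predicate applied at fill time: when the fetch was caused by a flow-changing instruction's target and that target falls in the protected range, allocate the fetched instruction into the instruction cache only if it is one of the expected instruction types (for example a valid branch landing instruction); otherwise it can still be fetched but is not cached, so a mis-speculated fetch leaves no timing footprint. The sketch below captures that predicate; the range bounds and the "landing pad" check are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed protected address range and "specific type" check: here, whether
 * the fetched word looks like a valid branch landing instruction. */
#define PROT_BASE 0x80000000u
#define PROT_END  0x90000000u

static bool is_landing_instruction(uint32_t insn)
{
    return insn == 0xd503245fu;   /* illustrative encoding of a landing pad */
}

/* Decide whether the fetched target instruction may be allocated into the
 * instruction cache. Targets inside the protected range are cached only if
 * they are the expected instruction type; everything else allocates normally. */
static bool may_allocate(uint64_t target_addr, uint32_t insn, bool is_branch_target)
{
    if (!is_branch_target)
        return true;
    if (target_addr < PROT_BASE || target_addr >= PROT_END)
        return true;
    return is_landing_instruction(insn);
}

int main(void)
{
    printf("%d\n", may_allocate(0x80001000, 0xd503245f, true));  /* 1: cached   */
    printf("%d\n", may_allocate(0x80001000, 0x12345678, true));  /* 0: bypassed */
    return 0;
}
```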
  • Patent number: 11327759
    Abstract: Managing the messages associated with memory pages stored in a main memory includes: receiving a message from outside the pipeline, and providing at least one low-level instruction to the pipeline for performing an operation indicated by the received message. Executing instructions in the pipeline includes executing a series of low-level instructions, where the series includes a first set of low-level instructions converted from a first high-level instruction and a second set of low-level instructions converted from a second high-level instruction.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: May 10, 2022
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: David Albert Carlson, Shubhendu Sekhar Mukherjee, Michael Bertone, David Asher, Daniel Dever, Bradley D. Dobbie, Thomas Hummel
  • Patent number: 11327755
    Abstract: In one embodiment, a processor comprises a decoder to decode a first instruction, the first instruction comprising an opcode and at least one parameter, the opcode to identify the first instruction as an instruction associated with an indirect branch, the at least one parameter indicative of whether the indirect branch is allowed; and circuitry to generate an error message based on the at least one parameter.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: May 10, 2022
    Assignee: Intel Corporation
    Inventors: Kekai Hu, Ke Sun, Rodrigo Branco
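The last entry describes a decode-time check: the instruction carries a parameter saying whether an indirect branch is allowed at that point, and the hardware raises an error when an indirect branch arrives where the parameter forbids it. A small software model of that check follows; the opcode value, parameter layout, and error path are assumptions made for the example.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed encoding: low 8 bits are the opcode, bit 8 is the "indirect branch
 * allowed" parameter. */
#define OPC_MASK         0xffu
#define OPC_BRANCH_CTRL  0x42u      /* illustrative opcode of the instruction */
#define ALLOW_INDIRECT   (1u << 8)

static bool indirect_allowed = true;   /* state set by the decoded parameter */

static void decode(uint32_t insn)
{
    if ((insn & OPC_MASK) == OPC_BRANCH_CTRL)
        indirect_allowed = (insn & ALLOW_INDIRECT) != 0;
}

/* Circuitry model: generate an error message when an indirect branch executes
 * while the decoded parameter says it is not allowed. */
static void on_indirect_branch(uint64_t target)
{
    if (!indirect_allowed)
        fprintf(stderr, "error: indirect branch to 0x%llx not allowed\n",
                (unsigned long long)target);
}

int main(void)
{
    decode(OPC_BRANCH_CTRL);                   /* parameter clear: disallow */
    on_indirect_branch(0x400000);              /* -> error message          */
    decode(OPC_BRANCH_CTRL | ALLOW_INDIRECT);  /* parameter set: allow      */
    on_indirect_branch(0x400000);              /* allowed: silent           */
    return 0;
}
```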