Patents Examined by Cheng-Yuan Tseng
  • Patent number: 10795843
    Abstract: A system includes a fabric switch including a motherboard, a baseboard management controller (BMC), a network switch configured to transport network signals, and a PCIe switch configured to transport PCIe signals; a midplane; and a plurality of device ports. Each of the plurality of device ports is configured to connect a storage device to the motherboard of the fabric switch over the midplane and carry the network signals and the PCIe signals over the midplane. The storage device is configurable in multiple modes based on a protocol established over a fabric connection between the system and the storage device.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: October 6, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sompong Paul Olarig, Fred Worley, Son Pham
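    Illustrative sketch: The abstract above (patent 10795843) describes a storage device whose operating mode follows from the protocol established over the fabric connection. The minimal Python sketch below illustrates that idea only; the protocol and mode names ("pcie", "ethernet", "nvme-direct", "nvme-of") are assumptions, not taken from the patent.

      # Hypothetical mode selection for a fabric-attached storage device.
      # Protocol and mode names are illustrative assumptions, not from the patent.
      PROTOCOL_TO_MODE = {
          "pcie": "nvme-direct",    # PCIe signals over the midplane -> local NVMe mode
          "ethernet": "nvme-of",    # network signals over the midplane -> NVMe-oF mode
      }

      def configure_device(negotiated_protocol: str) -> str:
          """Return the device mode implied by the protocol established over the fabric."""
          try:
              return PROTOCOL_TO_MODE[negotiated_protocol]
          except KeyError:
              raise ValueError(f"unsupported fabric protocol: {negotiated_protocol}")

      print(configure_device("ethernet"))  # -> nvme-of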
  • Patent number: 10789182
    Abstract: In one embodiment, a system includes a bus interface including a first processor, an indirect address storage storing a number of indirect addresses, and a direct address storage storing a number of direct addresses. The system also includes a number of devices connected to the bus interface and configured to analyze data. Each device of the number of devices includes a state machine engine. The bus interface is configured to receive a command from a second processor and to transmit an address for loading into the state machine engine of at least one device of the number of devices. The address includes a first address from the number of indirect addresses or a second address from the number of direct addresses.
    Type: Grant
    Filed: December 24, 2019
    Date of Patent: September 29, 2020
    Assignee: Micron Technology, Inc.
    Inventors: Debra Bell, Paul Glendenning, David R. Brown, Harold B Noyes
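    Illustrative sketch: Patent 10789182 describes a bus interface that transmits either an address from the indirect address storage or one from the direct address storage for loading into a device's state machine engine. The Python sketch below is a simplified illustration of that selection; the variable and function names are assumptions.

      # Hypothetical model: the bus interface holds two address stores and hands one
      # address to a device's state machine engine based on the command's addressing mode.
      direct_addresses = [0x1000, 0x2000, 0x3000]
      indirect_addresses = [0x8000, 0x8004]

      def address_for_command(index: int, use_indirect: bool) -> int:
          """Select the address to transmit for loading into a device's state machine engine."""
          store = indirect_addresses if use_indirect else direct_addresses
          return store[index]

      print(hex(address_for_command(1, use_indirect=False)))   # -> 0x2000
      print(hex(address_for_command(0, use_indirect=True)))    # -> 0x8000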
  • Patent number: 10783104
    Abstract: A memory request management system may include a memory device and a memory controller. The memory controller may include a read queue, a write queue, an arbitration circuit, a read credit allocation circuit, and a write credit allocation circuit. The read queue and write queue may store corresponding requests from request streams. The arbitration circuit may send requests from the read queue and write queue to the memory device based on locations of addresses indicated by the requests. The read credit allocation circuit may send an indication of an available read credit to a request stream in response to a read request from the request stream being sent from the read queue to the memory device. The write credit allocation circuit may send an indication of an available write credit to a request stream in response to a write request from the request stream being stored at the write queue.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: September 22, 2020
    Assignee: Apple Inc.
    Inventors: Gregory S. Mathews, Lakshmi Narasimha Murthy Nukala, Thejasvi Magudilu Vijayaraj, Sukulpa Biswas
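    Illustrative sketch: In patent 10783104, a read credit is returned to a request stream when its read request is sent from the read queue to the memory device, while a write credit is returned once the write request is stored in the write queue. The Python sketch below models only that asymmetry; class and method names are assumptions, not Apple's design.

      from collections import deque

      class CreditedController:
          """Toy model: read credits return on dispatch, write credits on enqueue."""
          def __init__(self, read_credits: int, write_credits: int):
              self.read_queue, self.write_queue = deque(), deque()
              self.read_credits, self.write_credits = read_credits, write_credits

          def submit_read(self, addr: int):
              assert self.read_credits > 0, "stream must hold a read credit"
              self.read_credits -= 1
              self.read_queue.append(addr)

          def submit_write(self, addr: int):
              assert self.write_credits > 0, "stream must hold a write credit"
              self.write_credits -= 1
              self.write_queue.append(addr)
              self.write_credits += 1          # credit returned once the write is queued

          def dispatch_read(self):
              addr = self.read_queue.popleft() # request sent to the memory device
              self.read_credits += 1           # credit returned only at dispatch time
              return addr

      mc = CreditedController(read_credits=2, write_credits=2)
      mc.submit_write(0x100)                   # write credit comes back immediately
      mc.submit_read(0x200)
      print(hex(mc.dispatch_read()), mc.read_credits, mc.write_credits)   # 0x200 2 2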
  • Patent number: 10776116
    Abstract: An instruction translation circuit, a processor circuit, and an executing method thereof are provided. The instruction translation circuit is adapted to be disposed in the processor circuit. The instruction translation circuit includes a formatted instruction queue, a first instruction translator, an instruction detection circuit, and a second instruction translator. The formatted instruction queue stores a plurality of formatted macro instructions. The first instruction translator translates a first formatted macro instruction of the formatted macro instructions and outputs a first micro instruction. When the instruction detection circuit determines that a trap bit in the first formatted macro instruction is set and a part of the first formatted macro instruction can be translated in advance, the instruction detection circuit outputs first trap information.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: September 15, 2020
    Assignee: Shanghai Zhaoxin Semiconductor Co., Ltd.
    Inventors: Chenchen Song, Xiaolong Fei, Aimin Ling, Yingbing Guan
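    Illustrative sketch: Patent 10776116 has the instruction detection circuit emit trap information when a formatted macro instruction's trap bit is set and part of it can be translated in advance. The Python sketch below is a behavioral illustration of that check; the field names are assumptions.

      from dataclasses import dataclass

      @dataclass
      class FormattedMacroInstruction:
          opcode: str
          trap_bit: bool
          pre_translatable: bool   # assumption: whether part of it can be translated in advance

      def detect(inst: FormattedMacroInstruction):
          """Return ('trap', info) or ('micro', translation) -- illustrative only."""
          if inst.trap_bit and inst.pre_translatable:
              return ("trap", f"trap info for {inst.opcode}")
          return ("micro", f"micro-op for {inst.opcode}")

      print(detect(FormattedMacroInstruction("ADD", trap_bit=True, pre_translatable=True)))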
  • Patent number: 10768938
    Abstract: A data processing system provides a branch forward instruction (BF) which has programmable parameters specifying a branch target address to be branched to and a branch point identifying a program instruction following the branch forward instruction which, when reached, is followed by a branch to the branch target address.
    Type: Grant
    Filed: March 21, 2017
    Date of Patent: September 8, 2020
    Assignee: ARM Limited
    Inventors: Thomas Christopher Grocutt, Richard Roy Grisenthwaite, Simon John Craske, François Christopher Jacques Botman, Bradley John Smith
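    Illustrative sketch: The BF instruction of patent 10768938 names a branch target address and a branch point; execution continues normally until the branch point is reached, and only then does control transfer to the target. The interpreter-style Python sketch below illustrates those semantics under an assumed program encoding, not ARM's implementation.

      def run(program, branch_point, branch_target):
          """Execute straight-line 'instructions' until branch_point, then jump to branch_target.

          program: list of strings; branch_point/branch_target: indices into program.
          This is an illustrative model of the BF semantics, not ARM's implementation.
          """
          pc, trace = 0, []
          while pc < len(program):
              trace.append(program[pc])
              # After executing the instruction at the branch point, branch forward.
              pc = branch_target if pc == branch_point else pc + 1
          return trace

      # BF was notionally executed earlier, programming branch_point=2, branch_target=5.
      print(run(["i0", "i1", "i2", "i3", "i4", "i5"], branch_point=2, branch_target=5))
      # -> ['i0', 'i1', 'i2', 'i5']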
  • Patent number: 10762032
    Abstract: An adaptive interface high availability storage device. In some embodiments, the adaptive interface high availability storage device includes: a rear storage interface connector; a rear multiplexer, connected to the rear storage interface connector; an adaptable circuit connected to the rear multiplexer; a front multiplexer, connected to the adaptable circuit; and a front storage interface connector, connected to the front multiplexer. The adaptive interface high availability storage device may be configured to operate in a single-port state or in a dual-port state. The adaptive interface high availability storage device may be configured: in the single-port state, to present a single-port host side storage interface according to a first storage protocol at the rear storage interface connector, and in the dual-port state, to present a dual-port host side storage interface according to the first storage protocol at the rear storage interface connector.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: September 1, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Sompong Paul Olarig
  • Patent number: 10762009
    Abstract: A data processing system includes a first memory system coupled to a host through a first external channel, a second memory system coupled to the host through a second external channel, and an internal channel suitable for coupling the first and second memory systems with each other. When the host read-requests first and second data from the first memory system, it transfers first external channel control information, selecting either sole use of the first external channel or simultaneous use of the first and second external channels, to the first and second memory systems. When the first external channel control information indicates simultaneous use, the first memory system outputs the first data through the first external channel and outputs the second data through the internal channel, and the second memory system outputs the second data, received through the internal channel, through the second external channel.
    Type: Grant
    Filed: May 24, 2019
    Date of Patent: September 1, 2020
    Assignee: SK hynix Inc.
    Inventor: Hae-Gi Choi
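    Illustrative sketch: The channel control in patent 10762009 is essentially a routing decision: under simultaneous use, the first memory system returns the first data on the first external channel and forwards the second data over the internal channel so the second memory system can return it on the second external channel. The Python sketch below illustrates that split with assumed names.

      def serve_read(control, data1, data2):
          """Return {channel: data} as the host would observe it (illustrative model)."""
          if control == "simultaneous":
              # data2 travels over the internal channel to memory system 2,
              # which forwards it on the second external channel.
              return {"external_ch1": data1, "external_ch2": data2}
          # Sole use of the first external channel: both items return serially on channel 1.
          return {"external_ch1": [data1, data2]}

      print(serve_read("simultaneous", "A", "B"))  # both external channels used in parallel
      print(serve_read("sole", "A", "B"))          # only the first external channel used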
  • Patent number: 10761848
    Abstract: Systems and methods for implementing an integrated circuit with core-level predication include a plurality of processing cores, wherein each of the plurality of cores includes a predicate stack defined by a plurality of single-bit registers that operate together based on one or more of logical connections and physical connections of the plurality of single-bit registers, and wherein the predicate stack of each of the plurality of processing cores includes a top-of-stack single-bit register of the plurality of single-bit registers having a bit entry value that controls whether select instructions to the given processing core of the plurality of processing cores are executed.
    Type: Grant
    Filed: February 18, 2020
    Date of Patent: September 1, 2020
    Assignee: quadric.io, Inc.
    Inventors: Nigel Drego, Ananth Durbha, Aman Sikka, Mrinalini Ravichandran, Daniel Firu, Veerbhan Kheterpal
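    Illustrative sketch: In patent 10761848, the bit held in the top-of-stack single-bit register gates whether predicated instructions execute on a given core. The Python sketch below is a toy model of such a predicate stack; the class and method names are assumptions, not quadric.io's interface.

      class PredicateStack:
          """Toy single-bit predicate stack; the top-of-stack bit gates predicated execution."""
          def __init__(self):
              self._bits = [True]          # assumption: execution enabled by default

          def push(self, bit: bool):
              self._bits.append(bit)

          def pop(self) -> bool:
              return self._bits.pop()

          @property
          def enabled(self) -> bool:
              return self._bits[-1]        # the top-of-stack register's bit entry value

      core = PredicateStack()
      core.push(False)                     # e.g. this core falls outside a conditional region
      if core.enabled:
          print("executing predicated instruction")
      else:
          print("instruction suppressed on this core")
      core.pop()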
  • Patent number: 10726329
    Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Instructions executed by the compute element include operand specifiers, some specifying a data structure register storing a data structure descriptor describing an operand as a fabric vector or a memory vector. The data structure descriptor further describes the memory vector as one of a one-dimensional vector, a four-dimensional vector, or a circular buffer vector. Optionally, the data structure descriptor specifies an extended data structure register storing an extended data structure descriptor. The extended data structure descriptor specifies parameters relating to a four-dimensional vector or a circular buffer vector.
    Type: Grant
    Filed: April 17, 2018
    Date of Patent: July 28, 2020
    Assignee: Cerebras Systems Inc.
    Inventors: Sean Lie, Michael Morrison, Srikanth Arekapudi, Gary R. Lauterbach, Michael Edwin James
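    Illustrative sketch: Patent 10726329 distinguishes fabric-vector operands from memory-vector operands, with memory vectors further described as one-dimensional, four-dimensional, or circular-buffer vectors, optionally via an extended descriptor. The Python dataclass sketch below mirrors that hierarchy; the field names are assumptions.

      from dataclasses import dataclass
      from enum import Enum
      from typing import Optional, Tuple

      class MemVecKind(Enum):
          ONE_D = 1
          FOUR_D = 2
          CIRCULAR = 3

      @dataclass
      class ExtendedDescriptor:
          # Parameters for 4-D or circular-buffer vectors (illustrative fields).
          dims: Tuple[int, ...] = ()
          buffer_base: int = 0
          buffer_length: int = 0

      @dataclass
      class DataStructureDescriptor:
          is_fabric_vector: bool
          mem_kind: Optional[MemVecKind] = None
          extended: Optional[ExtendedDescriptor] = None   # only needed for 4-D / circular

      dsd = DataStructureDescriptor(
          is_fabric_vector=False,
          mem_kind=MemVecKind.FOUR_D,
          extended=ExtendedDescriptor(dims=(2, 3, 4, 5)),
      )
      print(dsd.mem_kind, dsd.extended.dims)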
  • Patent number: 10725783
    Abstract: According to one or more embodiments, an example computer-implemented method for executing one or more out-of-order instructions by a processing unit, includes decoding an instruction to be executed, and based on a determination that the instruction is a store instruction, identifying a split load-hit-store (LHS) table for the store instruction, wherein an LHS table of the processing unit includes multiple split LHS tables. Identifying the split LHS table includes determining, for the store instruction, a first split LHS table by performing a mod operation using one or more operands from the store instruction, and adding one or more parameters of the store instruction in the first split LHS table by generating an ITAG for the store instruction. The method further includes dispatching the store instruction for execution to an issue queue with the ITAG.
    Type: Grant
    Filed: November 2, 2018
    Date of Patent: July 28, 2020
    Assignee: International Business Machines Corporation
    Inventors: Ehsan Fatehi, Richard J. Eickemeyer, Edmund J. Gieske
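    Illustrative sketch: The table selection in patent 10725783 reduces a store's operands with a mod operation to pick one of several split LHS tables and records the store's parameters there under a generated ITAG. The Python sketch below illustrates that step; the table layout and ITAG format are assumptions.

      from itertools import count

      NUM_SPLIT_TABLES = 4
      split_lhs_tables = [dict() for _ in range(NUM_SPLIT_TABLES)]
      _itag_source = count()               # monotonically increasing ITAGs (assumption)

      def record_store(base_reg: int, offset: int, data_reg: int) -> int:
          """Pick a split LHS table via a mod on the store's operands and record its parameters."""
          table_index = (base_reg + offset) % NUM_SPLIT_TABLES
          itag = next(_itag_source)
          split_lhs_tables[table_index][itag] = {"base": base_reg, "offset": offset, "src": data_reg}
          return itag                      # the store is dispatched to the issue queue with this ITAG

      print(record_store(base_reg=5, offset=16, data_reg=2))   # -> 0 (first ITAG issued)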
  • Patent number: 10725676
    Abstract: Apparatus and method for configuring a data storage device as a write once read many (WORM) drive. In some embodiments, the storage device has a rotatable disc with at least one data recording layer, and a data transducer that is selectively moveable with respect to the rotatable disc. The data transducer has a write element configured to write data to the data recording layer, and a read element configured to read data from the data recording layer. A control circuit is configured to physically disable the write element in response to a write element disable signal. The disabling of the write element prevents further writing of data to the data recording layer. The read element remains operative to continue reading data from the data recording layer after the write element has been disabled.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: July 28, 2020
    Assignee: Seagate Technology, LLC
    Inventor: Christopher Nicholas Allo
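    Illustrative sketch: Patent 10725676 turns a drive into a WORM device by disabling the write element while the read element keeps working. The Python sketch below models that one-way transition in software; the class and method names are assumptions.

      class WormDrive:
          """Toy model: once the write element is disabled, writes fail but reads continue."""
          def __init__(self):
              self._media = {}
              self._write_enabled = True

          def disable_write_element(self):
              self._write_enabled = False          # irreversible in the patented device

          def write(self, lba: int, data: bytes):
              if not self._write_enabled:
                  raise PermissionError("write element disabled: drive is write once read many")
              self._media[lba] = data

          def read(self, lba: int) -> bytes:
              return self._media[lba]

      d = WormDrive()
      d.write(0, b"archive")
      d.disable_write_element()
      print(d.read(0))                             # reads still work after disabling writes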
  • Patent number: 10725944
    Abstract: Implementations are provided herein for systems, methods, and a non-transitory computer product configured to analyze an input/output (IO) pattern for a data storage system, to identify an application type based on the IO pattern, and to select optimal deduplication and compression configurations based on the application type. The teachings herein facilitate machine learning of various metrics and the interrelations between these metrics, such as past IO patterns, application types, deduplication configurations, compression configurations, and overall system performance. These metrics and interrelations can be stored in a data lake. In some embodiments, data objects can be segmented in order to optimize configurations with more granularity. In additional embodiments, predictive techniques are used to select deduplication and compression configurations.
    Type: Grant
    Filed: March 6, 2020
    Date of Patent: July 28, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Nickolay Dalmatov, Kirill Bezugly
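    Illustrative sketch: At its simplest, the flow in patent 10725944 classifies an observed IO pattern into an application type and then looks up the deduplication and compression configuration associated with that type. The Python sketch below uses an assumed classification rule and assumed configuration names purely for illustration.

      # Assumed, illustrative mapping from application type to data-reduction settings.
      CONFIG_BY_APP_TYPE = {
          "database": {"dedup": False, "compression": "lz4"},
          "vdi":      {"dedup": True,  "compression": "zstd"},
          "backup":   {"dedup": True,  "compression": "zstd-high"},
      }

      def classify_io_pattern(avg_io_size_kb: float, read_fraction: float) -> str:
          """Very rough stand-in for the learned classifier described in the patent."""
          if avg_io_size_kb >= 256:
              return "backup"
          if read_fraction >= 0.8:
              return "vdi"
          return "database"

      app = classify_io_pattern(avg_io_size_kb=8, read_fraction=0.5)
      print(app, CONFIG_BY_APP_TYPE[app])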
  • Patent number: 10719323
    Abstract: Disclosed embodiments relate to matrix compress/decompress instructions. In one example, a processor includes fetch circuitry to fetch a compress instruction having a format with fields to specify an opcode and locations of decompressed source and compressed destination matrices, decode circuitry to decode the fetched compress instruction, and execution circuitry, responsive to the decoded compress instruction, to: generate a compressed result according to a compress algorithm by compressing the specified decompressed source matrix by either packing non-zero-valued elements together and storing the matrix position of each non-zero-valued element in a header, or using fewer bits to represent one or more elements and using the header to identify matrix elements being represented by fewer bits; and store the compressed result to the specified compressed destination matrix.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: July 21, 2020
    Assignee: Intel Corporation
    Inventors: Dan Baum, Michael Espig, James Guilford, Wajdi K. Feghali, Raanan Sade, Christopher J. Hughes, Robert Valentine, Bret Toll, Elmoustapha Ould-Ahmed-Vall, Mark J. Charney, Vinodh Gopal, Ronen Zohar, Alexander F. Heinecke
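    Illustrative sketch: The first compression option in patent 10719323 packs non-zero elements together and records each element's matrix position in a header. The Python sketch below mirrors that format and its inverse; it illustrates the packing idea only, not Intel's instruction semantics.

      def compress(matrix):
          """Pack non-zero elements; the 'header' records each element's (row, col) position."""
          header, packed = [], []
          for r, row in enumerate(matrix):
              for c, value in enumerate(row):
                  if value != 0:
                      header.append((r, c))
                      packed.append(value)
          return header, packed

      def decompress(header, packed, shape):
          rows, cols = shape
          out = [[0] * cols for _ in range(rows)]
          for (r, c), value in zip(header, packed):
              out[r][c] = value
          return out

      m = [[0, 3, 0], [7, 0, 0]]
      h, p = compress(m)
      assert decompress(h, p, (2, 3)) == m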
  • Patent number: 10705998
    Abstract: A processing system comprising: multiple processor modules, each comprising a respective execution unit and memory; and an interconnect for exchanging data between different sets of the processor modules. A group of the processor modules operates in a series of BSP supersteps. For the exchange phase of each superstep, each receiving processor module that is to receive data from outside its own set is pre-programmed with a value representing the number of units of data to receive. Starting from the pre-programmed value, it then counts out the number of data units remaining to be received each time a data unit is received. Each receiving processor module is further arranged to perform an exchange synchronization whereby, before advancing from the exchange phase to the compute phase of the current superstep, the receiving processor module waits until no units of data remain to be received according to the count.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: July 7, 2020
    Assignee: Graphcore Limited
    Inventors: Daniel John Pelham Wilkinson, Alan Graham Alexander
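    Illustrative sketch: The receive-side synchronization in patent 10705998 is a pre-programmed countdown: a receiving processor module knows how many data units to expect from outside its set, decrements the count on each arrival, and advances from the exchange phase to the compute phase only when the count reaches zero. The Python sketch below illustrates that countdown with assumed names.

      class ReceivingModule:
          """Toy model of the pre-programmed exchange countdown."""
          def __init__(self, expected_units: int):
              self.remaining = expected_units      # pre-programmed before the exchange phase

          def on_data_unit(self, unit):
              self.remaining -= 1                  # count out one unit per arrival

          def may_enter_compute_phase(self) -> bool:
              return self.remaining == 0           # exchange synchronization condition

      module = ReceivingModule(expected_units=3)
      for unit in ("a", "b", "c"):
          module.on_data_unit(unit)
      assert module.may_enter_compute_phase()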
  • Patent number: 10705996
    Abstract: A first operation identifier is assigned to a first operation directed to a memory component, the first operation identifier having an entry in a first data structure that associates the first operation identifier with a first plurality of buffer identifiers. It is determined whether the first operation collides with a prior operation assigned a second operation identifier, the second operation identifier having an entry in the first data structure that associates the second operation identifier with a second plurality of buffer identifiers. It is determined whether the first operation is a read or a write operation. In response to determining that the first operation collides with the prior operation and that the first operation is a read operation, the first plurality of buffer identifiers are updated with a buffer identifier included in the second plurality of buffer identifiers.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: July 7, 2020
    Assignee: Micron Technology, Inc.
    Inventors: Lyle E. Adams, Mark Ish, Pushpa Seetamraju, Karl D. Schuh, Dan Tupy
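    Illustrative sketch: In patent 10705996, each operation identifier is associated with a set of buffer identifiers, and a read that collides with a prior operation takes over a buffer identifier from the prior operation's set rather than allocating a new buffer. The Python sketch below paraphrases that behavior; the data layout is an assumption.

      # First data structure: operation identifier -> list of buffer identifiers (assumed layout).
      op_buffers = {
          "op-1": [10, 11],          # a prior operation already holding buffers 10 and 11
      }

      def assign_operation(op_id, is_read, collides_with=None):
          """Bind buffer identifiers to a new operation, sharing on a read collision."""
          op_buffers[op_id] = []
          if is_read and collides_with is not None:
              # Read collision: reuse a buffer identifier from the prior operation's set.
              op_buffers[op_id].append(op_buffers[collides_with][0])
          else:
              op_buffers[op_id].append(max(b for bufs in op_buffers.values() for b in bufs) + 1)
          return op_buffers[op_id]

      print(assign_operation("op-2", is_read=True, collides_with="op-1"))   # -> [10]
      print(assign_operation("op-3", is_read=False))                        # -> [12]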
  • Patent number: 10705995
    Abstract: The following description is directed to a configurable logic platform. In one example, a configurable logic platform includes host logic and a plurality of reconfigurable logic regions. Each reconfigurable region can include hardware that is configurable to implement an application logic design. The host logic can be used for separately encapsulating each of the reconfigurable logic regions. The host logic can include a plurality of data path functions where each data path function can include a layer for formatting data transfers between a host interface and the application logic of a corresponding reconfigurable logic region. The host interface can be configured to apportion bandwidth of the data transfers generated by the application logic of the respective reconfigurable logic regions.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: July 7, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Asif Khan, Islam Mohamed Hatem Abdulfattah Mohamed Atta, Robert Michael Johnson, Mark Bradley Davis, Christopher Joseph Pettey, Nafea Bshara, Erez Izenberg
  • Patent number: 10705999
    Abstract: A processing system comprising: multiple processor modules, each comprising a respective execution unit and memory; and an interconnect for exchanging data between different sets of the processor modules. A group of the processor modules operates in a series of steps. For an exchange phase of each step by each receiving processor module that is to receive data from outside its own set, the receiving module is pre-programmed with a value representing the number of units of data to receive. Starting from the pre-programmed value, it then counts out the number of data units remaining to be received each time a data unit is received. Each receiving processor module is further arranged to perform an exchange synchronization whereby, before advancing from the exchange phase to the compute phase of the current step, the receiving processor module waits until no units of data remain to be received according to the count.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: July 7, 2020
    Assignee: Graphcore Limited
    Inventors: Daniel John Pelham Wilkinson, Alan Graham Alexander
  • Patent number: 10684670
    Abstract: Methods and apparatus for an inter-processor communication (IPC) link between two (or more) independently operable processors. In one aspect, the IPC protocol is based on a “shared” memory interface for run-time processing (i.e., the independently operable processors each share (either virtually or physically) a common memory interface). In another aspect, the IPC communication link is configured to support a host driven boot protocol used during a boot sequence to establish a basic communication path between the peripheral and the host processors. Various other embodiments described herein include sleep procedures (as defined separately for the host and peripheral processors), and error handling.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: June 16, 2020
    Assignee: Apple Inc.
    Inventors: Karan Sanghi, Saurabh Garg, Haining Zhang
  • Patent number: 10678546
    Abstract: Instructions and logic provide SIMD vector population count functionality. Some embodiments store in each data field of a portion of n data fields of a vector register or memory vector, at least two bits of data. In a processor, a SIMD instruction for a vector population count is executed, such that for that portion of the n data fields in the vector register or memory vector, the occurrences of binary values equal to each of a first one or more predetermined binary values, are counted and the counted occurrences are stored, in a portion of a destination register corresponding to the portion of the n data fields in the vector register or memory vector, as a first one or more counts corresponding to the first one or more predetermined binary values.
    Type: Grant
    Filed: January 11, 2019
    Date of Patent: June 9, 2020
    Assignee: Intel Corporation
    Inventors: Terence Sych, Elmoustapha Ould-Ahmed-Vall
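    Illustrative sketch: Restated in scalar form, the instruction in patent 10678546 counts, across data fields that each hold at least two bits, how many fields equal each of one or more predetermined binary values, and stores those counts in the destination. The Python sketch below assumes 2-bit fields and a two-value set for illustration.

      def vector_population_count(fields, predetermined_values):
          """Count occurrences of each predetermined value across the given 2-bit data fields."""
          counts = {v: 0 for v in predetermined_values}
          for field in fields:
              if field in counts:
                  counts[field] += 1
          return counts

      # Sixteen 2-bit fields (values 0..3); count occurrences of the values 0b01 and 0b10.
      fields = [0b01, 0b10, 0b10, 0b00, 0b11, 0b01, 0b10, 0b01,
                0b00, 0b01, 0b11, 0b10, 0b01, 0b00, 0b10, 0b01]
      print(vector_population_count(fields, predetermined_values=[0b01, 0b10]))
      # -> {1: 6, 2: 5}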
  • Patent number: 10678506
    Abstract: An apparatus and a method of operating the apparatus are provided for performing a comparison operation to match a given sequence of values within an input vector. Instruction decoder circuitry is responsive to a string match instruction specifying a segment of an input vector to generate control signals to control the data processing circuitry to perform a comparison operation. The comparison operation determines a comparison value indicative of whether each input element of a required set of consecutive input elements of the segment has a value which matches a respective value in consecutive reference elements of the reference data item. A plurality of comparison operations may be performed to determine a match vector corresponding to the segment of the input vector to indicate the start position of the substring in the input vector. A string match instruction, as well as simulator virtual machine implementations, are also provided.
    Type: Grant
    Filed: August 1, 2017
    Date of Patent: June 9, 2020
    Assignee: ARM Limited
    Inventors: Alejandro Martinez Vicente, Jesse Garrett Beu, Mbou Eyole, Timothy Hayes
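    Illustrative sketch: The comparison in patent 10678506 checks, for each candidate start position in a segment of the input vector, whether the consecutive input elements match the consecutive reference elements, producing a match vector that marks substring start positions. The Python sketch below illustrates that idea; it is not ARM's instruction encoding.

      def string_match(segment, reference):
          """Return a match vector: 1 where `reference` starts in `segment`, else 0."""
          n, m = len(segment), len(reference)
          match_vector = [0] * n
          for start in range(n - m + 1):
              # Compare the required set of consecutive input elements against the reference.
              if all(segment[start + i] == reference[i] for i in range(m)):
                  match_vector[start] = 1
          return match_vector

      print(string_match(list("abcabcab"), list("cab")))   # -> [0, 0, 1, 0, 0, 1, 0, 0]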