Varying Address Bit-length Or Size Patents (Class 711/212)
  • Patent number: 11960897
    Abstract: In some implementations, a processor includes a plurality of parallel instruction pipes and a register file that includes at least one shared read port configured to be shared across multiple pipes of the plurality of parallel instruction pipes. Control logic controls multiple parallel instruction pipes to read from the at least one shared read port. In certain examples, the at least one shared register file read port is coupled as a single read port for one of the parallel instruction pipes and as a shared register file read port for a plurality of other parallel instruction pipes.
    Type: Grant
    Filed: July 30, 2021
    Date of Patent: April 16, 2024
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Michael Estlick, Erik Swanson, Eric Dixon, Todd Baumgartner
  • Patent number: 11954048
    Abstract: An apparatus has memory management circuitry to control access to a memory system based on access control information defined in table entries of a table structure comprising at least two levels of access control table. Table accessing circuitry accesses the table structure to obtain the access control information corresponding to a target address. For a given access control table at a given level of the table structure other than a starting level, the table accessing circuitry selects a selected table entry of the given access control table corresponding to the target address, based on an offset portion of the target address. A size of the offset portion is selected based on a variable nesting control parameter specified in a table entry of a higher-level access control table at a higher level of the table structure than the given access control table.
    Type: Grant
    Filed: April 14, 2021
    Date of Patent: April 9, 2024
    Assignee: Arm Limited
    Inventors: Jason Parker, Yuval Elad, Alexander Donald Charles Chadwick, Andrew Brookfield Swaine, Carlos Garcia-Tobin
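The variable-nesting walk described in the abstract above can be sketched in a few lines: the number of address bits used to index a lower-level table is read from a field of the higher-level entry rather than being fixed. The table layout, field names, and bit widths below are illustrative assumptions, not the patent's encoding.

```python
# Sketch: two-level access-control table walk where the higher-level entry
# carries a nesting parameter that sets how many address bits index the
# lower-level table. All widths and layouts are assumed for illustration.

def walk(target_addr, level0_table, level0_shift=24, level0_bits=8):
    # Level-0 index comes from a fixed slice of the target address.
    l0_index = (target_addr >> level0_shift) & ((1 << level0_bits) - 1)
    l0_entry = level0_table[l0_index]

    # The higher-level entry specifies how many bits the next level consumes.
    nesting_bits = l0_entry["nesting_bits"]      # variable nesting control parameter
    level1_table = l0_entry["next_table"]

    # Offset portion of the target address, sized by the nesting parameter.
    l1_shift = level0_shift - nesting_bits
    l1_index = (target_addr >> l1_shift) & ((1 << nesting_bits) - 1)
    return level1_table[l1_index]                # access control information

# Example: a level-0 entry covering addresses 0x01xxxxxx, whose level-1 table
# is indexed by 4 address bits (16 entries).
level1 = [{"perm": "rw" if i % 2 == 0 else "ro"} for i in range(16)]
level0 = {0x01: {"nesting_bits": 4, "next_table": level1}}
print(walk(0x01B23456, level0))   # bits [23:20] = 0xB -> level1[11]
```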
  • Patent number: 11907720
    Abstract: There is provided a data processing apparatus comprising a plurality of registers, each of the registers having data bits to store data and metadata bits to store metadata. Each of the registers is adapted to operate in a metadata mode in which the metadata bits and the data bits are valid, and a data mode in which the data bits are valid and the metadata bits are invalid. Mode bit storage circuitry indicates whether each of the registers is in the data mode or the metadata mode. Execution circuitry is responsive to a memory operation that is a store operation on one or more given registers.
    Type: Grant
    Filed: November 26, 2020
    Date of Patent: February 20, 2024
    Assignee: Arm Limited
    Inventors: Bradley John Smith, Thomas Christopher Grocutt
  • Patent number: 11899624
    Abstract: A system and method for random-access manipulation of compacted data files, utilizing a reference codebook, a random-access engine, a data deconstruction engine, and a data reconstruction engine. The system may receive a data query pertaining to a data read or data write request, wherein the data file to be read from or written to is a compacted data file. A random-access engine may facilitate data manipulation processes by accessing a reference codebook associated with the compacted data file, a frequency table used to construct the reference codebook, and data query details. A data read request is supported by random-access search capabilities that may enable the locating and decoding of the bits corresponding to data query details. A random-access engine facilitates data write processes. The random-access engine may encode the data to be written, insert the encoded data into a compacted data file, and update the codebook as needed.
    Type: Grant
    Filed: December 9, 2022
    Date of Patent: February 13, 2024
    Assignee: ATOMBEAM TECHNOLOGIES INC.
    Inventors: Aliasghar Riahi, Joshua Cooper, Mojgan Haddad, Charles Yeomans
  • Patent number: 11893596
    Abstract: An open loop cashless payment system incents a consumer account holder to transact in a physical store with a merchant who agrees to make an auditable donation to a charity when the transaction is conducted on an account issued to the consumer account holder. The consumer account holder may direct the donation to a specific charity within a predetermined geographically determined community where the transaction was physically conducted. The consumer account holder can register an obligation to make a donation matching that of the merchant, where the consumer account holder's donation is initially paid by the consumer account's issuer for reimbursement by the consumer account holder to the issuer after the consumer account holder receives their account statement. The merchant's acquirer, the issuer, and a transaction handler for the issuer and acquirer may also make donations as directed by the consumer account holder.
    Type: Grant
    Filed: November 21, 2022
    Date of Patent: February 6, 2024
    Assignee: EDATANETWORKS INC
    Inventors: Terrance Patrick Tietzen, Matthew Arnold Macpherson Bates
  • Patent number: 11853754
    Abstract: Provided is a mask operation method for an explicit independent mask register in a GPU. In the method, each GPU hardware thread is able to access its own eight 128-bit-wide independent mask registers, recorded as $m0-$m7. For the mask operation instructions of the explicit independent mask register in the GPU, four groups of mask operation instructions are available to the user, which respectively realize a reduction operation, an extension operation, and a logic operation on the mask register, as well as data movement between the mask register and a general vector register.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: December 26, 2023
    Inventors: Chengxin Yin, Lei Wang
  • Patent number: 11775298
    Abstract: Methods for frequency scaling for per-core accelerator assignments and associated apparatus. A processor includes a CPU (central processing unit) having multiple cores that can be selectively configured to support frequency scaling and instruction extensions. Under this approach, some cores can be configured to support a selective set of AVX instructions (such as AVX3/5G-ISA instructions) and/or AMX instructions, while other cores are configured to not support these AVX/AMX instructions. In one aspect, the selective AVX/AMX instructions are implemented in one or more ISA extension units that are separate from the main processor core (or otherwise comprises a separate block of circuitry in a processor core) that can be selectively enabled or disabled. This enables cores having the separate unit(s) disabled to consume less power and/or operate at higher frequencies, while supporting the selective AVX/AMX instructions using other cores.
    Type: Grant
    Filed: July 20, 2020
    Date of Patent: October 3, 2023
    Assignee: Intel Corporation
    Inventors: Stephen T. Palermo, Srihari Makineni, Shubha Bommalingaiahnapallya, Neelam Chandwani, Rany T. Elsayed, Udayan Mukherjee, Lokpraveen Mosur, Adwait Purandare
  • Patent number: 11734209
    Abstract: Implementations of the disclosure provide a processing device comprising: an interrupt managing circuit to receive an interrupt message directed to an application container from an assignable interface (AI) of an input/output (I/O) device. The interrupt message comprises an address space identifier (ASID), an interrupt handle and a flag to distinguish the interrupt message from a direct memory access (DMA) message. Responsive to receiving the interrupt message, a data structure associated with the interrupt managing circuit is identified. An interrupt entry from the data structure is selected based on the interrupt handle. It is determined that the ASID associated with the interrupt message matches an ASID in the interrupt entry. Thereupon, an interrupt in the interrupt entry is forwarded to the application container.
    Type: Grant
    Filed: December 14, 2021
    Date of Patent: August 22, 2023
    Assignee: Intel Corporation
    Inventors: Sanjay Kumar, Rajesh M. Sankaran, Philip R. Lantz, Utkarsh Y. Kakaiya, Kun Tian
  • Patent number: 11699492
    Abstract: A storage system includes: a memory controller which provides a clock signal; a buffer which receives the clock signal and re-drives the clock signal, the buffer including a sampler which receives a data signal and a data strobe signal regarding the data signal, and which outputs a data stream; and a nonvolatile memory, including: a first duty cycle corrector, which receives the clock signal and outputs a corrected clock signal by performing a first duty correction operation on the clock signal; and a data strobe signal generator, which generates the data strobe signal based on the corrected clock signal and provides the data strobe signal to the buffer. The buffer receives the data strobe signal output from the nonvolatile memory, senses a duty ratio of the data strobe signal input to the sampler, and performs a second duty correction operation on the duty ratio of the input data strobe signal.
    Type: Grant
    Filed: July 19, 2021
    Date of Patent: July 11, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: TongSung Kim, Dae Hoon Na, Jung-June Park, Dong Ho Shin, Byung Hoon Jeong, Young Min Jo
  • Patent number: 11616833
    Abstract: An information processing apparatus includes an extracting unit that extracts, based on attribute information of an object necessary for using a service provided by a service system and information related to a user of the service system, a candidate for the user to be invited to the service, out of users who are yet to use the service.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: March 28, 2023
    Assignee: FUJIFILM Business Innovation Corp.
    Inventor: Akio Yamashita
  • Patent number: 11604740
    Abstract: Methods and systems disclosed herein describe obfuscating plaintext cryptographic material stored in memory. A random location in an obfuscation buffer may be selected for each byte of the plaintext cryptographic material. The location of each byte of the plaintext cryptographic material may be stored in a position tracking buffer. To recover the scrambled plaintext cryptographic material, the location of each byte of the plaintext cryptographic material may be read from the position tracking buffer. Each byte of the plaintext cryptographic material may then be read from the obfuscation buffer and written to a temporary buffer. When each byte of the plaintext cryptographic material is recovered, the plaintext cryptographic material may be used to perform one or more cryptographic operations. The scrambling techniques described herein reduce the likelihood of a malicious user recovering plaintext cryptographic material while stored in memory.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: March 14, 2023
    Assignee: Capital One Services, LLC
    Inventors: Hao Cheng, Rohit Joshi, Lan Xie
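The scrambling scheme in the abstract above can be illustrated with a short sketch, assuming the obfuscation buffer is simply larger than the secret and slots are chosen at random; the buffer size and randomness source here are assumptions, not details from the patent.

```python
import secrets

def scramble(plaintext: bytes, buffer_size: int):
    """Scatter each plaintext byte to a random slot in an obfuscation buffer,
    remembering each byte's slot in a position-tracking buffer."""
    assert buffer_size >= len(plaintext)
    obfuscation = bytearray(secrets.token_bytes(buffer_size))  # random filler
    free_slots = list(range(buffer_size))
    positions = []                                             # position-tracking buffer
    for byte in plaintext:
        slot = free_slots.pop(secrets.randbelow(len(free_slots)))
        obfuscation[slot] = byte
        positions.append(slot)
    return obfuscation, positions

def recover(obfuscation: bytearray, positions: list) -> bytes:
    """Rebuild the plaintext into a temporary buffer by following the
    recorded positions."""
    temp = bytearray(len(positions))
    for i, slot in enumerate(positions):
        temp[i] = obfuscation[slot]
    return bytes(temp)

key = b"example-key-material"
buf, pos = scramble(key, buffer_size=256)
assert recover(buf, pos) == key
```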
  • Patent number: 11550735
    Abstract: Memory access control circuitry controls handling of a memory access request based on at least one memory access control attribute associated with a region of address space including the target address. The memory access control circuitry comprises: lookup circuitry comprising a plurality of sets of comparison circuitry, each set of comparison circuitry to detect, based on at least one address-region-indicating parameter associated with a corresponding region of address space, whether the target address is within the corresponding region of address space; region mismatch prediction circuitry to provide a region mismatch prediction indicative of which of the sets of comparison circuitry is predicted to detect a region mismatch condition; and comparison disabling circuitry to disable at least one of the sets of comparison circuitry that is predicted by the region mismatch prediction circuitry to detect the region mismatch condition for the target address.
    Type: Grant
    Filed: September 27, 2021
    Date of Patent: January 10, 2023
    Assignee: Arm Limited
    Inventors: François Christopher Jacques Botman, Thomas Christopher Grocutt, Jack William Derek Andrew
  • Patent number: 11552653
    Abstract: A quantum decoder receives a syndrome from a quantum measurement circuit and performs various decoding operations for processing-efficient fault detection. The decoding operations include generating a decoding graph from the syndrome and growing a cluster around each one of multiple check nodes in the graph that correspond to a non-trivial value in the syndrome. Each cluster includes the check node corresponding to the non-trivial value and a set of neighboring nodes positioned within a distance of d edge-lengths from the check node. Following cluster growth, the decoder determines if, for each cluster, there exists a solution set internal to the cluster that fully explains the non-trivial syndrome bit for the cluster. If so, the decoder identifies and returns at least one solution set that fully explains the set of non-trivial bits in the syndrome.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: January 10, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Nicolas Guillaume Delfosse, Michael Edward Beverland, Vivien Londe, Jeongwan Haah
  • Patent number: 11537527
    Abstract: Methods, systems, and devices for dynamic logical page sizes for memory devices are described. A memory device may use an initial set of logical pages each having a same size and one or more logical-to-physical (L2P) tables to map logical addresses of the logical pages to the physical addresses of corresponding physical pages. As commands are received from a host device, the memory device may dynamically split a logical page to introduce smaller logical pages if the host device accesses data in chunk sizes smaller than the size of the logical page that is split. The memory device may maintain one or more additional L2P tables for each smaller logical page size that is introduced, along with one or more pointer tables to map between L2P tables and entries for larger logical page sizes and L2P tables and entries associated with smaller logical page sizes.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: December 27, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Sharath Chandra Ambula, David Aaron Palmer, Venkata Kiran Kumar Matturi, Sri Ramya Pinisetty, Sushil Kumar
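A hedged sketch of the page-splitting idea above: a coarse L2P table maps large logical pages, and an entry can be converted into a pointer into a finer-grained L2P table once the host starts accessing smaller chunks. The granularities and table shapes are assumptions chosen for illustration.

```python
LARGE = 4096   # initial logical page size (assumed)
SMALL = 1024   # finer page size introduced on split (assumed)

l2p_large = {}   # large logical page number -> physical address, or ("split", key)
l2p_small = {}   # pointer-table key -> list of physical addresses per small page

def map_page(lpn_large, phys):
    l2p_large[lpn_large] = phys

def split(lpn_large):
    """Replace a large-page mapping with a pointer to a small-page L2P table."""
    base_phys = l2p_large[lpn_large]
    key = len(l2p_small)
    l2p_small[key] = [base_phys + i * SMALL for i in range(LARGE // SMALL)]
    l2p_large[lpn_large] = ("split", key)

def translate(logical_addr):
    lpn_large, offset = divmod(logical_addr, LARGE)
    entry = l2p_large[lpn_large]
    if isinstance(entry, tuple):                  # follow the pointer table
        key = entry[1]
        small_idx, small_off = divmod(offset, SMALL)
        return l2p_small[key][small_idx] + small_off
    return entry + offset

map_page(0, 0x10000)
print(hex(translate(0x0800)))   # 0x10800, via the large-page mapping
split(0)                        # host now accesses 1 KiB chunks
print(hex(translate(0x0800)))   # still 0x10800, via the small-page table
```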
  • Patent number: 11436525
    Abstract: A software-defined radio system may include a radio frequency front end connected to a high performance computing processor comprised of a central processing unit (CPU), a graphics processing unit (GPU), and a shared memory between the CPU and GPU. The software-defined radio system may incorporate a signal processing unit between the radio frequency front end and the high performance computing processor. Additionally, the software-defined radio system may be configured to create a ring buffer in a shared memory between the CPU and GPU and directly store digital signal data in the ring buffer. The software-defined radio system may be used to implement and train machine learning algorithms and transmit digital signals.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: September 6, 2022
    Assignee: DEEPWAVE DIGITAL, INC.
    Inventors: John D. Ferguson, Jr., William M. Kirschner, Peter E. Witkowski
  • Patent number: 11245415
    Abstract: Methods, systems, and techniques for data compression. A cluster fingerprint of an uncompressed data block is determined to correspond to a cluster fingerprint of a base block stored in a base array. This determining involves looking up the cluster fingerprint of the base block from the base array using the cluster fingerprint of the uncompressed data block. The difference between the uncompressed data block and the base block is determined, and a compressed data block is encoded using this difference. The compressed data block is then stored in a data array.
    Type: Grant
    Filed: March 12, 2021
    Date of Patent: February 8, 2022
    Assignee: THE UNIVERSITY OF BRITISH COLUMBIA UNIVERSITY-INDUSTRY LIAISON OFFICE
    Inventors: Amin Ghasemazar, Prashant Jayaprakash Nair, Mieszko Lis
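A toy version of the base-block scheme above, assuming the "difference" is simply the list of byte positions where a block diverges from its base and using a crude prefix-based stand-in for the cluster fingerprint; the real fingerprinting and encoding are more involved.

```python
base_array = {}   # cluster fingerprint -> base block

def cluster_fingerprint(block: bytes) -> bytes:
    # Crude stand-in for the patent's clustering fingerprint: blocks sharing
    # a 4-byte prefix land in the same cluster.
    return block[:4]

def compress(block: bytes):
    fp = cluster_fingerprint(block)
    base = base_array.setdefault(fp, block)        # first arrival becomes the base
    # The compressed block is just the positions/bytes that differ from the base.
    diff = [(i, b) for i, (a, b) in enumerate(zip(base, block)) if a != b]
    return fp, diff

def decompress(fp, diff) -> bytes:
    out = bytearray(base_array[fp])
    for i, b in diff:
        out[i] = b
    return bytes(out)

blk_a = b"HDR0" + bytes(60)
blk_b = b"HDR0" + bytes(59) + b"\x07"              # differs from blk_a in one byte
fp_a, diff_a = compress(blk_a)                     # empty diff: blk_a becomes the base
fp_b, diff_b = compress(blk_b)                     # one-entry diff against the base
assert decompress(fp_b, diff_b) == blk_b
```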
  • Patent number: 10970086
    Abstract: Short pointers are used for more efficient utilization of random access memory (RAM) in resource-constrained embedded systems. Such a system includes a processor having an address space; and a RAM that stores variables used by the processor, including pointer variables. The processor has an X-bit architecture, and a standard C/C++ (native) pointer variable occupies X bits in RAM. In such a system, select pointers are stored in RAM in a form of short pointer variables as respective Y-bit segments, instead of as standard C/C++ pointer variables that would be stored as X-bit segments, where Y is less than X. Select short pointers are converted to respective native pointers to perform an operation in the memory system for which pointers are used. After the operation is performed, each native pointer is converted back to the corresponding short pointer and stored in the RAM.
    Type: Grant
    Filed: January 8, 2020
    Date of Patent: April 6, 2021
    Assignee: SK hynix Inc.
    Inventor: Ihar Miraniuk
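A rough model of the short-pointer idea above, written in Python although the patent targets C/C++ on embedded processors: native X-bit pointers are stored in RAM as Y-bit offsets from a known base and widened back before use. The values of X, Y, and the base region are illustrative assumptions.

```python
X_BITS = 32                 # native pointer width of the processor (assumed)
Y_BITS = 16                 # short pointer width as stored in RAM (assumed)
HEAP_BASE = 0x2000_0000     # region the short pointers can reach (assumed)

def to_short(native_ptr: int) -> int:
    """Compress a native pointer to a Y-bit offset from HEAP_BASE."""
    offset = native_ptr - HEAP_BASE
    assert 0 <= offset < (1 << Y_BITS), "target lies outside the short-pointer range"
    return offset

def to_native(short_ptr: int) -> int:
    """Widen a stored Y-bit short pointer back to a full native pointer."""
    return HEAP_BASE + (short_ptr & ((1 << Y_BITS) - 1))

p = 0x2000_1A40
s = to_short(p)             # occupies 16 bits in RAM instead of 32
assert to_native(s) == p
```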
  • Patent number: 10846090
    Abstract: A machine instruction is provided that includes an opcode field to provide an opcode, the opcode to identify a perform pseudorandom number operation, and a register field to be used to identify a register, the register to specify a location in memory of a first operand to be used. The machine instruction is executed, and execution includes for each block of memory of one or more blocks of memory of the first operand, generating a hash value using a 512-bit secure hash technique and at least one seed value of a parameter block of the machine instruction; and storing at least a portion of the generated hash value in a corresponding block of memory of the first operand, the generated hash value being at least a portion of a pseudorandom number.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: November 24, 2020
    Assignee: International Business Machines Corporation
    Inventors: Dan F. Greiner, Bernd Nerz, Tamas Visegrady
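A loose analogue of the described instruction using Python's hashlib, assuming each 64-byte block of the first operand is filled from a SHA-512 digest over the seed material and a running block counter; the instruction's actual parameter-block layout and chaining are not reproduced here.

```python
import hashlib

def fill_pseudorandom(num_blocks: int, seed: bytes) -> bytes:
    """Fill `num_blocks` 64-byte blocks of the first operand with output
    derived from a 512-bit secure hash of the seed material."""
    out = bytearray()
    for block_index in range(num_blocks):
        digest = hashlib.sha512(seed + block_index.to_bytes(8, "big")).digest()
        out += digest            # store the generated hash value in the block
    return bytes(out)

data = fill_pseudorandom(4, seed=b"parameter-block-seed")
print(len(data), data[:8].hex())   # 256 bytes of pseudorandom output
```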
  • Patent number: 10789209
    Abstract: In one aspect, the invention is directed to a method of expanding storage for filesystems in a fine-grained, scalable manner. The method includes determining, by a file server, a run bias for a span, wherein the run bias indicates a number of contiguous chunks of memory associated with an entry in an address translation table for a filesystem. The method includes receiving, by the file server, a request for an expansion of memory for the filesystem. The method includes scoring, by a chunk allocator, each stripe set in a group of stripe sets based at least in part on a number of unused chunks on the stripe set and a number of chunks on the stripe set being used by the filesystem. The method includes allocating, by the chunk allocator, a chunk on the stripe set with the highest score, wherein the allocated chunk lies outside of runs reserved for other filesystems.
    Type: Grant
    Filed: February 1, 2013
    Date of Patent: September 29, 2020
    Assignee: HITACHI VANTARA LLC
    Inventor: Mark Stephen Laker
  • Patent number: 10754658
    Abstract: An apparatus includes an arithmetic circuit that performs a pipeline operation on first data as an input; and a determination circuit that determines, based on pipeline operation results, whether to perform the pipeline operation by inputting, to the arithmetic circuit, second data different from the first data, wherein when the determination circuit has determined that the pipeline operation is to be performed by inputting the second data to the arithmetic circuit, the arithmetic circuit suspends the pipeline operation using the second data thereof, and performs the pipeline operation with the first data input until the second data is input, and when the second data is input, the arithmetic circuit resumes the pipeline operation using the second data.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: August 25, 2020
    Assignee: FUJITSU LIMITED
    Inventor: Yutaka Tamiya
  • Patent number: 10613862
    Abstract: An instruction architecturally defined to be a looping instruction, in which a loop is configured to repeat a plurality of times to perform an operation on up to a defined number of units of data, is to be processed. The processing includes replicating a selected character a number of times to provide a replicated selected character, and using a sequence of operations to perform the operation, the sequence of operations replacing the loop and providing a non-looping sequence to perform the operation on up to the defined number of units of data. The sequence of operations is configured to repeat one or more times, and to terminate based on the replicated selected character.
    Type: Grant
    Filed: March 3, 2017
    Date of Patent: April 7, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Michael K. Gschwind
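The replicate-then-operate pattern above resembles the classic SWAR idiom sketched below: a selected character is replicated across a word so that eight bytes can be tested with a fixed straight-line sequence instead of a byte-by-byte loop. This is an illustration of the general idea, not the patent's instruction sequence.

```python
def replicate(ch: int) -> int:
    """Replicate one byte into all eight byte lanes of a 64-bit word."""
    return ch * 0x0101010101010101

def find_byte(chunk: bytes, ch: int) -> int:
    """Return the index of `ch` within an 8-byte chunk, or -1, using a fixed
    straight-line sequence instead of a per-byte loop: XOR against the
    replicated character zeroes the matching lane, then a zero-byte detector
    flags that lane."""
    word = int.from_bytes(chunk, "little")
    x = word ^ replicate(ch)
    flags = (x - 0x0101010101010101) & ~x & 0x8080808080808080
    if flags == 0:
        return -1
    lowest = flags & -flags                 # keep only the first matching lane
    return (lowest.bit_length() - 8) // 8

print(find_byte(b"abcdefgh", ord("d")))     # 3
print(find_byte(b"abcdefgh", ord("z")))     # -1
```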
  • Patent number: 10592421
    Abstract: Instructions and logic provide advanced paging capabilities for secure enclave page caches. Embodiments include multiple hardware threads or processing cores, a cache to store secure data for a shared page address allocated to a secure enclave accessible by the hardware threads. A decode stage decodes a first instruction specifying said shared page address as an operand, and execution units mark an entry corresponding to an enclave page cache mapping for the shared page address to block creation of a new translation for either of said first or second hardware threads to access the shared page. A second instruction is decoded for execution, the second instruction specifying said secure enclave as an operand, and execution units record hardware threads currently accessing secure data in the enclave page cache corresponding to the secure enclave, and decrement the recorded number of hardware threads when any of the hardware threads exits the secure enclave.
    Type: Grant
    Filed: August 29, 2016
    Date of Patent: March 17, 2020
    Assignee: Intel Corporation
    Inventors: Carlos V. Rozas, Ilya Alexandrovich, Ittai Anati, Alex Berenzon, Michael A. Goldsmith, Barry E. Huntley, Anton Ivanov, Simon P. Johnson, Rebekah M. Leslie-Hurd, Francis X. McKeen, Gilbert Neiger, Rinat Rappoport, Scott D. Rodgers, Uday R. Savagaonkar, Vincent R. Scarlata, Vedvyas Shanbhogue, Wesley H. Smith, William C. Wood
  • Patent number: 10592407
    Abstract: Short pointer mode applications are able to execute in long pointer mode environments. A plurality of actions is performed to prepare a short pointer mode application for execution in the long pointer mode environment. These actions include allocating memory for one or more in-memory short pointers of the application. The memory being allocated for an in-memory short pointer is of a size corresponding to a size of the in-memory short pointer. Further, a register is allocated for an in-register short pointer of the application. The register is allocated at a size corresponding to a long pointer mode. The size corresponding to the long pointer mode is different from the size of the in-memory short pointer.
    Type: Grant
    Filed: July 22, 2016
    Date of Patent: March 17, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Michael K. Gschwind
  • Patent number: 10585790
    Abstract: Short pointer mode applications are able to execute in long pointer mode environments. A plurality of actions is performed to prepare a short pointer mode application for execution in the long pointer mode environment. These actions include allocating memory for one or more in-memory short pointers of the application. The memory being allocated for an in-memory short pointer is of a size corresponding to a size of the in-memory short pointer. Further, a register is allocated for an in-register short pointer of the application. The register is allocated at a size corresponding to a long pointer mode. The size corresponding to the long pointer mode is different from the size of the in-memory short pointer.
    Type: Grant
    Filed: November 3, 2017
    Date of Patent: March 10, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Michael K. Gschwind
  • Patent number: 10545992
    Abstract: A method, system and computer program product for providing consolidated access to data of a plurality of source databases. Tables of each of the source databases are replicated to a shared accelerator. The source DBMSs are configured to dispatch queries to the accelerator for accelerating query execution. The accelerator is configured such that the replicated tables can only be accessed by the source DBMS having provided said tables for executing a dispatched query. A user can select one of the source DBMSs to act as a consolidated DBMS—C-DBMS. The C-DBMS provides the consolidated access. The user is enabled to select tables managed by another one of the DBMSs. In response to receiving the selection of the tables, the accelerator is re-configured such that the C-DBMS is granted access also to the copies of the selected tables in the accelerator.
    Type: Grant
    Filed: October 23, 2018
    Date of Patent: January 28, 2020
    Assignee: International Business Machines Corporation
    Inventors: Peter Bendel, Oliver Benke, Namik Hrle, Ruiping Li, Daniel Martin, Maryela E. Weihrauch
  • Patent number: 10528345
    Abstract: Instructions and logic provide atomic range operations in a multiprocessing system. In one embodiment an atomic range modification instruction specifies an address for a set of range indices. The instruction locks access to the set of range indices and loads the range indices to check the range size. The range size is compared with a size sufficient to perform the range modification. If the range size is sufficient to perform the range modification, the range modification is performed and one or more modified range indices of the set of range indices is stored back to memory. Otherwise an error signal is set when the range size is not sufficient to perform said range modification. Access to the set of range indices is unlocked responsive to completion of the atomic range modification instruction. Embodiments may include atomic increment next instructions, add next instructions, decrement end instructions, and/or subtract end instructions.
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: January 7, 2020
    Assignee: Intel Corporation
    Inventors: Ilan Pardo, Oren Ben-Kiki, Arch D. Robison, Nadav Chachmon, James H. Cownie
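A rough software analogue of the "increment next" flavour of the atomic range operations above, assuming the range indices are a (next, end) pair guarded by a lock and that an exception stands in for the error signal; the actual instruction operates on indices in memory with hardware atomicity.

```python
import threading

class AtomicRange:
    """Software stand-in for an atomic range: indices [next, end)."""
    def __init__(self, next_index: int, end_index: int):
        self._lock = threading.Lock()
        self.next = next_index
        self.end = end_index

    def increment_next(self, amount: int = 1) -> int:
        """Atomically claim `amount` items from the front of the range.
        Returns the old `next` index, or raises if the range is too small."""
        with self._lock:                        # lock access to the range indices
            if self.end - self.next < amount:   # range size check
                raise ValueError("range too small for requested modification")
            old = self.next
            self.next += amount                 # store the modified range index
            return old

r = AtomicRange(0, 10)
claimed = [r.increment_next(3), r.increment_next(3), r.increment_next(3)]
print(claimed, "remaining:", r.end - r.next)    # [0, 3, 6] remaining: 1
```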
  • Patent number: 10481816
    Abstract: Apparatus, systems, methods, and computer program products for providing dynamically assignable data latches are disclosed. A non-volatile memory die includes a non-volatile memory medium. A plurality of sets of data latches of the non-volatile memory die are configured to facilitate transmission of data to and from the non-volatile memory medium, and each of the sets of data latches is associated with a different identifier. An on-die controller is in communication with the sets of data latches. The on-die controller is configured to receive a first command for a first memory operation comprising a selected identifier. The on-die controller is configured to execute the first memory operation on the non-volatile memory medium using a set of data latches of the plurality of sets of data latches, and the set of data latches is associated with the selected identifier.
    Type: Grant
    Filed: October 18, 2017
    Date of Patent: November 19, 2019
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventors: Mark Shlick, Hadas Oshinsky, Amir Shaharabany, Yoav Markus
  • Patent number: 10452682
    Abstract: A method, system and computer program product for providing consolidated access to data of a plurality of source databases. Tables of each of the source databases are replicated to a shared accelerator. The source DBMSs are configured to dispatch queries to the accelerator for accelerating query execution. The accelerator is configured such that the replicated tables can only be accessed by the source DBMS having provided said tables for executing a dispatched query. A user can select one of the source DBMSs to act as a consolidated DBMS—C-DBMS. The C-DBMS provides the consolidated access. The user is enabled to select tables managed by another one of the DBMSs. In response to receiving the selection of the tables, the accelerator is re-configured such that the C-DBMS is granted access also to the copies of the selected tables in the accelerator.
    Type: Grant
    Filed: January 5, 2017
    Date of Patent: October 22, 2019
    Assignee: International Business Machines Corporation
    Inventors: Peter Bendel, Oliver Benke, Namik Hrle, Ruiping Li, Daniel Martin, Maryela E. Weihrauch
  • Patent number: 10324723
    Abstract: Disclosed is a digital processor comprising an instruction memory having a first input, a second input, a first output, and a second output. A program counter register is in communication with the first input of the instruction memory. The program counter register is configured to store an address of an instruction to be fetched. A data pointer register is in communication with the second input of the instruction memory. The data pointer register is configured to store an address of a data value in the instruction memory. An instruction buffer is in communication with the first output of the instruction memory. The instruction buffer is arranged to receive an instruction according to a value at the program counter register. A data buffer is in communication with the second output of the instruction memory. The data buffer is arranged to receive a data value according to a value at the data pointer register.
    Type: Grant
    Filed: July 2, 2014
    Date of Patent: June 18, 2019
    Assignee: NXP USA, Inc.
    Inventors: Peter J Wilson, Brian C Kahne, Jeffrey W Scott
  • Patent number: 10140126
    Abstract: A variable length instruction processor system and method is provided. Before a processor core executes an instruction, the system and method convert the instruction into one or more micro-operations, which can be filled into a cache system that can be directly accessed by the processor core, reducing the depth of the pipeline and improving its efficiency.
    Type: Grant
    Filed: August 15, 2014
    Date of Patent: November 27, 2018
    Assignee: Shanghai XinHao Microelectronics Co. Ltd.
    Inventor: Kenneth Chenghao Lin
  • Patent number: 10133653
    Abstract: Recording and playback of trace log data and video log data for programs is described. In one aspect, a method for viewing log data recorded during execution of a program includes causing a display of recorded images depicting prior visual user interaction with the program during a particular time period. The method also includes causing a display of messages tracing and describing prior execution of the program during the particular time period. The display of the messages and the display of the recorded images are synchronized.
    Type: Grant
    Filed: February 23, 2012
    Date of Patent: November 20, 2018
    Assignee: Cadence Design Systems, Inc.
    Inventors: Donald J. O'Riordan, David Varghese
  • Patent number: 9990282
    Abstract: An address range expander associated with a processor and a physical memory device determines that address transformation has been enabled with respect to an untransformed address indicated on the processor's address bus. The expander generates, using one or more address expansion parameter registers, a transformed address corresponding to the untransformed address within an address range of the physical memory device, and transmits the transformed address to a controller of the physical memory device.
    Type: Grant
    Filed: April 27, 2016
    Date of Patent: June 5, 2018
    Assignee: Oracle International Corporation
    Inventors: Joseph Wright, Erik Michael Schlanger, Eric DeVolder
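A hedged sketch of the expander's role as described above: when transformation is enabled for an address on the bus, expansion parameter registers are used to produce an address inside the physical device's range. The specific window-plus-base transformation below is an assumption chosen for illustration.

```python
class AddressRangeExpander:
    """Toy model: addresses falling in a small window on the processor bus are
    remapped into a larger region of the physical memory device."""
    def __init__(self, window_base, window_size, target_base, enable=True):
        # These three values play the role of address expansion parameter registers.
        self.window_base = window_base
        self.window_size = window_size
        self.target_base = target_base
        self.enable = enable

    def transform(self, bus_addr: int) -> int:
        in_window = self.window_base <= bus_addr < self.window_base + self.window_size
        if self.enable and in_window:
            # Transformed address sent to the memory device's controller.
            return self.target_base + (bus_addr - self.window_base)
        return bus_addr          # transformation not enabled for this address

exp = AddressRangeExpander(window_base=0x8000, window_size=0x1000,
                           target_base=0x3_0000_0000)
print(hex(exp.transform(0x8010)))   # 0x300000010
print(hex(exp.transform(0x4000)))   # unchanged: 0x4000
```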
  • Patent number: 9798551
    Abstract: A method and apparatus for providing a scalable compute fabric are provided herein. The method includes determining a workflow for processing by the scalable compute fabric, wherein the workflow is based on an instruction set. A pipeline is configured dynamically for processing the workflow, and the workflow is executed using the pipeline.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: October 24, 2017
    Assignee: Intel Corporation
    Inventors: Scott Krig, Teresa Morrison
  • Patent number: 9535702
    Abstract: An asset management method implemented on an integrated circuit uses a keys memory storing keys, each key being associated with an asset identifier, and a data memory storing asset information. The method comprises: receiving an input command for an asset comprising an asset identifier and asset information; computing a set of addresses to the keys memory from the asset identifier, the computing comprising calculating hashes from the asset identifier; finding or allocating an entry in the keys memory for the asset, based on the computed set of addresses, depending on the input command; computing a data address to the data memory for the asset from the address and position in the keys memory at which an entry has been found or allocated for the asset; reading data in the data memory at the computed data address; and executing the input command based on the data read in the data memory at the data address.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: January 3, 2017
    Assignee: ENYX SA
    Inventor: Edward Kodde
  • Patent number: 9430384
    Abstract: Instructions and logic provide advanced paging capabilities for secure enclave page caches. Embodiments include multiple hardware threads or processing cores, a cache to store secure data for a shared page address allocated to a secure enclave accessible by the hardware threads. A decode stage decodes a first instruction specifying said shared page address as an operand, and execution units mark an entry corresponding to an enclave page cache mapping for the shared page address to block creation of a new translation for either of said first or second hardware threads to access the shared page. A second instruction is decoded for execution, the second instruction specifying said secure enclave as an operand, and execution units record hardware threads currently accessing secure data in the enclave page cache corresponding to the secure enclave, and decrement the recorded number of hardware threads when any of the hardware threads exits the secure enclave.
    Type: Grant
    Filed: March 31, 2013
    Date of Patent: August 30, 2016
    Assignee: Intel Corporation
    Inventors: Carlos V Rozas, Ilya Alexandrovich, Ittai Anati, Alex Berenzon, Michael A Goldsmith, Barry E Huntley, Anton Ivanov, Simon P Johnson, Rebekah M. Leslie-Hurd, Francis X. McKeen, Gilbert Neiger, Rinat Rappoport, Scott Dion Rodgers, Uday R. Savagaonkar, Vincent R. Scarlata, Vedvyas Shanbhogue, Wesley H Smith, William Colin Wood
  • Patent number: 9075599
    Abstract: Due to the ever expanding number of registers and new instructions in modern microprocessor cores, the address widths present in the instruction encoding continue to widen, and fewer instruction opcodes are available, making it more difficult to add new instructions to existing architectures without resorting to inelegant tricks that have drawbacks such as source destructive operations. The disclosed invention utilizes specialized decode and address calculation hardware that concatenates a fixed number of least significant bits of the instruction address onto the most significant side of each register address portion contained in the instruction, yielding the full register address, instead of providing the full register address widths for every register used in the instruction. This frees up valuable opcode space for other instructions and avoids compiler complexity. This aligns nicely with how most loops are unrolled in assembly language, where independent operations are near each other in memory.
    Type: Grant
    Filed: September 30, 2010
    Date of Patent: July 7, 2015
    Assignee: International Business Machines Corporation
    Inventors: Mark J. Hickey, Adam J. Muff, Matthew R. Tubbs, Charles D. Wait
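A bit-level illustration of the decode trick above, assuming 6-bit full register addresses of which only the low 4 bits are encoded in the instruction, with the missing high 2 bits taken from the least significant bits of the instruction address. Field widths are invented, and instruction slot numbers stand in for real instruction addresses.

```python
REG_FIELD_BITS = 4    # register address bits actually encoded in the instruction (assumed)
EXTRA_BITS = 2        # bits borrowed from the instruction address (assumed)

def full_register_address(instruction_addr: int, encoded_reg_field: int) -> int:
    """Concatenate the low EXTRA_BITS of the instruction address onto the
    most significant side of the register field from the instruction."""
    high = instruction_addr & ((1 << EXTRA_BITS) - 1)
    return (high << REG_FIELD_BITS) | (encoded_reg_field & ((1 << REG_FIELD_BITS) - 1))

# An unrolled loop: the same encoded register field resolves to different
# physical registers depending on where the instruction sits in memory.
for instr_slot in (0x1000, 0x1001, 0x1002, 0x1003):
    print(hex(instr_slot), "->", full_register_address(instr_slot, encoded_reg_field=0b0101))
# 0x1000 -> 5, 0x1001 -> 21, 0x1002 -> 37, 0x1003 -> 53
```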
  • Publication number: 20150095613
    Abstract: An asset management method implemented on an integrated circuit uses a keys memory storing keys, each key being associated with an asset identifier, and a data memory storing asset information. The method comprises: receiving an input command for an asset comprising an asset identifier and asset information; computing a set of addresses to the keys memory from the asset identifier, the computing comprising calculating hashes from the asset identifier; finding or allocating an entry in the keys memory for the asset, based on the computed set of addresses, depending on the input command; computing a data address to the data memory for the asset from the address and position in the keys memory at which an entry has been found or allocated for the asset; reading data in the data memory at the computed data address; and executing the input command based on the data read in the data memory at the data address.
    Type: Application
    Filed: September 30, 2014
    Publication date: April 2, 2015
    Inventor: Edward KODDE
  • Patent number: 8990166
    Abstract: A data size characteristic of contents of a related unit of data to be written to a storage by an input/output module of a data storage application can be determined, and a storage page size consistent with the data size can be selected from a plurality of storage page sizes. The related unit of data can be assigned to a storage page having the selected storage page size, and the storage page can be passed to the input/output module so that the input/output module physically clusters the contents of the related unit of data when the input/output module writes the contents of the related unit of data to the storage. Related methods, systems, and articles of manufacture are also disclosed.
    Type: Grant
    Filed: March 25, 2011
    Date of Patent: March 24, 2015
    Assignee: SAP SE
    Inventors: Dirk Thomsen, Axel Schroeder, Ivan Schreter
  • Patent number: 8966180
    Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
    Type: Grant
    Filed: March 1, 2013
    Date of Patent: February 24, 2015
    Assignee: Intel Corporation
    Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
  • Patent number: 8954674
    Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
    Type: Grant
    Filed: October 8, 2013
    Date of Patent: February 10, 2015
    Assignee: Intel Corporation
    Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
  • Patent number: 8943293
    Abstract: A method includes receiving an address at a tag state array of a cache, wherein the cache is configurable to have a first size and a second size that is smaller than the first size. The method further includes identifying a first portion of the address as a set index, wherein the first portion has a same number of bits when the cache has the first size as when the cache has the second size. The method further includes using the set index to locate at least one tag field of the tag state array and identifying a second portion of the address to compare to a value stored at the at least one tag field. The method further includes locating at least one state field of the tag state array that is associated with a particular tag field that matches the second portion, identifying a cache line based on a comparison of a third portion of the address to at least one status bit of the at least one state field when the cache has the second size, and retrieving the cache line.
    Type: Grant
    Filed: March 19, 2014
    Date of Patent: January 27, 2015
    Assignee: QUALCOMM Incorporated
    Inventors: Christopher Edward Koob, Ajay Anant Ingle, Lucian Codrescu, Jian Shen
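A simplified address-slicing sketch for the resizable cache described above: the set index keeps the same width at either size, and in the smaller configuration an extra address bit is split off to be matched against status bits in the state field. The cache geometry and bit positions are assumptions.

```python
LINE_BYTES   = 64          # cache line size (assumed)
SETS         = 256         # number of sets; unchanged at either cache size (assumed)
OFFSET_BITS  = 6           # log2(LINE_BYTES)
INDEX_BITS   = 8           # log2(SETS)

def slice_address(addr: int, small_cache: bool):
    """Split an address into set index, tag, and (for the small configuration)
    the extra bit that is matched against status bits in the state field."""
    set_index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)   # first portion
    tag = addr >> (OFFSET_BITS + INDEX_BITS)                      # second portion
    extra = None
    if small_cache:
        # Third portion: one more address bit helps identify the cache line
        # when the cache is configured to the smaller size.
        extra = tag & 1
        tag >>= 1
    return set_index, tag, extra

print(slice_address(0x0001_2F40, small_cache=False))   # (189, 4, None)
print(slice_address(0x0001_2F40, small_cache=True))    # (189, 2, 0)
```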
  • Patent number: 8918559
    Abstract: Partitioning of a variable length scatter gather list including a processor for performing a method that includes requesting data from an I/O device comprising an I/O buffer. The requesting includes initiating a subchannel. The method further includes determining whether the subchannel supports data divisions by requesting SSQD data from the I/O device and inspecting at least one bit in the SSQD data. A determination is made whether the requested data includes a metadata block in response to determining that the subchannel supports data divisions. Also, the subchannel is notified that the requested data includes the metadata block in response to determining that the requested data includes the metadata block. A location of storage is identified in an SBAL in response to notifying the subchannel.
    Type: Grant
    Filed: June 6, 2011
    Date of Patent: December 23, 2014
    Assignee: International Business Machines Corporation
    Inventors: Stefan Amann, Gerhard Banzhaf, Ralph Friedrich, Raymond M. Higgs, George P. Kuch, Bruce H. Ratcliff
  • Patent number: 8898439
    Abstract: A serial flash memory and an address transmission method thereof. The serial flash memory selectively addresses a first memory space according to a first address length or addresses a second memory space according to a second address length longer than the first address length. If the first memory space is addressed according to the first address length, a first memory address is completely received within an address time duration so that data corresponding to the first memory address is initially outputted from a starting clock. In the address transmission method, if the second memory space is addressed according to the second address length, a portion of a second memory address is received within the address time duration. The other portion of the second memory address is received within a waiting time duration so that data corresponding to the second memory address is initially outputted from the starting clock.
    Type: Grant
    Filed: July 16, 2010
    Date of Patent: November 25, 2014
    Assignee: Macronix International Co., Ltd.
    Inventors: Kuen-Long Chang, Yufe-Feng Lin, Chun-Hsiung Hung
  • Patent number: 8806132
    Abstract: An information processing device according to the present invention includes an operation unit that outputs an access request, a storage unit including a plurality of connection ports and a plurality of memories capable of a simultaneous parallel process that has an access unit of a plurality of word lengths for the connection ports, and a memory access control unit that distributes a plurality of access addresses corresponding to the access request received for each processing cycle from the operation unit, and generates an address in a port including a discontinuous word by one access unit for each of the connection ports.
    Type: Grant
    Filed: January 18, 2012
    Date of Patent: August 12, 2014
    Assignee: NEC Corporation
    Inventor: Yasuhiro Nishigaki
  • Patent number: 8806439
    Abstract: A system having a processor receiving a copy of a program and modifying the copy to create a modified program and a memory including a memory stack, the modified program being stored in the memory stack, wherein a first image of the memory stack storing the modified program is different from a second image of the memory stack storing the copy of the program.
    Type: Grant
    Filed: April 30, 2007
    Date of Patent: August 12, 2014
    Assignee: AT & T Intellectual Property II, LP
    Inventor: Michael L. Asher
  • Patent number: 8719503
    Abstract: A method includes receiving an address at a tag state array of a cache. The cache is configurable to have a first size or a second size that is larger than the first size. The method includes identifying a first portion of the address as a set index and using the set index to locate at least one tag field of the tag state array. The method also includes identifying a second portion of the address to compare to a value stored at the at least one tag field and locating at least one state field of the tag state array associated with a particular tag field that matches the second portion. The method further includes identifying a cache line based on a comparison of a third portion of the address to at least two status bits of the at least one state field and retrieving the cache line.
    Type: Grant
    Filed: June 25, 2012
    Date of Patent: May 6, 2014
    Assignee: QUALCOMM Incorporated
    Inventors: Christopher Edward Koob, Ajay Anant Ingle, Lucian Codrescu, Jian Shen
  • Publication number: 20140089632
    Abstract: Divisions by numbers that are not divisible by two (2) can be performed in a computing system based on a summation that estimates and/or approximates the reciprocal of the dividing number or denominator value. By way of example, dividing by three (3) can be calculated based on a summation that approximates or estimates one third (1/3) represented as the sum of a selected group of the inverses of the powers of two (2) in a pattern, namely the sum of 1/4, 1/16, 1/64, 1/256, and so on. Applications of the division techniques are virtually unlimited and include memory mapping of global memory addresses to memory channel addresses by dividing a global memory address by the number of memory channels, allowing memory mapping to be performed in an efficient manner even for large memory spaces using a number of memory channels that are not divisible by two, including prime numbers.
    Type: Application
    Filed: September 25, 2012
    Publication date: March 27, 2014
    Inventor: Jeremy Branscome
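The reciprocal-sum idea in this abstract can be tried directly: 1/3 = 1/4 + 1/16 + 1/64 + ..., so a quotient can be built from shifts and adds. The sketch below uses a truncated series plus a small fix-up loop and only illustrates the principle, not the patent's memory-mapping hardware.

```python
def div3_by_shifts(n: int, terms: int = 16) -> int:
    """Approximate n // 3 as n * (1/4 + 1/16 + 1/64 + ...), i.e. using only
    shifts and adds, then correct the small remaining error."""
    q = 0
    for k in range(1, terms + 1):
        q += n >> (2 * k)            # n / 4**k
    # The truncated series slightly underestimates; fix up with the remainder.
    r = n - 3 * q
    while r >= 3:
        q += 1
        r -= 3
    return q

for n in (0, 1, 2, 3, 100, 123456789, 2**32 - 1):
    assert div3_by_shifts(n) == n // 3
print("ok")
```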
  • Patent number: 8677100
    Abstract: An integrated circuit memory device has a memory array and control logic with at least a first addressing mode in which the instruction includes a first instruction code and an address of a first length; and a second addressing mode in which the instruction includes the first instruction code and an address of a second length. The first length of the address is different from the second length of the address.
    Type: Grant
    Filed: June 10, 2010
    Date of Patent: March 18, 2014
    Assignee: Macronix International Co., Ltd.
    Inventors: Yulan Kuo, Kuen-Long Chang, Chun-Hsiung Hung
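A small decoder illustrating the two addressing modes described above: the same read instruction code is followed by a 3-byte address in one mode and a 4-byte address in the other, with the mode held as device state. The opcode value and address lengths are common serial-flash conventions used here as assumptions.

```python
READ_OPCODE = 0x03           # assumed instruction code shared by both modes

class FlashDecoder:
    def __init__(self):
        self.four_byte_mode = False   # addressing mode held as device state

    def decode(self, frame: bytes):
        """Return (opcode, address) from a command frame; the address length
        depends on the current addressing mode, not on the opcode."""
        opcode = frame[0]
        addr_len = 4 if self.four_byte_mode else 3
        address = int.from_bytes(frame[1:1 + addr_len], "big")
        return opcode, address

dec = FlashDecoder()
print(dec.decode(bytes([READ_OPCODE, 0x01, 0x23, 0x45])))          # 3-byte address
dec.four_byte_mode = True
print(dec.decode(bytes([READ_OPCODE, 0x00, 0x01, 0x23, 0x45])))    # 4-byte address
```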
  • Patent number: 8656139
    Abstract: A digital processor stores pointers of different sizes in memory. The processor, specifically, executes instructions to store a long or short pointer. Long pointers reference any address in the memory's logical address space, while short pointers merely reference any address in a subset of that space. However, short pointers are smaller in size as stored in memory than long pointers. Long pointers thus support relatively large address range capabilities, while short pointers use less memory. The processor also executes instructions to load a long or short pointer into the register file, and does so in a way that does not require the processor to distinguish between the different pointers when executing other instructions. Specifically, the processor converts long and short pointers into a common format for loading into the register file, and converts pointers in the common format back into long or short pointers for storing in the memory.
    Type: Grant
    Filed: March 11, 2011
    Date of Patent: February 18, 2014
    Assignee: Telefonaktiebolaget L M Ericsson (publ)
    Inventors: Stephan Meier, John G. Favor, Evan Gewirtz, Robert Hathaway, Eric Trehus
  • Patent number: RE45486
    Abstract: The present invention relates to a method for addressing the memory locations of a memory card. A memory card has several memory locations for storing data, and an address is formed in order to address a specific memory location. At least one parameter is stored in the memory card, on the basis of which parameter the number of memory locations of the memory card can be calculated, and a specific number of bits is reserved for said at least one parameter. In the method, two or more memory locations are addressed with one address, and/or the number of bits that can be used in an address is increased. The invention also relates to a system and a memory card in which the method is applied.
    Type: Grant
    Filed: May 24, 2013
    Date of Patent: April 21, 2015
    Assignee: Memory Technologies LLC
    Inventors: Marko Ahvenainen, Kimmo Mylly
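A numeric illustration of the two measures named in this abstract, with all sizes invented for the example: if one address refers to a group of N memory locations, or if additional bits become usable in the address, the same fixed-width address field covers a larger card.

```python
ADDRESS_FIELD_BITS = 16       # bits available in the address field (assumed)
LOCATION_SIZE = 512           # bytes per memory location (assumed)

def capacity(locations_per_address: int, extra_bits: int = 0) -> int:
    """Bytes addressable when one address covers `locations_per_address`
    memory locations and `extra_bits` additional bits are usable in the address."""
    addresses = 1 << (ADDRESS_FIELD_BITS + extra_bits)
    return addresses * locations_per_address * LOCATION_SIZE

print(capacity(1))                 # baseline:                 32 MiB
print(capacity(2))                 # 2 locations per address:  64 MiB
print(capacity(1, extra_bits=2))   # 2 extra address bits:    128 MiB
```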