Shift Register Memory Patents (Class 711/109)
-
Patent number: 11736107
Abstract: A field-programmable gate array (FPGA) for using a configuration shift chain to implement a multi-bitstream function includes a bitstream control circuit, a multi-bitstream configuration shift chain and a configurable module. The FPGA enables multi-bitstream storage configuration bits to latch configuration bitstreams by adjusting a circuit structure of a multi-bitstream configuration shift chain in a combination of a control logic of a bitstream control circuit for the multi-bitstream configuration shift chain, and outputs one latched configuration bitstream from a configuration output terminal to a configurable module through each multi-bitstream storage configuration bit as required, so that the configurable module implements a logic function corresponding to the configuration bitstream outputted by the multi-bitstream configuration shift chain.
Type: Grant
Filed: December 22, 2021
Date of Patent: August 22, 2023
Assignee: WUXI ESIONTECH CO., LTD.
Inventors: Yueer Shan, Yanfeng Xu, Xiaofei He, Jicong Fan
-
Patent number: 11347853
Abstract: A combination of hardware monitoring and binary translation software allows detection of return-oriented programming (ROP) exploits with low overhead and low false positive rates. Embodiments may use various forms of hardware to detect ROP exploits and indicate the presence of an anomaly to a device driver, which may collect data and pass the indication of the anomaly to the binary translation software to instrument the application code and determine whether an ROP exploit has been detected. Upon detection of the ROP exploit, the binary translation software may indicate the ROP exploit to anti-malware software, which may take further remedial action as desired.
Type: Grant
Filed: September 16, 2019
Date of Patent: May 31, 2022
Assignee: MCAFEE, LLC
Inventors: Palanivelrajan Rajan Shanmugavelayutham, Koichi Yamada, Vadim Sukhomlinov, Igor Muttik, Oleksandr Bazhaniuk, Yuriy Bulygin, Dmitri Dima Rubakha, Jennifer Eligius Mankin, Carl D. Woodward, Sevin F. Varoglu, Dima Mirkin, Alex Nayshtut
-
Patent number: 11256568
Abstract: The present invention facilitates efficient and effective utilization of storage management features. In one embodiment, a memory device comprises a memory interface, an ECC generation component, and storage components. The memory interface is configured to receive an access request to an address at which data is stored. The memory interface can also forward responses to the request including the data and ECC information associated with the data. The ECC generation component is configured to automatically establish an address at which the ECC information is stored based upon the receipt of the access request to an address at which data is stored. In one exemplary implementation, the internal establishment of the address at which the ECC information is stored is automatic. The storage components are configured to store the information.
Type: Grant
Filed: November 25, 2019
Date of Patent: February 22, 2022
Assignee: Nvidia Corporation
Inventors: Bruce Lam, Alok Gupta, David G. Reed, Barry Wagner
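The point of the abstract is that the device, not the host, derives where the ECC information lives from the data address it just received. A minimal sketch of one such derivation is below; the region bases, the 8-bytes-per-ECC-byte ratio, and the function name are illustrative assumptions, not the mapping used in the patent.

```c
#include <stdint.h>

#define DATA_REGION_BASE 0x00000000u
#define ECC_REGION_BASE  0x38000000u   /* region reserved for ECC information */
#define BYTES_PER_ECC    8u            /* one ECC byte covering 8 data bytes  */

/* Derived internally by the device when a data access request arrives,
 * so the requester never supplies an ECC address itself. */
static uint32_t ecc_address_for(uint32_t data_addr)
{
    uint32_t offset = data_addr - DATA_REGION_BASE;
    return ECC_REGION_BASE + offset / BYTES_PER_ECC;
}
```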
-
Patent number: 11194720
Abstract: Systems and methods for reducing input/output operations in a computing system that uses a cache. Input/output operations associated with cache index lookups are reduced by tracking the location of the requested data such that the data can be invalidated without having to access the cache index. Input/output operations can be reduced by invalidating the entry in the cache index when reading the corresponding data.
Type: Grant
Filed: September 24, 2019
Date of Patent: December 7, 2021
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Philip N. Shilane, Grant R. Wallace
-
Patent number: 11171983
Abstract: Embodiments are directed toward techniques to detect a first function associated with an address space initiating a call instruction to a second function in the address space, the first function to call the second function in a deprivileged mode of operation, and define accessible address ranges for segments of the address space for the second function, each segment to have a different address range in the address space that the second function is permitted to access in the deprivileged mode of operation. Embodiments include switching to the stack associated with the second address space and the second function, and initiating execution of the second function in the deprivileged mode of operation.
Type: Grant
Filed: June 29, 2018
Date of Patent: November 9, 2021
Assignee: INTEL CORPORATION
Inventors: Vadim Sukhomlinov, Kshitij Doshi, Michael Lemay, Dmitry Babokin, Areg Melik-Adamyan
-
Patent number: 11030104
Abstract: Provided are a computer program product, system, and method for queuing prestage requests in one of a plurality of prestage request queues as a function of the number of track holes determined to be present in a track cached in a multi-tier cache. A prestage request, when executed, prestages read data from storage to a slow cache tier of the multi-tier cache for one or more sectors identified by one or more track holes. In another aspect, allocated tasks are dispatched to execute prestage requests queued on selected prestage request queues as a function of the priority associated with each prestage request queue. Other aspects and advantages are provided, depending upon the particular application.
Type: Grant
Filed: January 21, 2020
Date of Patent: June 8, 2021
Assignee: International Business Machines Corporation
Inventors: Lokesh Mohan Gupta, Kevin J. Ash, Kyler A. Anderson, Matthew G. Borlick
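The abstract's routing decision, picking a prestage queue from the count of track holes, can be modeled with a small mapping function. The sketch below is illustrative only: the number of queues, the hole-count thresholds, and the direction of the priority ordering are all assumptions, not details taken from the patent.

```c
#include <stddef.h>

#define N_PRESTAGE_QUEUES 4   /* number of prestage request queues (assumption) */

/* Map a track's hole count to a queue index. Here queue 0 is treated as
 * the highest-priority queue; whether few or many holes should rank
 * higher is itself an assumption in this sketch. */
static size_t prestage_queue_for(unsigned track_holes)
{
    if (track_holes <= 1) return 0;
    if (track_holes <= 3) return 1;
    if (track_holes <= 6) return 2;
    return N_PRESTAGE_QUEUES - 1;
}
```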
-
Patent number: 11012333
Abstract: A network device including access and test ports, an interface, and first and second controllers. The interface receives an Ethernet frame transmitted over an Ethernet network to the network device. The Ethernet frame includes bits for testing or debugging the memory-mapped device and is received at the interface based on an output of a host device. The first controller converts the Ethernet frame to a first access frame. The test port receives a diagnostic signal transmitted from the host device to the network device. The second controller converts the diagnostic signal to a second access frame and controls passage of the access frames to the memory-mapped device via the access port. The first controller tests or debugs the memory-mapped device based on data received from a register of the memory-mapped device. The data is written in the register based on the first and second access frames.
Type: Grant
Filed: January 28, 2020
Date of Patent: May 18, 2021
Assignee: Marvell Asia Pte, Ltd.
Inventor: Thomas Kniplitsch
-
Patent number: 10805420
Abstract: A method, system, and computer-usable medium are disclosed for network acceleration, comprising: responsive to receiving at an acceleration device a stream of one or more datagrams from a sending endpoint device within a first local area network of the acceleration device, the stream for transmission to a receiving endpoint device within a second local area network coupled to the first local area network by a wide area network: communicating by the acceleration device to the sending endpoint device a respective acknowledgement to each of the one or more datagrams; and transmitting by the acceleration device the one or more datagrams via multiple communication links of the wide area network to a second acceleration device within the second local area network and coupled to the receiving endpoint device.
Type: Grant
Filed: November 29, 2017
Date of Patent: October 13, 2020
Assignee: Forcepoint LLC
Inventors: Tuomo Syvänne, Olli-Pekka Niemi, Valtteri Rahkonen, Ville Mattila
-
Patent number: 10623290
Abstract: A network device is provided and operative to secure remote access to an internal component including a processor and/or a register. The network device includes an Ethernet interface, an access port, and a controller. The Ethernet interface receives, from a host device, frames transmitted over an Ethernet network. The access port is physically connected to the internal component and physically inaccessible to the host device. The controller is physically connected to the access port. The controller: accesses the internal component via the access port; based on the frames, determines whether the host device is authorized; if the host device is not authorized, prevents the host device from accessing the processor or the register; and if the host device is authorized, permits the host device, via the Ethernet interface and the access port, to control operation of the processor or change the contents of the register.
Type: Grant
Filed: October 1, 2018
Date of Patent: April 14, 2020
Assignee: Marvell World Trade Ltd.
Inventor: Thomas Kniplitsch
-
Patent number: 10452268
Abstract: Embodiments of the invention provide systems and methods to implement an object memory fabric. Object memory modules may include object storage storing memory objects, memory object meta-data, and a memory module object directory. Each memory object and/or memory object portion may be created natively within the object memory module and may be managed at a memory layer. The memory module object directory may index all memory objects and/or portions within the object memory module. A hierarchy of object routers may communicatively couple the object memory modules. Each object router may maintain an object cache state for the memory objects and/or portions contained in object memory modules below the object router in the hierarchy. The hierarchy, based on the object cache state, may behave in aggregate as a single object directory communicatively coupled to all object memory modules and process requests based on the object cache state.
Type: Grant
Filed: March 28, 2018
Date of Patent: October 22, 2019
Assignee: Ultrata, LLC
Inventors: Steven J Frank, Larry Reback
-
Patent number: 10372545
Abstract: A microcontroller system includes a processing unit, a first peripheral having a first set of registers, and an assurance module. The first peripheral is configured to receive a first reset signal that resets the first set of registers to a first actual reset value, which is expected to be a first expected value. The assurance module is configured to calculate a first signature value, which is based on the first actual reset value, in response to the first reset signal. The processing unit is configured to perform a first comparison between the calculated first signature value and a pre-determined first signature value to determine whether the microcontroller system is in a trusted safety state. The first comparison is performed in response to the first reset signal, and the pre-determined first signature value is based on the first expected value.
Type: Grant
Filed: March 13, 2017
Date of Patent: August 6, 2019
Assignee: Infineon Technologies AG
Inventors: Boyko Traykov, Veit Kleeberger, Rafael Zalman
-
Patent number: 10331447
Abstract: Providing efficient recursion handling using compressed return address stacks (CRASs) in processor-based systems is disclosed. In one aspect, a processor-based system provides a branch prediction circuit including a CRAS. Each CRAS entry within the CRAS includes an address field and a counter field. When a call instruction is encountered, a return address of the call instruction is compared to the address field of a top CRAS entry indicated by a CRAS top-of-stack (TOS) index. If the return address matches the top CRAS entry, the counter field of the top CRAS entry is incremented instead of adding a new CRAS entry for the return address. When a return instruction is subsequently encountered in the instruction stream, the counter field of the top CRAS entry is decremented if its value is greater than zero (0), or, if not, the top CRAS entry is removed from the CRAS.
Type: Grant
Filed: August 30, 2017
Date of Patent: June 25, 2019
Assignee: QUALCOMM Incorporated
Inventors: Vignyan Reddy Kothinti Naresh, Anil Krishna
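The counter-based compression described in this abstract maps directly onto a small stack structure. Below is a minimal C sketch of the call/return handling it describes; the stack depth, type names, and the behaviour when the stack is full or empty are assumptions for illustration, not the patented circuit.

```c
#include <stdint.h>

#define CRAS_DEPTH 16               /* number of CRAS entries (assumption) */

typedef struct {
    uint64_t addr;                  /* return address held by this entry  */
    uint32_t count;                 /* extra recursive calls to that addr */
} cras_entry_t;

typedef struct {
    cras_entry_t entry[CRAS_DEPTH];
    int tos;                        /* top-of-stack index, -1 when empty  */
} cras_t;

/* Call: recursion reuses the top entry; anything else pushes a new one. */
static void cras_on_call(cras_t *s, uint64_t return_addr)
{
    if (s->tos >= 0 && s->entry[s->tos].addr == return_addr) {
        s->entry[s->tos].count++;   /* same return address: bump the counter */
    } else if (s->tos + 1 < CRAS_DEPTH) {
        s->tos++;
        s->entry[s->tos].addr  = return_addr;
        s->entry[s->tos].count = 0;
    }
}

/* Return: predict the top address, then decrement the counter or pop. */
static uint64_t cras_on_return(cras_t *s)
{
    if (s->tos < 0)
        return 0;                   /* empty stack: nothing to predict */
    uint64_t predicted = s->entry[s->tos].addr;
    if (s->entry[s->tos].count > 0)
        s->entry[s->tos].count--;
    else
        s->tos--;                   /* last outstanding use: remove the entry */
    return predicted;
}
```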
-
Patent number: 10248350
Abstract: Embodiments of the present invention disclose a queue management method. The method includes writing a PD queue to a DRAM, where the PD queue includes multiple PDs, and the multiple PDs correspond one-to-one to multiple packets included in a first packet queue. The method also includes writing at least one PD in the PD queue to an SRAM, where the at least one PD includes a queue head of the PD queue. Correspondingly, the embodiments of the present invention further disclose a queue management apparatus.
Type: Grant
Filed: February 6, 2017
Date of Patent: April 2, 2019
Assignee: Huawei Technologies Co., Ltd
Inventors: Yuchun Lu, Jian Zhang
-
Patent number: 10235203
Abstract: An improved technique involves processing a workflow in stages, and processing all requests in a queue for a given stage before moving on to the next stage. Along these lines, each request received by a storage processor is assigned to a core and placed in a first queue for that core. Within that core, a single system thread executes first instructions for a task, e.g., checking the storage cache for the requested data from a request, and then transfers the request to a second queue. Rather than perform additional tasks to completely satisfy the request, however, the thread executes the first instructions for a prespecified number of requests in the first queue. Only when the thread has executed instructions for the prespecified number of requests does it begin execution of second instructions for requests in the second queue, and work on the next task begins.
Type: Grant
Filed: March 31, 2014
Date of Patent: March 19, 2019
Assignee: EMC IP Holding Company LLC
Inventors: Daniel Cummins, David W. Harvey, Steve Morley
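The staging idea above (finish the first task for a batch of queued requests before starting the second task for any of them) can be sketched as two per-core queues worked in turn. This is only a behavioural model under assumptions: the ring-buffer queue, the batch size, and the placeholder stage work are all invented names, not the product's code.

```c
#include <stddef.h>

#define QLEN  256
#define BATCH 32                      /* "prespecified number of requests" */

typedef struct { int id; int stage_done; } request_t;

typedef struct {
    request_t *item[QLEN];
    size_t head, tail;                /* head == tail means the queue is empty */
} queue_t;

static int dequeue(queue_t *q, request_t **r)
{
    if (q->head == q->tail) return 0;
    *r = q->item[q->head++ % QLEN];
    return 1;
}

static void enqueue(queue_t *q, request_t *r)
{
    q->item[q->tail++ % QLEN] = r;
}

/* One pass of the single system thread bound to this core. */
static void core_thread_pass(queue_t *q1, queue_t *q2)
{
    request_t *r;
    /* Stage 1 for a whole batch of requests, e.g. a cache lookup each. */
    for (int i = 0; i < BATCH && dequeue(q1, &r); i++) {
        r->stage_done = 1;
        enqueue(q2, r);               /* hand the request to the next stage */
    }
    /* Only then start the second task for the queued requests. */
    for (int i = 0; i < BATCH && dequeue(q2, &r); i++)
        r->stage_done = 2;
}
```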
-
Patent number: 10198293
Abstract: According to a general aspect, a method may include receiving a computing task, wherein the computing task includes a plurality of operations. The method may include allocating the computing task to a data node, wherein the data node includes at least one host processor and an intelligent storage medium, wherein the intelligent storage medium comprises at least one controller processor, and a non-volatile memory, wherein each data node includes at least three processors between the at least one host processor and the at least one controller processor. The method may include dividing the computing task into at least a first chain of operations and a second chain of operations. The method may include assigning the first chain of operations to the intelligent storage medium of the data node. The method may further include assigning the second chain of operations to the central processor of the data node.
Type: Grant
Filed: March 17, 2017
Date of Patent: February 5, 2019
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Yang Seok Ki, Jaehwan Lee
-
Patent number: 10198578
Abstract: The subject disclosure is directed towards using one or more of hardware, a hypervisor, and privileged mode code to prevent system mode code from accessing user mode data and/or running user mode code at the system privilege level, or vice-versa. Also described is (in systems with a hypervisor) preventing non-hypervisor code from running in hypervisor mode or accessing hypervisor-only data, or vice-versa. A register maintained by hardware, hypervisor, or system mode code contains data access and execution policies for different chunks of addressable space with respect to which requesting entities (hypervisor mode code, system mode code, user mode code) have access to or can execute code in a given chunk. When a request to execute code or access data with respect to an address is received, the request is processed to determine to which chunk the address corresponds. The policy for that chunk is evaluated to determine whether to allow or deny the request.
Type: Grant
Filed: December 5, 2016
Date of Patent: February 5, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jonathan E. Lange, John V. Sell, Ling Tony Chen, Eric O. Mejdrich
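The per-chunk policy check in this abstract (map an address to a chunk, then consult that chunk's access/execute bits for the requesting mode) can be modeled in a few lines. The chunk size, the bit layout of the policy byte, and the names below are assumptions for illustration, not the register format the patent describes.

```c
#include <stdint.h>
#include <stdbool.h>

enum mode { MODE_USER, MODE_SYSTEM, MODE_HYPERVISOR };

#define CHUNK_SHIFT 21u                  /* 2 MiB chunks (assumption)      */
#define N_CHUNKS    64u                  /* chunks covered by the register */

/* One policy byte per chunk: low bits = may access data, high bits = may
 * execute code, one bit per requesting mode. */
#define CAN_ACCESS(m) (1u << (m))
#define CAN_EXEC(m)   (1u << ((m) + 4))

static uint8_t policy[N_CHUNKS];

static bool allowed(uint64_t addr, enum mode requester, bool is_exec)
{
    uint64_t chunk = (addr >> CHUNK_SHIFT) % N_CHUNKS;   /* which chunk?  */
    uint8_t  need  = is_exec ? CAN_EXEC(requester) : CAN_ACCESS(requester);
    return (policy[chunk] & need) != 0;                  /* deny by default */
}
```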
-
Patent number: 10193927
Abstract: Systems and methods for relocating executable instructions to arbitrary locations are described, in which the relocation of the instructions may be arbitrary or random, and may operate on groups of instructions or individual instructions. Such relocation may be achieved through hardware or software, and may use a virtual machine, software dynamic translators, interpreters, or emulators. Instruction relocation may use or produce a specification governing how to relocate the desired instructions. Randomizing the location of instructions provides defenses against a variety of security attacks. Such systems and methods may provide many advantages over other instruction relocation techniques, such as low runtime overhead, no required user interaction, applicability post-deployment, and the ability to operate on arbitrary executable programs.
Type: Grant
Filed: February 27, 2013
Date of Patent: January 29, 2019
Assignee: University of Virginia Patent Foundation
Inventors: Jason D. Hiser, Anh Nguyen-Tuong, Michele Co, Jack W. Davidson
-
Patent number: 10176102
Abstract: Systems and methods for a content addressable cache that is optimized for SSD use are disclosed. In some embodiments, the cache utilizes an identifier array where identification information is stored for each entry in the cache. However, the size of the bit field used for the identification information is not sufficient to uniquely identify the data stored at the associated entry in the cache. A smaller bit field increases the likelihood of a “false positive”, where the identification information indicates a cache hit when the actual data does not match the digest. A larger bit field decreases the probability of a “false positive”, at the expense of increased metadata memory space. Thus, the architecture allows for a compromise between metadata memory size and processing cycles.
Type: Grant
Filed: March 30, 2016
Date of Patent: January 8, 2019
Assignee: Infinio Systems, Inc.
Inventors: David W. Harvey, Scott H. Davis, Martin Charles Martin, Vishal Misra, Hooman Vassef
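A lookup against such a truncated-identifier array has two steps: a cheap in-memory compare that can yield false positives, followed by verification against the full digest. The sketch below illustrates that trade-off under assumptions of my own: a direct-mapped layout, a 16-bit identification field, a 20-byte digest, and a stub standing in for the SSD read; none of these come from the patent.

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>
#include <stddef.h>

#define N_ENTRIES  4096
#define DIGEST_LEN 20

static uint16_t short_id[N_ENTRIES];   /* in-memory metadata: 2 bytes per entry */

/* Stub standing in for reading the full digest stored with the cached
 * data on the SSD; only consulted after a tentative match. */
static void fetch_digest_from_ssd(size_t slot, uint8_t out[DIGEST_LEN])
{
    (void)slot;
    memset(out, 0, DIGEST_LEN);
}

static bool lookup(const uint8_t digest[DIGEST_LEN], size_t *slot_out)
{
    size_t   slot = (((size_t)digest[0] << 8) | digest[1]) % N_ENTRIES;
    uint16_t id   = (uint16_t)((digest[2] << 8) | digest[3]);

    if (short_id[slot] != id)
        return false;                  /* identifiers differ: definite miss */

    /* Tentative hit: a 16-bit field cannot uniquely identify the data, so
     * a "false positive" is possible and the full digest must agree. */
    uint8_t stored[DIGEST_LEN];
    fetch_digest_from_ssd(slot, stored);
    if (memcmp(stored, digest, DIGEST_LEN) != 0)
        return false;
    *slot_out = slot;
    return true;
}
```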
-
Patent number: 10126387
Abstract: A magnetic resonance apparatus has a computer designed to emit, on each of at least one control channel, a control signal generated from a sequence of values to control, via each channel, at least one component of the magnetic resonance apparatus. A buffer device is provided for each channel, which includes at least one read control device and a control buffer memory for that control channel. A software tool running on the computer is designed to determine the values as a function of predefined parameters and write them to the control buffer memory. The read control device is designed to read values from the control buffer memory and to supply them, with a read clock derived from a clock that was set independently of the computer, to the respective component. For each of the control buffer memories, the order in which the values are read corresponds to the order in which they are written.
Type: Grant
Filed: June 11, 2015
Date of Patent: November 13, 2018
Assignee: Siemens Aktiengesellschaft
Inventor: Georg Pirkl
-
Patent number: 10091079
Abstract: A chipset including one or more system-on-chips. The chipset includes a memory-mapped device, an Ethernet interface, and a remote management controller. The memory-mapped device includes a test access port and is configured to access a register based on an address of a memory corresponding to the register. The Ethernet interface is configured to receive Ethernet frames transmitted over an Ethernet network. One or more of the Ethernet frames are received from a host device. The one or more of the Ethernet frames are received to test the one or more system-on-chips. The remote management controller is coupled to the test access port. The remote management controller is configured to, based on the one or more of the Ethernet frames, remotely control operation of the memory-mapped device or another device in the one or more system-on-chips, and restrict (a) testing of the one or more system-on-chips or the memory-mapped device, and (b) access by the host device to the register.
Type: Grant
Filed: May 5, 2016
Date of Patent: October 2, 2018
Assignee: Marvell World Trade Ltd.
Inventor: Thomas Kniplitsch
-
Patent number: 9965501
Abstract: Techniques are provided for more efficiently using the bandwidth of the I/O path between a CPU and volatile memory during the performance of database operations. Relational data from a relational table is stored in volatile memory as column vectors, where each column vector contains values for a particular column of the table. A binary-comparable format may be used to represent each value within a column vector, regardless of the data type associated with the column. The column vectors may be compressed and/or encoded while in volatile memory, and decompressed/decoded on-the-fly within the CPU. Alternatively, the CPU may be designed to perform operations directly on the compressed and/or encoded column vector data. In addition, techniques are described that enable the CPU to perform vector processing operations on the column vector values.
Type: Grant
Filed: December 1, 2015
Date of Patent: May 8, 2018
Assignee: ORACLE INTERNATIONAL CORPORATION
Inventors: Lawrence J. Ellison, Amit Ganesh, Vineet Marwah, Jesse Kamp, Anindya C. Patthak, Shasank K. Chavan, Michael J. Gleeson, Allison L. Holloway, Manosiz Bhattacharyya
-
Patent number: 9639356
Abstract: An example method of updating an output data vector includes identifying a data value vector including element data values. The method also includes identifying an address value vector including a set of elements. The method further includes applying a conditional operator to each element of the set of elements in the address value vector. The method also includes, for each element data value in the data value vector, determining whether to update an output data vector based on applying the conditional operator.
Type: Grant
Filed: March 15, 2013
Date of Patent: May 2, 2017
Assignee: Qualcomm Incorporated
Inventors: Marc M. Hoffman, Ajay Anant Ingle, Jose Fridman
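Read as a scalar loop, the method amounts to a conditional scatter: a predicate is evaluated on each address element, and only the lanes that pass it update the output vector. The sketch below is a minimal scalar model of that idea; the non-negative-address predicate and the function names are assumptions, not the instruction the patent defines.

```c
#include <stddef.h>
#include <stdint.h>

/* Conditional scatter: out[addr[i]] = data[i] only where the conditional
 * operator applied to addr[i] holds. */
static void cond_vector_update(int32_t *out, const int32_t *data,
                               const int32_t *addr, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        /* Example conditional operator: treat only non-negative address
         * elements as valid update targets. */
        if (addr[i] >= 0)
            out[addr[i]] = data[i];
    }
}
```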
-
Patent number: 9632831
Abstract: According to one general aspect, a scheduler computing device may include a computing task memory configured to store at least one computing task. The computing task may be executed by a data node of a distributed computing system, wherein the distributed computing system includes at least one data node, each data node having a central processor and an intelligent storage medium, wherein the intelligent storage medium comprises a controller processor and a memory. The scheduler computing device may include a processor configured to assign the computing task to be executed by either the central processor of a data node or the intelligent storage medium of the data node, based, at least in part, upon an amount of data associated with the computing task.
Type: Grant
Filed: March 19, 2015
Date of Patent: April 25, 2017
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Jaehwan Lee, Yang Seok Ki
-
Patent number: 9589623
Abstract: Word shift static random access memory (WS-SRAM) cell, word shift static random access memory (WS-SRAM) and method using the same employ dynamic storage mode switching to shift data. The WS-SRAM cell includes a static random access memory (SRAM) cell having a pair of cross-coupled elements to store data, a dynamic/static (D/S) mode selector to selectably switch the WS-SRAM cell between the dynamic storage mode and a static storage mode, and a column selector to selectably determine whether or not the WS-SRAM cell accepts shifted data. The WS-SRAM includes a plurality of WS-SRAM cells arranged in an array and a controller to shift data. The method includes switching a storage mode and activating a column selector of, coupling data from an adjacent memory cell to, and storing the coupled data in, a selected WS-SRAM cell.
Type: Grant
Filed: January 30, 2012
Date of Patent: March 7, 2017
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Frederick A. Perner, Matthew D. Pickett
-
Patent number: 9466352
Abstract: Dynamic/static random access memory (D/SRAM) cell, block shift static random access memory (BS-SRAM) and method using the same employ dynamic storage mode and dynamic storage mode switching to shift data. The D/SRAM cell includes a static random access memory (SRAM) cell having a pair of cross-coupled elements to store data, and a dynamic/static (D/S) mode selector to selectably switch the D/SRAM cell between the dynamic storage mode and a static storage mode. The BS-SRAM includes a plurality of D/SRAM cells arranged in an array and a controller to shift data from an adjacent D/SRAM cell in a second row of the array to a D/SRAM cell in a first row. The method includes switching the mode of, coupling data from an adjacent memory cell to, and storing the coupled data in, a selected D/SRAM cell.
Type: Grant
Filed: January 30, 2012
Date of Patent: October 11, 2016
Assignee: Hewlett Packard Enterprise Development LP
Inventor: Frederick A. Perner
-
Patent number: 9405519
Abstract: A method and a system for register clearing in data flow analysis in decompilation are provided. The method includes: reading all function statements in a code file; sequentially judging each of the read function statements, and creating a binary tree and inputting the function statement into the binary tree in a case that the function statement includes a register name; sequentially judging each of the function statements including the register name, and performing an elimination process on the created binary tree to remove the register name from the binary tree in a case that the function statement includes a right child end tag of the binary tree, to generate a simplest binary tree; and generating a function statement in high-level language based on the simplest binary tree. All function statements can be read at a time and multiple reading and writing are avoided in the invention.
Type: Grant
Filed: November 23, 2012
Date of Patent: August 2, 2016
Assignees: Electric Power Research Institute of State Grid Zhejiang Electric Power Company, State Grid Corporation of China
Inventors: Jiong Zhu, Li Yao, Shaoteng Li, Yi Lou, Yingjun Hu, Xing Wu
-
Patent number: 9384797
Abstract: A memory control method includes assigning, based on a table to which an allocated device that executes a first process in a first application is registered, the first process in the first application to the allocated device registered; notifying a port connector of identification information of a port of memory, the port to be used by the first application, and registering a number of the port into the table; and allocating a storage area to the port and registering an address of the storage area into the table.
Type: Grant
Filed: July 24, 2013
Date of Patent: July 5, 2016
Assignee: FUJITSU LIMITED
Inventors: Hiromasa Yamauchi, Koichiro Yamashita, Takahisa Suzuki, Koji Kurihara, Toshiya Otomo
-
Patent number: 9081514
Abstract: The operation of a FIFO buffer memory includes writing the data at input to the memory in a single write location, and making the single write location available for writing an input datum with a shift of the datum written in the single write location to another location of the memory. At each operation of writing of an input datum in the single write location, there is scheduled shifting of the datum written therein to another location, without waiting for a new write request, thus eliminating the combinational constraint between the two operations.
Type: Grant
Filed: December 6, 2011
Date of Patent: July 14, 2015
Assignee: STMICROELECTRONICS S.R.L.
Inventors: Mirko Dondini, Cristina Nastasi
-
Publication number: 20150134898
Abstract: Systems and methods are provided for managing access to registers. In one embodiment, a system may include a processor and a plurality of registers. The processor and the plurality of registers may be integrated into a single device, or may be in separate devices. The plurality of registers may include a first set of registers that are directly accessible by the processor, and a second set of registers that are not directly accessible by the processor. The second set of registers may, however, be accessed indirectly by the processor via the first set of registers. In one embodiment, the first set of registers may include a register for selecting a register bank from the second set of registers, and a register for selecting a particular address within the register bank, to allow indirect access by the processor to the registers of the second set.
Type: Application
Filed: January 19, 2015
Publication date: May 14, 2015
Inventors: Harold B. Noyes, Mark Jurenka, Gavin Huggins
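The bank-select/address-select/data pattern in this abstract is a common indirect-access idiom, and a small software model makes the sequence explicit. Everything below is an assumption for illustration (register names, bank count, widths); real hardware would latch the data register into the selected banked register itself rather than have software do it.

```c
#include <stdint.h>

#define N_BANKS   8
#define BANK_SIZE 64

/* First set: directly accessible registers (modeled as plain variables). */
static volatile uint32_t REG_BANK_SELECT;
static volatile uint32_t REG_ADDR_SELECT;
static volatile uint32_t REG_DATA;

/* Second set: registers reachable only through the window above. */
static uint32_t banked_regs[N_BANKS][BANK_SIZE];

static void indirect_write(uint32_t bank, uint32_t addr, uint32_t value)
{
    REG_BANK_SELECT = bank % N_BANKS;     /* pick the register bank        */
    REG_ADDR_SELECT = addr % BANK_SIZE;   /* pick the address in that bank */
    REG_DATA        = value;              /* stage the value to be latched */
    banked_regs[REG_BANK_SELECT][REG_ADDR_SELECT] = REG_DATA;
}

static uint32_t indirect_read(uint32_t bank, uint32_t addr)
{
    REG_BANK_SELECT = bank % N_BANKS;
    REG_ADDR_SELECT = addr % BANK_SIZE;
    return banked_regs[REG_BANK_SELECT][REG_ADDR_SELECT];
}
```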
-
Publication number: 20150113215
Abstract: Provided are a device and computer readable storage medium for programming a memory module to initiate a training mode in which the memory module transmits continuous bit patterns on a side band lane of the bus interface; receiving the bit patterns over the bus interface; determining from the received bit patterns a transition of values in the bit pattern to determine a data eye between the determined transitions of the values; and determining a setting to control a phase interpolator to generate interpolated signals used to sample data within the determined data eye.
Type: Application
Filed: December 23, 2014
Publication date: April 23, 2015
Inventors: Tonia G. MORRIS, Jonathan C. JASPER, Arnaud J. FORESTIER
-
Publication number: 20150095565
Abstract: Provided are a device and computer readable storage medium for programming a memory module to initiate a training mode in which the memory module transmits continuous bit patterns on a side band lane of the bus interface; receiving the bit patterns over the bus interface; determining from the received bit patterns a transition of values in the bit pattern to determine a data eye between the determined transitions of the values; and determining a setting to control a phase interpolator to generate interpolated signals used to sample data within the determined data eye.
Type: Application
Filed: September 27, 2013
Publication date: April 2, 2015
Inventors: Tonia G. MORRIS, Jonathan C. JASPER, Arnaud J. FORESTIER
-
Patent number: 8966168
Abstract: An information memory system in which data received is divided into pieces of data, which are stored in memories in parallel, includes a controller configured to store a number of the divided pieces of data and monitor a read request and a buffer full notice; in a case where the number of read requests does not reach the number of valid memory units and the buffer full notice continues in all buffers except for one buffer which does not output the read request, perform a read control corresponding to the buffers which output the buffer full notice; and perform control of the integration of a piece of data reconstructed after being read from the memory unit corresponding to the buffer which does not output the read request and the pieces of data read from the memory units corresponding to the buffers which output the buffer full notice.
Type: Grant
Filed: January 23, 2013
Date of Patent: February 24, 2015
Assignee: Kabushiki Kaisha Toshiba
Inventor: Yuichiro Hanafusa
-
Publication number: 20150046644
Abstract: Shiftable memory that supports defragmentation includes a memory having built-in shifting capability, and a memory defragmenter to shift a page of data representing a contiguous subset of data stored in the memory from a first location to a second location within the memory to be adjacent to another page of stored data. A method of memory defragmentation includes defining an array in memory cells of the shiftable memory and performing a memory defragmentation using the built-in shifting capability of the shiftable memory to shift a data page stored in the array.
Type: Application
Filed: March 2, 2012
Publication date: February 12, 2015
Inventor: Alan H. Karp
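The defragmentation step itself (move a contiguous page so it sits adjacent to another stored page, leaving one large free region) is easy to model in software. In the publication the shift is performed by the memory's built-in capability; in the sketch below memmove merely stands in for that, and the page size, word type, and function name are assumptions.

```c
#include <stdint.h>
#include <string.h>

#define PAGE_WORDS 256   /* size of one data page, in words (assumption) */

/* Shift the page starting at index `src` so it begins at index `dst`,
 * i.e. immediately after the page already stored there (dst <= src).
 * memmove models the shiftable memory's built-in shift. */
static void defrag_shift(uint32_t *mem, size_t dst, size_t src)
{
    if (dst >= src)
        return;                            /* already adjacent: nothing to do */
    memmove(&mem[dst], &mem[src], PAGE_WORDS * sizeof mem[0]);
}
```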
-
Patent number: 8954687
Abstract: A computer system includes a controller coupled to a plurality of memory modules each of which includes a memory hub and a plurality of memory devices. The memory hub includes a row cache memory that stores data as they are read from the memory devices. When the memory module is not being accessed by the controller, a sequencer in the memory module generates requests to read data from a row of memory cells. The data read responsive to the generated read requests are also stored in the row cache memory. As a result, read data in the row being accessed may be stored in the row cache memory even though the data was not previously read from the memory device responsive to a memory request from the controller.
Type: Grant
Filed: May 27, 2005
Date of Patent: February 10, 2015
Assignee: Micron Technology, Inc.
Inventor: Joseph M. Jeddeloh
-
Publication number: 20150032952
Abstract: A clock control circuit for a parallel in, serial out (PISO) shift register helps save power. The clock control circuit selectively clocks the shift register as it converts a parallel input to a serial output. For example, the clock control circuit may provide clock signals to the flip flops (or other buffers) in the shift register that will receive data elements provided with the parallel input. However, the clock control circuit withholds clock signals from flip flops that will not receive data elements provided with the parallel input, or that have already been received by a particular flip flop. As the parallel loaded input elements propagate serially through the shift register, on each clock cycle an additional memory no longer needs to be clocked. The memory no longer needs to be clocked because that memory has already propagated its loaded input element to the following memory, and no further element provided in the N element parallel loaded data is incoming.
Type: Application
Filed: August 8, 2013
Publication date: January 29, 2015
Applicant: Broadcom Corporation
Inventor: Jeffrey Allan Riley
-
Publication number: 20150006809
Abstract: A shiftable memory supporting bimodal data storage includes a memory having built-in shifting capability to shift a contiguous subset of data stored in the memory from a first location to a second location within the memory. The shiftable memory further includes a bimodal data storage operator to operate on a data structure comprising the contiguous subset of data words and to provide in-place insertion of a data value using the built-in shifting capability.
Type: Application
Filed: March 2, 2012
Publication date: January 1, 2015
Inventors: Stavros Harizopoulos, Alkiviadis Simitsis, Harumi Kuno
-
Patent number: 8874837
Abstract: An integrated circuit can include a programmable circuitry operable according to a first clock frequency and a block random access memory. The block random access memory can include a random access memory (RAM) element having at least one data port and a memory processor coupled to the data port of the RAM element and to the programmable circuitry. The memory processor can be operable according to a second clock frequency that is higher than the first clock frequency. Further, the memory processor can be hardwired and dedicated to perform operations in the RAM element of the block random access memory.
Type: Grant
Filed: November 8, 2011
Date of Patent: October 28, 2014
Assignee: Xilinx, Inc.
Inventors: Christopher E. Neely, Gordon J. Brebner
-
Publication number: 20140317345
Abstract: A method, computer readable medium, and apparatus for implementing a compression are disclosed. For example, the method receives a first portion of an input data at a first register, determines a first address based upon the first portion of the input data, reads the first address in a memory to determine if a value stored in the first address is zero, stores a code for the first address of the memory in the first register if the value of the first address is zero, receives a second portion of the input data at a second register, determines a second address based upon the second portion of the input data in the memory, obtains the code from the first register if the second address and the first address are the same and writes the code from the first register in the first address of the memory.
Type: Application
Filed: April 18, 2013
Publication date: October 23, 2014
Applicant: Xerox Corporation
Inventor: XING LI
-
Publication number: 20140310453
Abstract: A shiftable memory supporting atomic operation employs built-in shifting capability to shift a contiguous subset of data from a first location to a second location within memory during an atomic operation. The shiftable memory includes the memory to store data. The memory has the built-in shifting capability. The shiftable memory further includes an atomic primitive defined on the memory to operate on the contiguous subset.
Type: Application
Filed: October 27, 2011
Publication date: October 16, 2014
Inventors: Wojciech Golab, Matthew D. Picket, Alan H. Karp
-
Publication number: 20140304467
Abstract: Shiftable memory employs ring registers to shift a contiguous subset of data words stored in the ring registers within the shiftable memory. A shiftable memory includes a memory having built-in word-level shifting capability. The memory includes a plurality of ring registers to store data words. A contiguous subset of data words is shiftable between sets of the ring registers of the plurality from a first location to a second location within the memory. The contiguous subset of data words has a size that is smaller than a total size of the memory. The memory shifts only data words stored inside the contiguous subset when the contiguous subset is shifted.
Type: Application
Filed: October 27, 2011
Publication date: October 9, 2014
Inventors: Matthew D. Pickett, R. Stanley Williams, Gilberto M. Ribeiro
-
Publication number: 20140289455
Abstract: A patching circuit for patching a memory 2 is disclosed. An address register 1 is configured to store a first memory address. A comparison unit 4 is configured to receive a second memory address from an address bus 5, and to receive the first memory address. The comparison unit is further configured to compare the first memory address with the second memory address. A selecting unit 7 is configured to receive a value from a data register 3 associated with the address register 1, and a value from an input data bus 8, wherein the second value corresponds to the value stored in a position of the memory 2 identified by the second memory address. The selecting unit 7 is further configured to select one of the values based on the comparison performed, and to send the value to an output data bus 10.
Type: Application
Filed: September 16, 2013
Publication date: September 25, 2014
Applicant: Dialog Semiconductor B.V.
Inventors: Jakobus Johannes Verhallen, Gerardus Antionius Maria Wolters, Nikolaos Moschopoulos, Konstantinos Ninos
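The compare-and-substitute behaviour described here is straightforward to model: a read whose address matches the patch address register returns the associated data register instead of the memory contents. The sketch below assumes a single address/data pair and invented names; real patching circuits typically provide several such pairs.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    bool     enabled;
    uint32_t patch_addr;   /* address register: the "first memory address" */
    uint32_t patch_data;   /* data register associated with that address   */
} patch_slot_t;

/* Model of the comparison + selecting units on a read access. */
static uint32_t patched_read(const uint32_t *memory, uint32_t addr,
                             const patch_slot_t *p)
{
    uint32_t from_mem = memory[addr];        /* value on the input data bus   */
    if (p->enabled && p->patch_addr == addr)
        return p->patch_data;                /* addresses match: use the patch */
    return from_mem;                         /* otherwise pass memory through  */
}
```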
-
Patent number: 8832393
Abstract: In described embodiments, a multiple first-in, first-out buffer pointers (multi-FIFO pointers) alignment system includes synchronization circuitry to align multiple FIFO buffer operations. A FIFO read clock stoppage signal is generated by master logic that stops the read clock shared by all the transmit channels and then re-starts the read clock to align them. The FIFO read clock stoppage signal is applied to the read clock of all FIFOs which need to be aligned and, when rate change is needed, the FIFO read clock stoppage signal suspends the read clock, causing local write and read pointers to be reset. After the FIFO read clock stoppage signal is de-asserted, the read clock starts to all FIFOs concurrently, thereby aligning the channels.
Type: Grant
Filed: April 18, 2012
Date of Patent: September 9, 2014
Assignee: LSI Corporation
Inventors: Jung Ho Cho, Vladimir Sindalovsky, Lane A. Smith
-
Publication number: 20140223090
Abstract: An electronic apparatus that includes a controlled device with a plurality of control registers. A data bus is coupled between the controlled device and a processor, and an interface is configured to receive a plurality of portions of data read from or to be written to the plurality of control registers. The electronic apparatus also includes a correlation circuit configured to associate at least some of the plurality of portions of data with respective physical addresses of the plurality of control registers based on respective positions of the respective portions of data within the plurality.
Type: Application
Filed: February 1, 2013
Publication date: August 7, 2014
Applicant: Apple Inc
Inventor: Michael Ross Malone
-
Patent number: 8774305
Abstract: Circuitry for use in aligning bytes in a serial data signal (e.g., with deserializer circuitry that operates in part in response to a byte rate clock signal) includes a multistage shift register for shifting the serial data signal through a number of stages at least equal to (and in many cases, preferably more than) the number of bits in a byte. The output signal of any shift register stage can be selected as the output of this “bit slipping” circuitry so that any number of bits over a fairly wide range can be “slipped” to produce or help produce appropriately aligned bytes. The disclosed bit slipping circuitry is alternatively or additionally usable in helping to align (“deskew”) two or more serial data signals that are received via separate communication channels.
Type: Grant
Filed: June 3, 2013
Date of Patent: July 8, 2014
Assignee: Altera Corporation
Inventor: Richard Yen-Hsiang Chang
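The essence of the bit-slip circuitry is that the serial stream passes through more stages than a byte has bits, and realignment is nothing more than choosing which stage's output to tap. A small behavioural model follows; the stage count of 12 and all names are assumptions, not the disclosed design.

```c
#include <stdint.h>
#include <stdbool.h>

#define STAGES 12u          /* more stages than the 8 bits in a byte */

typedef struct {
    bool     stage[STAGES]; /* multistage shift register contents */
    unsigned slip;          /* which stage's output is currently selected */
} bitslip_t;

/* Clock one serial bit in and return the bit seen at the selected tap.
 * Slipping N bits is done simply by changing `slip` by N. */
static bool bitslip_shift(bitslip_t *b, bool bit_in)
{
    for (unsigned i = STAGES - 1; i > 0; i--)
        b->stage[i] = b->stage[i - 1];      /* shift the stream along */
    b->stage[0] = bit_in;
    return b->stage[b->slip % STAGES];      /* output of the chosen stage */
}
```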
-
Publication number: 20140149657
Abstract: An electronic apparatus may be provided that includes a processor to perform operations, and a memory subsystem including a plurality of parallel memory banks to store a two-dimensional (2D) array of data using a shifted scheme. Each memory bank may include at least two elements per bank word.
Type: Application
Filed: December 28, 2012
Publication date: May 29, 2014
Inventors: Radomir Jakovljevic, Aleksandar Beric, Edwin Van Dalen, Dragan Milicev
-
Publication number: 20140095786
Abstract: A semiconductor device includes a first stage register for storing events occurring for a first period, a second stage register for storing events occurring for a second period shorter than the first period, and a controller for controlling the second stage register to select events from the second stage register each having a reference value larger than a second threshold value to the first stage register and for controlling the first stage register to store events which are selected from the second stage register.
Type: Application
Filed: August 12, 2013
Publication date: April 3, 2014
Applicant: SK hynix Inc.
Inventors: Young-Suk MOON, Yong-Kee KWON, Hong-Sik KIM
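The promotion logic in this abstract, moving short-period events whose reference value exceeds a threshold into the long-period register, can be sketched as a simple filter-and-move step. The capacities, field names, and when the promotion runs are assumptions made for this sketch, not details from the publication.

```c
#include <stdint.h>
#include <stddef.h>

#define STAGE2_CAP 16     /* second-stage (short-period) register capacity */
#define STAGE1_CAP 8      /* first-stage  (long-period)  register capacity */

typedef struct { uint32_t id; uint32_t ref_value; } event_t;

typedef struct {
    event_t stage2[STAGE2_CAP]; size_t n2;
    event_t stage1[STAGE1_CAP]; size_t n1;
    uint32_t threshold2;        /* the "second threshold value" */
} event_regs_t;

/* At the end of each short period, promote qualifying events to stage 1
 * and keep the rest in stage 2. */
static void promote_events(event_regs_t *r)
{
    size_t keep = 0;
    for (size_t i = 0; i < r->n2; i++) {
        if (r->stage2[i].ref_value > r->threshold2 && r->n1 < STAGE1_CAP)
            r->stage1[r->n1++] = r->stage2[i];   /* selected: move to stage 1 */
        else
            r->stage2[keep++] = r->stage2[i];    /* not selected: stays put   */
    }
    r->n2 = keep;
}
```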
-
Publication number: 20140047214
Abstract: An aspect includes accessing a vector register in a vector register file. The vector register file includes a plurality of vector registers and each vector register includes a plurality of elements. A read command is received at a read port of the vector register file. The read command specifies a vector register address. The vector register address is decoded by an address decoder to determine a selected vector register of the vector register file. An element address is determined for one of the plurality of elements associated with the selected vector register based on a read element counter of the selected vector register. A word is selected in a memory array of the selected vector register as read data based on the element address. The read data is output from the selected vector register based on the decoding of the vector register address by the address decoder.
Type: Application
Filed: August 9, 2012
Publication date: February 13, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Bruce M. Fleischer, Thomas W. Fox, Hans M. Jacobson, Ravi Nair
-
Publication number: 20140025884
Abstract: A transactional memory (TM) of an island-based network flow processor (IB-NFP) integrated circuit receives a Stats Add-and-Update (AU) command across a command mesh of a Command/Push/Pull (CPP) data bus from a processor. A memory unit of the TM stores a plurality of first values in a corresponding set of memory locations. A hardware engine of the TM receives the AU, performs a pull across other meshes of the CPP bus thereby obtaining a set of addresses, uses the pulled addresses to read the first values out of the memory unit, adds the same second value to each of the first values thereby generating a corresponding set of updated first values, and causes the set of updated first values to be written back into the plurality of memory locations. Even though multiple count values are updated, there is only one bus transaction value sent across the CPP bus command mesh.
Type: Application
Filed: July 18, 2012
Publication date: January 23, 2014
Applicant: Netronome Systems, Inc.
Inventors: Gavin J. Stark, Benjamin J. Cahill
-
Publication number: 20140019679
Abstract: An apparatus and method are disclosed to implement digital signal processing operations involving multiply-accumulate (MAC) operations, by using a modified balanced data structure and accessing architecture. This architecture maintains a data-path connecting one address generation unit, one register file and one MAC execution unit. The register file has a hierarchical grouping organization of individual registers, which reduces bubble cycles caused by memory misalignments. This architecture uses parallel execution and can achieve two or more MAC operations per cycle.
Type: Application
Filed: July 8, 2013
Publication date: January 16, 2014
Inventors: PengFei Zhu, HongXia Sun, YongQiang Wu, Elio Guidetti
-
Patent number: RE45097
Abstract: An input/output processor for speeding the input/output and memory access operations for a processor is presented. The key idea of an input/output processor is to functionally divide input/output and memory access tasks into a compute-intensive part that is handled by the processor and an I/O- or memory-intensive part that is then handled by the input/output processor. An input/output processor is designed by analyzing common input/output and memory access patterns and implementing methods tailored to efficiently handle those commonly occurring patterns. One technique that an input/output processor may use is to divide memory tasks into high-frequency or high-availability components and low-frequency or low-availability components.
Type: Grant
Filed: February 2, 2012
Date of Patent: August 26, 2014
Assignee: Cisco Technology, Inc.
Inventors: Sundar Iyer, Nick McKeown