Patent Applications Published on August 18, 2016
  • Publication number: 20160239390
    Abstract: Methods, computer systems, and computer program products for configuring a redundant array of independent disks (RAID) by a processor device include configuring spare failover disks within the array to run as cold spares, such that the cold spare disks stay in a powered-down standby mode, wherein each cold spare disk is powered on individually at predetermined intervals, tested, and powered back down to standby mode.
    Type: Application
    Filed: February 13, 2015
    Publication date: August 18, 2016
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Richard C. MYERS, Randolph E. STIARWALT
  • Publication number: 20160239391
    Abstract: In a management computer, a memory stores: association information indicating an association among a first physical computer, a virtual computer implemented by the first physical computer, a first physical resource allocated to the virtual computer, and a user who uses the virtual computer; failure information indicating a failed physical resource; and an upper limit value for a destruction amount, the amount of physical resource that has failed while in use by the user. A processor calculates the destruction amount and, upon determining that the first physical resource has failed, that the destruction amount is equal to or less than the upper limit value, and that any of a plurality of physical computers includes a second physical resource, transmits to the first physical computer an instruction to allocate the second physical resource to the virtual computer as a replacement for the first physical resource.
    Type: Application
    Filed: January 31, 2014
    Publication date: August 18, 2016
    Applicant: HITACHI, LTD.
    Inventors: Tomoyuki SAGIYAMA, Tomohito UCHIDA
  • Publication number: 20160239392
    Abstract: For disaster recovery involving a first site and a disaster recovery site, where at least a portion of the management service metadata is not isolated within the management service, a failover process is initiated, including creating an initial snapshot of the distributed metadata state. In a failback process, a representation is created of state changes for the management service and a delta description is calculated therefrom. The delta description is transmitted to the first site; and a reverse replica is created, at the first site, of all the workload components from the disaster recovery site. The delta description is played back to restore a distributed metadata state that existed in the disaster recovery site and to re-create it in the first site.
    Type: Application
    Filed: February 16, 2015
    Publication date: August 18, 2016
    Inventors: Yu Deng, Ruchi Mahindru, HariGovind V. Ramasamy, Soumitra Sarkar, Long Wang
  • Publication number: 20160239393
    Abstract: Embodiments of the invention relate to fault recovery mechanisms for a three-dimensional (3-D) network on a processor array. One embodiment comprises a multidimensional switch network for a processor array. The switch network comprises multiple switches for routing packets between multiple core circuits of the processor array. The switches are organized into multiple planes. The switch network further comprises a redundant plane including multiple redundant switches. Multiple data paths interconnect the switches. The redundant plane is used to facilitate full operation of the processor array in the event of one or more component failures.
    Type: Application
    Filed: April 21, 2016
    Publication date: August 18, 2016
    Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, John E. Barth, JR., Andrew S. Cassidy, Subramanian Iyer, Paul A. Merolla, Dharmendra S. Modha
  • Publication number: 20160239394
    Abstract: This technology identifies one or more nodes with a failure, designates the identified nodes as ineligible to service any I/O operation, and disables their I/O ports. Another one or more nodes are selected to service any I/O operation of the identified nodes based on a stored failover policy. I/O operations are directed to the selected nodes for servicing, and serviced I/O operations are then routed via a switch to the identified nodes to execute with a storage device. An identification is made when the identified nodes are repaired; when the repair is identified, the ineligible designation is removed and the I/O ports of the identified nodes are re-enabled.
    Type: Application
    Filed: February 13, 2015
    Publication date: August 18, 2016
    Inventors: Venkata Ramprasad Darisa, Nandakumar Ravindranath Allu, Rajesh Nagarajan
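The policy-driven selection step in 20160239394 might look like the following sketch, assuming the stored failover policy is a simple per-node preference list (the data shapes and names here are invented for illustration, not taken from the patent):

```python
def select_failover(failed_node, failover_policy, eligible_nodes):
    """Pick the replacement node for a failed node from a stored policy.

    failover_policy maps a node to an ordered list of preferred partners;
    the first partner that is still eligible to service I/O wins.
    """
    for candidate in failover_policy.get(failed_node, []):
        if candidate in eligible_nodes:
            return candidate
    return None  # no eligible partner remains

policy = {"node-a": ["node-b", "node-c"], "node-b": ["node-a"]}
eligible = {"node-c"}  # suppose node-b has also failed
print(select_failover("node-a", policy, eligible))  # node-c
```

Keeping the policy as ordered preference lists lets an administrator express both a primary partner and fallbacks, and the eligibility set naturally shrinks as nodes are designated ineligible.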
  • Publication number: 20160239395
    Abstract: Methods, apparatuses, systems, and devices are described for redundancy management for a storage system including a plurality of storage devices. Approaches for redundancy management may involve storage device failure prediction techniques and/or a redundancy value associated with a data file. In one example, a copy of the file may be stored on at least two storage devices. Whether or not to store an additional copy of the file on another storage device may be based at least in part on the redundancy value for the file. In another example, a determination may be made whether to store a copy of the file on another storage device when a storage device storing a copy of the file is predicted to fail. Whether to store a copy of the file on another storage device may be based at least in part on a redundancy value associated with the file.
    Type: Application
    Filed: February 16, 2015
    Publication date: August 18, 2016
    Applicant: SEAGATE TECHNOLOGY LLC
    Inventors: CHRISTIAN BRUUN MADSEN, ANDREI KHURSHUDOV, ZACHARY ALEXANDER
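The placement decision in 20160239395 — combining a per-file redundancy value with a failure prediction — can be sketched as a single predicate (the function name and parameters are assumptions for illustration; the patent covers the broader redundancy-management approach, not this exact rule):

```python
def should_add_copy(current_copies, redundancy_value, failure_predicted):
    """Decide whether to place another copy of a file on a storage device.

    redundancy_value is the desired number of copies for this file.  When a
    device holding one copy is predicted to fail, that copy is treated as
    already lost, which may push the effective count below the target.
    """
    effective = current_copies - (1 if failure_predicted else 0)
    return effective < redundancy_value
```

So a file at its target of two copies gains a third copy only when one of its devices is predicted to fail, which matches the abstract's two examples.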
  • Publication number: 20160239396
    Abstract: For disaster recovery involving a first site and a disaster recovery site, where at least a portion of management service metadata not isolated within the management service, a failover process is initiated, including creating an initial snapshot of the distributed metadata state. In a failback process, a representation is created of state changes for the management service and a delta description is calculated therefrom. The delta description is transmitted to the first site; and a reverse replica is created, at the first site, of all the workload components from the disaster recovery site. The delta description is played back to restore a distributed metadata state that existed in the disaster recovery site and to re-create it in the first site.
    Type: Application
    Filed: November 21, 2015
    Publication date: August 18, 2016
    Inventors: Yu Deng, Ruchi Mahindru, HariGovind V. Ramasamy, Soumitra Sarkar, Long Wang
  • Publication number: 20160239397
    Abstract: Techniques for faster reconstruction of segments using a dedicated spare memory unit are described. Zone segments in memory units are associated with a dedicated spare memory unit. The zone segments are reconstructed in the dedicated spare memory unit in response to a failed memory unit except for an identified failed zone segment of the failed memory unit. The identified failed zone segment of the failed memory unit is retained in the dedicated spare unit. Other embodiments are described and claimed.
    Type: Application
    Filed: February 12, 2015
    Publication date: August 18, 2016
    Applicant: NETAPP, INC.
    Inventors: Arvind Thomas, Premnath Bysani
  • Publication number: 20160239398
    Abstract: Methods of geophysical prospecting and surveying are disclosed herein. The methods include obtaining a raw data set representing energy signatures recorded over an area of the earth and using a computer to form a final data set representing the physical properties of the area of the earth, the process including combining physical property data subsets into the final data set using a quality statistic for each physical property data subset, or for each datum of each physical property data subset, as a weighting factor to compute a weighted average.
    Type: Application
    Filed: May 20, 2015
    Publication date: August 18, 2016
    Inventors: Carl Joel Gustav SKOGMAN, Lars Erik Magnus BJORNEMO
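The combination step in 20160239398 reduces to a standard quality-weighted mean; a minimal sketch, with the value/weight pairs invented purely for illustration:

```python
def combine_subsets(subsets):
    """Combine physical-property estimates for one data point into a final
    datum, weighting each estimate by its quality statistic.

    subsets is a list of (value, quality) pairs.
    """
    total_weight = sum(quality for _, quality in subsets)
    return sum(value * quality for value, quality in subsets) / total_weight

# Two estimates for one grid point, the second with 3x the quality weight:
print(combine_subsets([(2.0, 1.0), (4.0, 3.0)]))  # 3.5
```

Higher-quality subsets dominate the final datum, so a noisy survey pass contributes little without being discarded outright.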
  • Publication number: 20160239399
    Abstract: A system and method for analyzing big data activities are disclosed. According to one embodiment, a system comprises a distributed file system for the entities and applications, wherein the applications include one or more of script applications, structured query language (SQL) applications, Not Only (NO) SQL applications, stream applications, search applications, and in-memory applications. The system further comprises a data processing platform that gathers, analyzes, and stores data relating to entities and applications. The data processing platform includes an application manager having one or more of a MapReduce manager, a script applications manager, a structured query language (SQL) applications manager, a Not Only (NO) SQL applications manager, a stream applications manager, a search applications manager, and an in-memory applications manager. The application manager identifies if the applications are one or more of slow-running, failed, killed, unpredictable, and malfunctioning.
    Type: Application
    Filed: February 18, 2016
    Publication date: August 18, 2016
    Applicant: Unravel Data Systems, Inc.
    Inventors: Shivnath Babu, Kunal Agarwal
  • Publication number: 20160239400
    Abstract: Provided are a computer program product, system, and method for embedding and executing trace functions in code to gather trace data. A plurality of trace functions are embedded in the code. For each embedded trace function, a trace level is included indicating the code to which the trace applies. The trace level comprises one of a plurality of levels. During execution of the code, the embedded trace functions whose level matches at least one specified level are executed. The embedded trace functions whose level does not match any specified level are not invoked.
    Type: Application
    Filed: April 22, 2016
    Publication date: August 18, 2016
    Inventors: Herve G.P. Andre, Yolanda Colpo, Enrique Q. Garcia, Mark E. Hack, Larry Juarez, Ricardo S. Padilla, Todd C. Sorenson
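The level-filtered tracing of 20160239400 can be sketched as follows; the level names and the module-level log are invented for illustration (the patent embeds the trace calls in the traced code itself, as done in `do_work` here):

```python
TRACE_LEVELS_ENABLED = {"ERROR", "WARN"}  # the levels specified at run time
trace_log = []

def trace(level, message):
    """Embedded trace function: records trace data only when its level
    is one of the specified (enabled) levels."""
    if level in TRACE_LEVELS_ENABLED:
        trace_log.append((level, message))

def do_work():
    trace("DEBUG", "entering do_work")    # DEBUG not enabled: filtered out
    trace("ERROR", "simulated failure")   # ERROR enabled: recorded

do_work()
print(trace_log)  # [('ERROR', 'simulated failure')]
```

Narrowing the enabled-level set keeps tracing overhead proportional to the detail actually requested, which is the usual motivation for leveled tracing.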
  • Publication number: 20160239401
    Abstract: A method to determine a relationship between inputs and outputs based on a parametric model may include receiving a data set that includes known inputs and corresponding known outputs associated with a component. The method also includes generating a parametric model to automatically determine a functionality of the component based on the data set by selecting the parametric model from multiple types of parametric models based on a data type associated with the data set. The method also includes determining whether the parametric model applies to the data set. The method also includes, responsive to determining that the parametric model applies to the data set, receiving a new output associated with the component. The method also includes determining a new input from the new output based on the parametric model.
    Type: Application
    Filed: February 16, 2015
    Publication date: August 18, 2016
    Inventor: Guodong LI
  • Publication number: 20160239402
    Abstract: A risk level of a software commit is assessed through the use of a classifier.
    Type: Application
    Filed: October 30, 2013
    Publication date: August 18, 2016
    Inventors: Gil Zieder, Boris Kozorovitzky, Ofer Eliassaf, Efrat Egozi Livi, Ohad Assulin
  • Publication number: 20160239403
    Abstract: An apparatus and method are provided for controlling debugging of program instructions that include a transaction, where the transaction is executed on processing circuitry and comprises a number of program instructions that execute to generate updates to state data, and where those updates are only committed if the transaction completes without a conflict. In addition to the processing circuitry, the apparatus has control storage for storing at least one watchpoint identifier, and the processing circuitry is then arranged, when detecting a watchpoint match condition with reference to the at least one watchpoint identifier during execution of a program instruction within the transaction, to create a pending watchpoint debug event. The processing circuitry is then responsive to execution of the transaction finishing to initiate a watchpoint debug event for the pending watchpoint debug event.
    Type: Application
    Filed: January 27, 2016
    Publication date: August 18, 2016
    Inventor: Michael John WILLIAMS
  • Publication number: 20160239404
    Abstract: An apparatus and method are provided for controlling debugging of program instructions executed on processing circuitry, where the program instructions include a transaction comprising a number of program instructions that execute to generate updates to state data, with the processing circuitry then committing the updates if the transaction completes without a conflict. In addition to the processing circuitry, the apparatus has control storage for storing stepping control data used to control operation of the processing circuitry. The processing circuitry is responsive to the stepping control data having a first value to operate in a single stepping mode, where the processing circuitry initiates a debug event following execution of each instruction.
    Type: Application
    Filed: January 27, 2016
    Publication date: August 18, 2016
    Inventor: Michael John WILLIAMS
  • Publication number: 20160239405
    Abstract: A data processing apparatus is provided comprising data processing circuitry and debug circuitry. The debug circuitry controls operation of the processing circuitry when operating in a debug mode. The data processing circuitry determines upon entry into a debug mode a current operating state of the data processing apparatus. The data processing circuitry allocates one of a plurality of instruction sets to be used as a debug instruction set depending upon the determined current operating state.
    Type: Application
    Filed: April 28, 2016
    Publication date: August 18, 2016
    Inventors: Michael John WILLIAMS, Richard Roy GRISENTHWAITE, Simon John CRASKE
  • Publication number: 20160239406
    Abstract: Mechanisms are provided for propagating source identification information from an application front-end system in an application layer to a data layer inspection system associated with a back-end system. An incoming user request is received, at the data layer inspection system, from a gateway system associated with the application front-end system. One or more outgoing statements targeting a back-end system are received at the data layer inspection system. The data layer inspection system accesses a mapping data structure based on the one or more outgoing statements to thereby correlate the one or more outgoing statements with the incoming user request. The data layer inspection system retrieves source identification information associated with the incoming user request based on the correlation of the one or more outgoing statements with the incoming user request. The data layer inspection system performs a data layer inspection operation based on the source identification information.
    Type: Application
    Filed: April 22, 2016
    Publication date: August 18, 2016
    Inventors: Ron Ben-Natan, Leonid Rodniansky
  • Publication number: 20160239407
    Abstract: Provided are methods and systems for automated generation of small scale integration tests to keep mocked input-output contract expectations of external objects synchronized with the actual implementation of the external objects. Such synchronization is achieved through automated creation of small scale integration tests by replacing expected input-output behaviors of mocked interactions with actual code sequences of the mocked interaction. The methods and systems utilize automated test generators with search-based software engineering methods to reuse and adapt developer written tests into new automatically generated tests.
    Type: Application
    Filed: February 18, 2015
    Publication date: August 18, 2016
    Applicant: GOOGLE INC.
    Inventor: Franjo IVANCIC
  • Publication number: 20160239408
    Abstract: Systems and methods for profiling application code are disclosed. The method is hybrid in nature: it may combine inserting instrumentation within the application code with periodic sample gathering, by employing a runtime app profile generator that provides the hybrid profiling infrastructure and is linked to the application code. An executable user application is then generated from the application code, and the executable user application is executed. The runtime app profile generator is then launched in response to the execution of the application code, and hybrid profiling results are generated by obtaining samples from the different threads of the executed application code and accumulating instrumented execution information. In some implementations, the hybrid profiling results capture even cold regions of the code and can also be used for a next round of profiling through automated targeted instrumentation.
    Type: Application
    Filed: February 1, 2016
    Publication date: August 18, 2016
    Inventors: Dineel D. Sule, Subrato K. De, Wilson Kwan
  • Publication number: 20160239409
    Abstract: A method for testing a web service using inherited test attributes is described. The method includes generating a test template for a web service entry point, in which a test template comprises a number of test attributes, generating a number of test elements based on the test template, in which a test element inherits the number of test attributes, and executing the number of test elements.
    Type: Application
    Filed: October 17, 2013
    Publication date: August 18, 2016
    Inventors: Ricardo Alexandre de Oliveira Staudt, Hugo Vares Vieira, Karine de Pinho Peralta, Mairo Pedrini
  • Publication number: 20160239410
    Abstract: Atomically updating shared data in a transactional memory system comprising transactional memory storage and a transactional memory enabled processor. The computer creates a pointer stored in a stable memory location that is used to locate shared data stored in a second memory location. The computer accesses the shared data and loads the pointer used to locate the accessed shared data into transactional memory storage. The computer updates the accessed shared data using copy-on-write, whereby the updated shared data is stored in a third memory location, and performs the atomic update of the shared data by updating the pointer such that it locates the updated shared data stored in the third memory location.
    Type: Application
    Filed: February 17, 2015
    Publication date: August 18, 2016
    Inventors: Bishwaranjan Bhattacharjee, Mustafa Canim, Mohammad S. Hamedani
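The copy-on-write pointer swap of 20160239410 can be sketched in Python; note that the patent performs the pointer update inside hardware transactional memory, which this sketch approximates with an ordinary lock (the `AtomicRef` class is an illustrative construct, not the patented mechanism):

```python
import threading

class AtomicRef:
    """A stable memory location holding a pointer to shared data.

    Readers always see either the old version or the new version of the
    data, never a partial update, because writers modify a private copy
    and then publish it with a single pointer swap.
    """
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()  # stands in for a hardware transaction

    def get(self):
        return self._value

    def update(self, fn):
        # Copy-on-write: build the new version aside, then swap the pointer.
        with self._lock:
            new_value = fn(dict(self._value))  # copy, then modify the copy
            self._value = new_value            # single atomic pointer swap

ref = AtomicRef({"count": 0})
snapshot = ref.get()
ref.update(lambda d: {**d, "count": d["count"] + 1})
print(ref.get())   # {'count': 1}
print(snapshot)    # {'count': 0} — readers of the old pointer are unaffected
```

The design choice mirrors the abstract: the shared data itself is never mutated in place; only the pointer in the stable location changes.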
  • Publication number: 20160239411
    Abstract: The present disclosure relates to examples of controlling recycling of blocks of memory. In one example implementation according to aspects of the present disclosure, a method comprises determining whether to reclaim one or more blocks of a memory. The method further comprises, in response to the determining, allocating at least one of the blocks to be written in accordance with the equalizing, the at least one block selected from a subset of the blocks in which a respective lifetime factor is below a threshold set prior to the allocating.
    Type: Application
    Filed: April 25, 2016
    Publication date: August 18, 2016
    Inventor: Radoslav Danilak
  • Publication number: 20160239412
    Abstract: A storage apparatus comprises a plurality of storage devices that form a storage volume, a data buffer, and a first control unit that controls the storage apparatus and the data buffer. Each storage device includes a nonvolatile memory that includes a plurality of erasable memory blocks, and a second control unit that controls the nonvolatile memory. The second control unit is configured to execute a garbage collection process. The first control unit is configured to save in the data buffer data received by the storage apparatus for storage in a particular storage device when the data are received during a time period in which the particular storage device is executing a garbage collection process, and to write the data saved in the data buffer into the particular storage device after the garbage collection process is completed.
    Type: Application
    Filed: August 26, 2015
    Publication date: August 18, 2016
    Inventor: Shintaro WADA
  • Publication number: 20160239413
    Abstract: Controlling garbage collection operations. The method includes setting up garbage collection to collect objects that are no longer in use in a managed code environment. The method further includes receiving managed code input specifying a desired quantum within which it is desired that garbage collection not be performed. The method further includes performing a computing operation to determine whether the desired quantum can likely be met. The method further includes running memory operations within the quantum without running the initialized garbage collection.
    Type: Application
    Filed: February 13, 2015
    Publication date: August 18, 2016
    Inventors: Maoni Zhang Stephens, Patrick Henri Dussud
  • Publication number: 20160239414
    Abstract: Embodiments of the invention provide a method and system for dynamic memory management implemented in hardware. In an embodiment, the method comprises storing objects in a plurality of heaps, and operating a hardware garbage collector to free heap space. The hardware garbage collector traverses the heaps and marks selected objects, uses the marks to identify a plurality of the objects, and frees the identified objects. In an embodiment, the method comprises storing objects in a heap, each of at least some of the objects including a multitude of pointers; and operating a hardware garbage collector to free heap space. The hardware garbage collector traverses the heap, using the pointers of some of the objects to identify others of the objects; processes the objects to mark selected objects; and uses the marks to identify a group of the objects, and frees the identified objects.
    Type: Application
    Filed: April 28, 2016
    Publication date: August 18, 2016
    Inventors: David F. Bacon, Perry S. Cheng, Sunil K. Shukla
  • Publication number: 20160239415
    Abstract: A server apparatus comprises a plurality of server-on-a-chip (SoC) nodes interconnected to each other through a node interconnect fabric. Each one of the SoC nodes has respective memory resources integral therewith. Each one of the SoC nodes has information computing resources accessible by one or more data processing systems. Each one of the SoC nodes is configured with memory access functionality enabling allocation of at least a portion of its memory resources to one or more other SoC nodes, and enabling allocation to it of at least a portion of the memory resources of one or more other SoC nodes based on a workload thereof.
    Type: Application
    Filed: February 12, 2016
    Publication date: August 18, 2016
    Inventors: Mark Bradley Davis, Barry Ross Evans, David James Borland
  • Publication number: 20160239416
    Abstract: A method for reading data from a storage unit of a flash memory, performed by a processing unit, includes at least the following steps: A first read command is received from a master device via a first access interface. It is determined whether data requested by the first read command has been cached in a first buffer, which caches continuous data obtained from a storage unit. When the requested data has not been cached in the first buffer, a second access interface is directed to read the data from the storage unit and store it in a second buffer, and the first access interface is directed to read the data from the second buffer and clock it out to the master device.
    Type: Application
    Filed: May 1, 2015
    Publication date: August 18, 2016
    Inventor: Yang-Chih Shen
  • Publication number: 20160239417
    Abstract: In one embodiment, a computing system includes a cache having one or more memories and a cache manager. The cache manager is able to receive a request to write data to a first portion of the cache, write the data to the first portion of the cache, update a first map corresponding to the first portion of the cache, receive a request to read data from the first portion of the cache, read from a storage communicatively linked to the computing system data according to the first map, and update a second map corresponding to the first portion of the cache. The cache manager may also be able to write data to the storage according to the first map.
    Type: Application
    Filed: April 22, 2016
    Publication date: August 18, 2016
    Inventors: Scott David Peterson, Christopher August Shaffer, Phillip E. Krueger
  • Publication number: 20160239418
    Abstract: In one embodiment, a computer-implemented method includes detecting a cache miss for a cache line. A resource is reserved on each of one or more remote computing nodes, responsive to the cache miss. A request for a state of the cache line on the one or more remote computing nodes is broadcast to the one or more remote computing nodes, responsive to the cache miss. A resource credit is received from a first remote computing node of the one or more remote computing nodes, responsive to the request. The resource credit indicates that the first remote computing node will not participate in completing the request. The resource on the first remote computing node is released, responsive to receiving the resource credit from the first remote computing node.
    Type: Application
    Filed: February 13, 2015
    Publication date: August 18, 2016
    Inventors: Garrett M. Drapala, Vesselina K. Papazova, Robert J. Sonnelitter, III
  • Publication number: 20160239419
    Abstract: Embodiments of the disclosure relate to optimizing a memory nest for a workload. Aspects include an operating system determining the cache/memory footprint of each work unit of the workload and assigning a time slice to each work unit of the workload based on the cache/memory footprint of each work unit. Aspects further include executing the workload on a processor by providing each work unit access to the processor for the time slice assigned to each work unit.
    Type: Application
    Filed: February 12, 2015
    Publication date: August 18, 2016
    Inventors: ANSU A. ABRAHAM, DANIEL V. ROSA, DONALD W. SCHMIDT
  • Publication number: 20160239420
    Abstract: In one embodiment, a system includes a processor and a memory communicatively coupled to the processor. The processor is configured to receive a write request associated with a cache pool, which comprises a plurality of disks. The write request comprises data associated with the write request. The processor is additionally configured to select a first disk from the plurality of disks using a life parameter associated with the first disk. The processor is further configured to cause the data associated with the write request to be written to the first disk.
    Type: Application
    Filed: February 16, 2015
    Publication date: August 18, 2016
    Inventors: Sandeep Agarwal, Anup Atluri, Ashokan Vellimalai, Deepu Syram Sreedhar M
  • Publication number: 20160239421
    Abstract: Embodiments of the disclosure relate to optimizing a memory nest for a workload. Aspects include an operating system determining the cache/memory footprint of each work unit of the workload and assigning a time slice to each work unit of the workload based on the cache/memory footprint of each work unit. Aspects further include executing the workload on a processor by providing each work unit access to the processor for the time slice assigned to each work unit.
    Type: Application
    Filed: September 11, 2015
    Publication date: August 18, 2016
    Inventors: ANSU A. ABRAHAM, DANIEL V. ROSA, DONALD W. SCHMIDT
  • Publication number: 20160239422
    Abstract: A cache system includes a processor chip to receive a processing unit address. The cache system also includes a comparator to compare the processing unit address to address information stored in an allocated tag subset of a tag memory of the processor chip to determine whether the processing unit address matches the address information. The cache system further includes a mapping device to map a portion of the address information to external memory data, temporarily stored in an allocated data memory subset and a corresponding data memory set of a data memory in the processor. Furthermore, the cache system includes a stacking loop to prioritize the allocated tag subset and a corresponding tag set when the processing unit address matches the address information.
    Type: Application
    Filed: February 17, 2015
    Publication date: August 18, 2016
    Inventors: DIPANKAR TALUKDAR, ALAN HERRING
  • Publication number: 20160239423
    Abstract: In order to prevent data thrashing and the resulting performance degradation, a computer system may maintain an application-layer cache space to more effectively use physical memory and, thus, significantly improve an application-memory hit ratio and reduce disk input-output operations. In particular, the computer system may maintain a managed memory cache that is separate from a page cache. The managed memory cache may be managed according to predefined caching rules that are separate from the caching rules in the operating system that are used to manage the page cache, and these caching rules may be application-aware. Subsequently, when data for an application is accessed, the computer system may prefetch the data and associated information from disk and store the information in the managed memory cache based on data correlations associated with the application.
    Type: Application
    Filed: February 17, 2015
    Publication date: August 18, 2016
    Applicant: LinkedIn Corporation
    Inventors: Zhenyun Zhuang, Haricharan K. Ramachandra, Badrinath K. Sridharan, Cuong H. Tran
  • Publication number: 20160239424
    Abstract: A method, hybrid server system, and computer program product prefetch data. A set of prefetch requests associated with one or more given datasets residing on the server system are received from a set of accelerator systems. A set of data is prefetched from a memory system residing at the server system for at least one prefetch request in the set of prefetch requests. The set of data satisfies the at least one prefetch request. The set of data that has been prefetched is sent to at least one accelerator system, in the set of accelerator systems, associated with the at least one prefetch request.
    Type: Application
    Filed: April 27, 2016
    Publication date: August 18, 2016
    Applicant: International Business Machines Corporation
    Inventors: Yuk Lung CHAN, Rajaram B. KRISHNAMURTHY, Carl Joseph PARRIS
  • Publication number: 20160239425
    Abstract: A method, hybrid server system, and computer program product prefetch data. A set of prefetch requests associated with one or more given datasets residing on the server system are received from a set of accelerator systems. A set of data is prefetched from a memory system residing at the server system for at least one prefetch request in the set of prefetch requests. The set of data satisfies the at least one prefetch request. The set of data that has been prefetched is sent to at least one accelerator system, in the set of accelerator systems, associated with the at least one prefetch request.
    Type: Application
    Filed: April 27, 2016
    Publication date: August 18, 2016
    Applicant: International Business Machines Corporation
    Inventors: Yuk Lung CHAN, Rajaram B. KRISHNAMURTHY, Carl Joseph PARRIS
  • Publication number: 20160239426
    Abstract: A computer-implemented method includes generating a vector that is a random number. Two or more residue functions are applied to the vector to produce a state signal including a different number of states. A set status of a set-associative storage container in a computer system is determined. The set status identifies whether each set of the set-associative storage container is enabled or disabled. One of the state signals is selected that has a same number of states as a number of the sets that are enabled. The selected state signal is mapped to the sets that are enabled to assign each of the states of the selected state signal to a corresponding one of the sets that are enabled. A set selection of the set-associative storage container is output based on the mapping to randomly select one of the sets that are enabled from the set-associative storage container.
    Type: Application
    Filed: February 18, 2015
    Publication date: August 18, 2016
    Inventors: Steven R. Carlough, Adam B. Collura
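The selection scheme above can be sketched with modulo operations as the residue functions: a random vector yields state signals with different state counts, and the signal whose state count matches the number of enabled sets indexes into them. The range of residues and the function name are illustrative assumptions.

```python
import random

def select_set(set_enabled, rng=random):
    """Randomly pick one enabled set of a set-associative container."""
    vector = rng.getrandbits(32)                  # random vector
    # Residue (modulo) functions producing 1..8-state signals.
    signals = {m: vector % m for m in range(1, 9)}
    enabled = [i for i, on in enumerate(set_enabled) if on]
    state = signals[len(enabled)]   # signal with matching state count
    return enabled[state]           # map state -> one enabled set

sets = [True, False, True, True]    # sets 0, 2, 3 enabled
choice = select_set(sets)
print(choice in (0, 2, 3))  # True: only an enabled set is ever selected
```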
  • Publication number: 20160239427
    Abstract: A computer-implemented method includes generating a vector that is a random number. Two or more residue functions are applied to the vector to produce a state signal including a different number of states. A set status of a set-associative storage container in a computer system is determined. The set status identifies whether each set of the set-associative storage container is enabled or disabled. One of the state signals is selected that has a same number of states as a number of the sets that are enabled. The selected state signal is mapped to the sets that are enabled to assign each of the states of the selected state signal to a corresponding one of the sets that are enabled. A set selection of the set-associative storage container is output based on the mapping to randomly select one of the sets that are enabled from the set-associative storage container.
    Type: Application
    Filed: March 11, 2016
    Publication date: August 18, 2016
    Inventors: Steven R. Carlough, Adam B. Collura
  • Publication number: 20160239428
    Abstract: A system, method, and computer program product are provided for implementing a reliable placement engine for a block device. The method includes the steps of tracking one or more parameters associated with a plurality of real storage devices (RSDs), generating a plurality of RSD objects in a memory associated with a first node, generating a virtual storage device (VSD) object in the memory, and selecting one or more RSD objects in the plurality of RSD objects based on the one or more parameters. Each RSD object corresponds to a particular RSD in the plurality of RSDs. The method also includes the step of, for each RSD object in the one or more RSD objects, allocating a block of memory in the RSD associated with the RSD object to store data corresponding to a first block of memory associated with the VSD object.
    Type: Application
    Filed: April 22, 2016
    Publication date: August 18, 2016
    Inventors: Philip Andrew White, Hank T. Hsieh
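A minimal sketch of the placement step described above: track per-RSD parameters and select RSD objects to back one VSD block. The scoring rule (prefer healthy RSDs with the most free space) and the 4 KiB block size are assumptions; the abstract only says selection is based on tracked parameters.

```python
# Illustrative placement engine: choose RSDs for replicas of a VSD block.

def place_block(rsds, replicas):
    """rsds: dict of RSD name -> {'free': bytes_free, 'healthy': bool}.
    Returns the RSDs chosen to hold replicas of one virtual block."""
    candidates = [name for name, p in rsds.items() if p["healthy"]]
    candidates.sort(key=lambda n: rsds[n]["free"], reverse=True)
    chosen = candidates[:replicas]
    for name in chosen:                 # allocate a 4 KiB block on each RSD
        rsds[name]["free"] -= 4096
    return chosen

rsds = {"rsd-a": {"free": 10_000_000, "healthy": True},
        "rsd-b": {"free": 5_000_000, "healthy": True},
        "rsd-c": {"free": 20_000_000, "healthy": False}}
chosen = place_block(rsds, 2)
print(chosen)  # ['rsd-a', 'rsd-b'] (rsd-c is excluded as unhealthy)
```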
  • Publication number: 20160239429
    Abstract: A data access system including a storage device and a processor, which includes one or more levels of cache (LOC). In response to data required by the processor not being within the LOC, the processor generates a physical address to be accessed within the storage device in order to retrieve the data. The storage device includes a main memory and a cache module, which is configured as a final level of cache (FLOC) to be accessed by the processor prior to accessing the main memory. The cache module includes a controller that, in response to the data not being cached within the LOC, converts the physical address into a virtual address within the FLOC. The FLOC uses the virtual address to determine whether the data is within the FLOC. If the data is not within the FLOC, the cache module or the processor retrieves the data from the main memory.
    Type: Application
    Filed: April 25, 2016
    Publication date: August 18, 2016
    Inventor: Sehat Sutardja
  • Publication number: 20160239430
    Abstract: A processing device receives a first request from a virtual machine to register a memory region to a hardware device. The processing device generates a first key for the memory region, wherein the memory region is not registered to the hardware device. The processing device generates a second key for a shared memory pool that is pinned and registered to the hardware device. The processing device generates a mapping of the first key to the second key. The processing device sends a response to the virtual machine that the memory region has been registered to the hardware device, the notification comprising the first key.
    Type: Application
    Filed: February 12, 2015
    Publication date: August 18, 2016
    Inventors: Michael Tsirkin, Marcel Apfelbaum
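The key-mapping idea above can be sketched as follows: the guest receives a first key for a region that was never actually registered, and that key maps to a second key for a shared, pre-registered pool. The class name and key values are illustrative assumptions.

```python
import itertools

# Hedged sketch: map per-region guest keys onto one shared-pool key.

class KeyMapper:
    _ids = itertools.count(1)

    def __init__(self, shared_pool_key):
        self.shared_pool_key = shared_pool_key  # real, device-registered key
        self.mapping = {}                       # first key -> second key

    def register_region(self, region):
        first_key = next(self._ids)   # key for the unregistered region
        self.mapping[first_key] = self.shared_pool_key
        return first_key              # returned to the virtual machine

    def resolve(self, first_key):
        return self.mapping[first_key]  # used on the actual data path

mapper = KeyMapper(shared_pool_key=0xBEEF)
k = mapper.register_region(region=(0x1000, 4096))
print(mapper.resolve(k) == 0xBEEF)  # True
```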
  • Publication number: 20160239431
    Abstract: A processor includes a processing core to execute an application comprising instructions encoding a transaction with a persistent memory via a non-persistent cache, wherein the transaction is to create a mapping from a virtual address space to a memory region identified by a memory region identifier (MRID) in the persistent memory, and tag a cache line of the non-persistent cache with the MRID, in which the cache line is associated with a cache line status, and a cache controller, in response to detecting a failure event, to selectively evict contents of the cache line to the memory region identified by the MRID based on the cache line status.
    Type: Application
    Filed: February 13, 2015
    Publication date: August 18, 2016
    Inventors: SHENG LI, SANJAY KUMAR, VICTOR W. LEE, RAJESH M. SANKARAN, SUBRAMANYA R. DULLOOR
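The selective-eviction step above can be sketched in a few lines: each cache line carries an MRID tag and a status, and on a failure event only lines whose status warrants it are flushed to the persistent region their MRID identifies. The dict representation and the "dirty" status value are illustrative assumptions.

```python
# Hedged sketch of MRID-tagged selective eviction on a failure event.

def flush_on_failure(cache_lines, persistent_memory):
    """cache_lines: list of dicts with 'mrid', 'addr', 'data', 'status'."""
    for line in cache_lines:
        if line["status"] == "dirty":          # selective eviction
            region = persistent_memory.setdefault(line["mrid"], {})
            region[line["addr"]] = line["data"]

pmem = {}
lines = [{"mrid": 7, "addr": 0x00, "data": b"x", "status": "dirty"},
         {"mrid": 7, "addr": 0x40, "data": b"y", "status": "clean"}]
flush_on_failure(lines, pmem)
print(pmem)  # {7: {0: b'x'}}: only the dirty line reached the MRID's region
```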
  • Publication number: 20160239432
    Abstract: In order to prevent data thrashing and the resulting performance degradation, a computer system may maintain an application-layer cache space to more effectively use physical memory and, thus, significantly improve an application-memory hit ratio and reduce disk input-output operations. In particular, the computer system may maintain a managed memory cache that is separate from the operating system's default page cache. The managed memory cache may be managed according to predefined caching rules that are separate from rules used to manage the page cache. Moreover, at least one of the data entries in the managed memory cache may have a page size that is smaller than a minimum page size of the page cache. Furthermore, at least some of the data entries in the managed memory cache may have different page sizes and, more generally, different associated predefined caching rules.
    Type: Application
    Filed: February 17, 2015
    Publication date: August 18, 2016
    Applicant: LinkedIn Corporation
    Inventors: Zhenyun Zhuang, Haricharan K. Ramachandra, Badrinath K. Sridharan, Cuong H. Tran
  • Publication number: 20160239433
    Abstract: Systems and methods for predictive cache replacement policies are provided. In particular, some embodiments dynamically capture and predict access patterns of data to determine which data should be evicted from the cache. A novel tree data structure can be dynamically built that allows for immediate use in the identification of developing patterns and the eviction determination. In some cases, the data can be dynamically organized into histograms, strings, and other representations allowing traditional analysis techniques to be applied. Data organized into histogram-like structures can also be converted into strings allowing for well-known string pattern recognition analysis. The pattern recognition and prediction techniques disclosed also have applications outside of caching.
    Type: Application
    Filed: April 22, 2016
    Publication date: August 18, 2016
    Inventor: Eitan Frachtenberg
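One idea in the abstract above, converting histogram-like access data into strings so ordinary string matching can spot a developing pattern, can be sketched as below; the letter encoding is an illustrative assumption.

```python
# Hedged sketch: encode an access trace as a string for pattern detection.

def access_string(accesses):
    """Map each accessed key to a letter in first-seen order."""
    letters, out = {}, []
    for key in accesses:
        letters.setdefault(key, chr(ord("A") + len(letters)))
        out.append(letters[key])
    return "".join(out)

s = access_string([10, 20, 30, 10, 20, 30, 10, 20])
print(s)               # ABCABCAB
print("ABC" * 2 in s)  # True: a repeating access pattern is developing
```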
  • Publication number: 20160239434
    Abstract: A computer-implemented method includes receiving a request to access a cache entry in a shared cache. The request references a synonym for the cache entry. A cache directory of the shared cache includes, for each cache entry of the shared cache, a first-ranked synonym slot for storing a most recently used synonym for the cache entry and a second-ranked synonym slot for storing a second most recently used synonym for the cache entry. The method includes, based on receiving the request, writing contents of the first-ranked synonym slot for the cache entry to the second-ranked synonym slot for the cache entry, and writing the synonym referenced in the request to the first-ranked synonym slot for the cache entry.
    Type: Application
    Filed: February 13, 2015
    Publication date: August 18, 2016
    Inventors: Deanna Postles Dunn Berger, Michael F. Fee, Arthur J. O'Neil, JR., Robert J. Sonnelitter, III
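The two-slot bookkeeping above reduces to a small update rule: on an access through a synonym, the first-ranked (most recently used) slot is demoted to the second-ranked slot and the referenced synonym takes the first slot. A minimal sketch, with assumed names:

```python
# Sketch of per-cache-entry synonym slots with MRU ranking.

class SynonymSlots:
    def __init__(self):
        self.first = None    # most recently used synonym
        self.second = None   # second most recently used synonym

    def access(self, synonym):
        self.second = self.first   # demote previous MRU
        self.first = synonym       # record new MRU

entry = SynonymSlots()
entry.access("virt-A")
entry.access("virt-B")
print(entry.first, entry.second)  # virt-B virt-A
```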
  • Publication number: 20160239435
    Abstract: A method and system for detecting tampering of authenticated memory blocks that are accessible by an untrusted host processor, by (1) periodically re-authenticating the memory blocks from a trusted computing environment, and (2) disabling access to the memory blocks by the untrusted host processor when the re-authentication fails. In one implementation, each of the memory blocks has an authentication code, and access to the memory blocks is disabled by disabling the untrusted host processor. The memory blocks may be re-authenticated sequentially or randomly, e.g., based on a random selection of block locations, or based on temporal randomness. The re-authenticating is preferably effected by an authentication module in the trusted computing environment.
    Type: Application
    Filed: February 18, 2015
    Publication date: August 18, 2016
    Inventors: Michael Kenneth Bowler, Andrew Alexander Elias
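The re-authentication loop above can be sketched from the trusted side: recompute each block's authentication code, compare it against the stored code, and disable access on the first mismatch, visiting blocks in random order. The choice of HMAC-SHA256 and all names here are assumptions, not details from the patent.

```python
import hmac, hashlib, random

KEY = b"trusted-environment-key"   # held only by the trusted environment

def mac(block):
    return hmac.new(KEY, block, hashlib.sha256).digest()

def reauthenticate(blocks, codes, rng=random):
    """blocks: list of bytes; codes: stored authentication codes.
    Returns False (host access disabled) on the first mismatch."""
    for i in rng.sample(range(len(blocks)), len(blocks)):  # random order
        if not hmac.compare_digest(mac(blocks[i]), codes[i]):
            return False          # tamper detected: disable host access
    return True

blocks = [b"boot-code", b"config"]
codes = [mac(b) for b in blocks]
print(reauthenticate(blocks, codes))  # True: all blocks authentic
blocks[1] = b"tampered"
print(reauthenticate(blocks, codes))  # False: tampering detected
```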
  • Publication number: 20160239436
    Abstract: A data security system includes a first computer system including: a memory for containing data, and a processing unit connected to the memory for locking and unlocking the memory, the processing unit including: a first identification unit for providing a first identification, a modifying unit connected to the first identification unit to provide a modified number, and a checking unit for receiving the modified number; and the first computer system connectible to a second computer system, the second computer system including: a second identification unit for providing a second identification, and an application for receiving the second identification to provide a further modified number; and the first computer system connectible to the checking unit to have the processing unit unlock the memory when the modified number and the further modified number are the same and lock the memory when the modified number and the further modified number are different.
    Type: Application
    Filed: April 25, 2016
    Publication date: August 18, 2016
    Inventors: Lev M. Bolotin, Simon B. Johnson
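The unlock check above compares two independently derived "modified numbers." A minimal sketch follows; the modifying function (XOR with a shared constant) is purely an assumption to make the comparison concrete.

```python
# Hedged sketch of the lock/unlock decision from matching modified numbers.

SALT = 0x5A5A   # assumed shared modifier

def modified(identification):
    return identification ^ SALT

def memory_unlocked(first_id, second_id):
    """Unlock only when both modified numbers are the same."""
    return modified(first_id) == modified(second_id)

print(memory_unlocked(0x1234, 0x1234))  # True: memory unlocked
print(memory_unlocked(0x1234, 0x9999))  # False: memory stays locked
```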
  • Publication number: 20160239437
    Abstract: Described herein are method and apparatus for servicing software components of nodes of a cluster storage system. During data-access sessions with clients, client IDs and file handles for accessing files are produced and stored to clients and stored (as session data) to each node. A serviced node is taken offline, whereby network connections to clients are disconnected. Each disconnected client is configured to retain its client ID and file handles and attempt reconnections. Session data of the serviced node is made available to a partner node (by transferring session data to the partner node). After clients have reconnected to the partner node, the clients may use the retained client IDs and file handles to continue a data-access session with the partner node since the partner node has access to the session data of the serviced node and thus will recognize and accept the retained client ID and file handles.
    Type: Application
    Filed: April 25, 2016
    Publication date: August 18, 2016
    Inventors: Nam Le, Paul Yuedong Mu, John Russell Boyles, John Eric Hoffman
  • Publication number: 20160239438
    Abstract: A processor includes a front end, an execution pipeline, and a binary translator. The front end includes logic to receive an instruction and to dispatch the instruction to a binary translator. The binary translator includes logic to determine whether the instruction includes a control-flow instruction, identify a source address of the instruction, identify a target address of the instruction, determine whether the target address is a known destination based upon the source address, and determine whether to route the instruction to the execution pipeline based upon the determination whether the target address is a known destination based upon the source address. The target address includes an address to which execution would indirectly branch upon execution of the instruction.
    Type: Application
    Filed: April 27, 2016
    Publication date: August 18, 2016
    Inventors: Petros Maniatis, Shantanu Gupta, Naveen Kumar
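The routing decision above can be sketched as a lookup: for a control-flow instruction, check whether the (source, target) pair is a known destination before letting it into the execution pipeline. The table contents and the "trap" outcome are illustrative assumptions.

```python
# Hedged sketch of the binary translator's known-destination check.

KNOWN_DESTINATIONS = {            # source address -> allowed target addresses
    0x401000: {0x402000, 0x403000},
}

def route_instruction(is_control_flow, source, target):
    if not is_control_flow:
        return "execute"          # non-branches pass straight through
    allowed = KNOWN_DESTINATIONS.get(source, set())
    return "execute" if target in allowed else "trap"

print(route_instruction(True, 0x401000, 0x402000))  # execute
print(route_instruction(True, 0x401000, 0x999999))  # trap
```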
  • Publication number: 20160239439
    Abstract: Methods and apparatuses regarding shared buffer arbitration for packet-based switching are described. A data packet may be received by a packet buffer including a first plurality of banks of memory units and a second plurality of banks of memory units. Each memory unit may store one cell of data and accommodate one access operation in one clock cycle. In an event that the data packet includes at least two cells of data, the at least two cells of the data packet may be alternately written into at least one memory unit in the first plurality of banks of memory units and at least one memory unit in the second plurality of banks of memory units. Cells of data packets may be read from the first plurality of banks of memory units and the second plurality of banks of memory units according to a time-division multiplexing (TDM) scheme.
    Type: Application
    Filed: April 21, 2016
    Publication date: August 18, 2016
    Inventor: Kuo-Cheng Lu
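The write path above can be sketched by alternating a packet's cells between the two groups of banks, so each single-ported memory unit sees at most one access per clock cycle. The list representation of the bank groups is an illustrative assumption.

```python
# Hedged sketch of alternating cell writes across two bank groups.

def write_packet(cells, banks_a, banks_b):
    """Alternate cells of one packet between bank group A and bank group B.
    Returns (group, slot) placements for each cell."""
    placements = []
    for i, cell in enumerate(cells):
        group = banks_a if i % 2 == 0 else banks_b
        group.append(cell)
        placements.append(("A" if i % 2 == 0 else "B", len(group) - 1))
    return placements

a, b = [], []
placements = write_packet(["c0", "c1", "c2"], a, b)
print(placements)  # [('A', 0), ('B', 0), ('A', 1)]
print(a, b)        # ['c0', 'c2'] ['c1']
```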