Patent Applications Published on March 2, 2017
  • Publication number: 20170060727
    Abstract: An on-chip system uses a time measurement circuit to trap code that takes longer than expected to execute by breaking code execution on excess time consumption.
    Type: Application
    Filed: November 4, 2016
    Publication date: March 2, 2017
    Inventor: Ingar Hanssen
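    The trap mechanism in this abstract can be sketched in software; the class name and timing API below are illustrative assumptions, and a real on-chip circuit would break execution mid-run rather than check after the fact:

```python
import time

class ExecutionWatchdog:
    """Software model of a time measurement circuit: code that exceeds
    its time budget triggers a trap (hypothetical API)."""
    def __init__(self, budget_seconds):
        self.budget = budget_seconds

    def run(self, func, *args):
        start = time.monotonic()
        result = func(*args)
        elapsed = time.monotonic() - start
        if elapsed > self.budget:
            # model the break-on-excess-time trap as an exception
            raise TimeoutError(f"trap: {elapsed:.3f}s over budget {self.budget}s")
        return result

watchdog = ExecutionWatchdog(budget_seconds=0.05)
doubled = watchdog.run(lambda x: x * 2, 21)  # fast code completes normally
```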
  • Publication number: 20170060728
    Abstract: In one embodiment, a system for program lifecycle testing includes receiving a request to test a program update at an interface. Using a processor, the system may then execute a validation test associated with the program update, wherein the validation test is conducted in a testing environment comprising a plurality of testing environment systems. The system may then use the processor to capture a current state of the testing environment at a start of the validation test, and confirm that the plurality of testing environment systems are operating according to the validation test. The system may then use the interface to receive testing results from the validation test and compare the testing results to previous test results from a prior program update. The system may then store the validation test results, the current state of the testing environment, and a name of the program update, in a performance database.
    Type: Application
    Filed: August 24, 2015
    Publication date: March 2, 2017
    Inventors: Steve C. Younger, Harshal L. Jambusaria, Mark O. Carter, Bharat Kumar Bathula, Abbner Uriel Torres Ramos
  • Publication number: 20170060729
    Abstract: A system and method for facilitating characterizing customized computing objects of a software application, such as a networked enterprise application. An example method includes identifying one or more custom computing objects of one or more software applications of a computing environment; determining one or more grouping criteria for grouping identified custom objects; grouping information pertaining to the one or more custom objects based on the one or more grouping criteria, resulting in one or more custom object groupings; and using the one or more custom object groupings, with reference to data characterizing one or more changes slated to be made to the software application, to generate one or more user interface display screens. In a more specific embodiment, the data characterizing one or more changes includes metadata characterizing core software application maintenance events, upgrades, and/or other modifications.
    Type: Application
    Filed: August 25, 2015
    Publication date: March 2, 2017
    Inventors: Shamus Kahl, Nathan Rooney, Stephen J. Willson, Rahul Jain, Saumyaranjan Acharya, Stephen Persky, Ankit Kapil
  • Publication number: 20170060730
    Abstract: Provided are techniques for auto-generating Representational State Transfer (REST) services for quality assurance. One or more test cases and artifacts are received for a project. A test Representational State Transfer (REST) service is generated for the project using the one or more test cases and the artifacts. The test REST service is deployed on an application server for use in testing features of a REST service client application.
    Type: Application
    Filed: August 28, 2015
    Publication date: March 2, 2017
    Inventors: Jeff J. Li, Wendi L. Nusbickel, Suraj R. Patel, Deepa R. Yarangatta
  • Publication number: 20170060731
    Abstract: Methods and systems for dynamically providing application analytic information are provided herein. The method includes inserting instrumentation points into an application file via an application analytic service and dynamically determining desired instrumentation points from which to collect application analytic data. The method also includes receiving, at the application analytic service, the application analytic data corresponding to the desired instrumentation points and analyzing the application analytic data to generate application analytic information. The method further includes sending the application analytic information to a client computing device.
    Type: Application
    Filed: November 10, 2016
    Publication date: March 2, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Lenin Ravindranath Sivalingam, Jitendra Padhye, Ian Obermiller, Ratul Mahajan, Sharad Agarwal, Ronnie Ira Chaiken, Shahin Shayandeh, Christopher M. Moore, Sirius Kuttiyan
  • Publication number: 20170060732
    Abstract: Techniques for automated bug detection. A set of inputs are collected and a snapshotting feature is used to apply each input to a test application. Outputs from the test application are gathered and compared to determine whether the outputs are associated with bugs. Comparison can be done with one or more of many different techniques that quantify difference between outputs associated with test inputs and outputs associated with a “happy path input.” Outputs can be grouped together based on these quantifications and the groups can be used to identify outputs most likely to be associated with bugs. The output groups may also be used to group associated inputs to the set of inputs to be used for testing in the future. When a bug is identified, a report could be automatically generated that includes a scoring value as well as recorded output information and could be presented to a user.
    Type: Application
    Filed: August 31, 2015
    Publication date: March 2, 2017
    Inventor: Marcello GOLFIERI
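    The output-grouping idea above can be sketched as follows; the difference metric, threshold, and input names are assumptions for illustration, not the patent's actual quantification:

```python
def group_by_divergence(outputs, happy_output, threshold=0.5):
    """Group test outputs by how far they diverge from the happy-path
    output; highly divergent outputs are the likeliest to reflect bugs."""
    def divergence(out):
        # crude metric (assumption): fraction of mismatched positions
        mismatches = sum(1 for a, b in zip(out, happy_output) if a != b)
        length_gap = abs(len(out) - len(happy_output))
        return (mismatches + length_gap) / max(len(out), len(happy_output))

    suspect, normal = [], []
    for inp, out in outputs.items():
        (suspect if divergence(out) > threshold else normal).append(inp)
    return suspect, normal

happy = "status:ok"
runs = {"input_a": "status:ok", "input_b": "stack trace", "input_c": "status:ok!"}
suspect, normal = group_by_divergence(runs, happy)
```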
  • Publication number: 20170060733
    Abstract: A method of determining code coverage of an application by one or more tests (wherein code may include UI elements). The method comprises executing a code test on at least one portion of a tested code of an application in a plurality of iterations, wherein each iteration comprises selecting at least one of a plurality of atomic code elements of the tested code, applying the code test on a version of the code that does not include the at least one selected atomic element, and classifying the at least one selected atomic element as covered by the code test when the code test fails and as not covered by the code test when the code test passes.
    Type: Application
    Filed: August 31, 2015
    Publication date: March 2, 2017
    Inventors: Aharon Abadi, Moria Abadi, Idan Ben-Harrush, Samuel Kallner
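    The delete-and-retest loop described above reads roughly as below; the element and test representations are purely illustrative:

```python
def classify_coverage(elements, run_test):
    """For each atomic code element, apply the test to a version of the
    code without that element: a failing test means the element is covered."""
    covered, not_covered = [], []
    for elem in elements:
        remaining = [e for e in elements if e is not elem]
        if run_test(remaining):          # test still passes without elem
            not_covered.append(elem)
        else:                            # test fails, so elem was exercised
            covered.append(elem)
    return covered, not_covered

# Toy stand-ins: the "test" passes only while the 'add' element is present.
elements = ["add", "log", "retry"]
run_test = lambda elems: "add" in elems
covered, not_covered = classify_coverage(elements, run_test)
```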
  • Publication number: 20170060734
    Abstract: A method, product and apparatus for creating a functional model of test cases. The method comprises obtaining a set of test cases, wherein each test case of the set of test cases comprises free-text; defining one or more tags, wherein each tag of the one or more tags is associated with a query that is configured, when applied, to determine possession of the tag with respect to a test case based on the free-text; applying the queries on the set of test cases to determine possession of the one or more tags for each test case; and generating a functional model based on the set of test cases, wherein the functional model comprises, for each tag of the one or more tags, a corresponding functional attribute.
    Type: Application
    Filed: August 30, 2015
    Publication date: March 2, 2017
    Inventors: Orna RAZ, Randall L. Tackett, Paul A. Wojciak, Marcel Zalmanovici, Aviad Zlotnick
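    The tag-query step can be sketched as below; the test-case text, tag names, and substring queries are hypothetical stand-ins for whatever queries a practitioner would define:

```python
def build_functional_model(test_cases, tag_queries):
    """Apply each tag's query to every free-text test case; each tag
    becomes a functional attribute of the resulting model (sketch)."""
    model = {}
    for name, text in test_cases.items():
        model[name] = {tag: query(text) for tag, query in tag_queries.items()}
    return model

test_cases = {
    "tc1": "login with valid password then check dashboard",
    "tc2": "transfer funds between two accounts",
}
tag_queries = {
    "auth": lambda t: "login" in t or "password" in t,
    "payments": lambda t: "funds" in t or "transfer" in t,
}
model = build_functional_model(test_cases, tag_queries)
```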
  • Publication number: 20170060735
    Abstract: According to an aspect of an embodiment, one or more systems or methods may be configured to locate a fault in a software program using a test suite. The systems or methods may be further configured to modify, using a repair template, the software program in response to locating the fault. In addition, the systems or methods may be configured to determine whether the modification satisfies an anti-pattern condition. The anti-pattern condition may indicate whether the modification is improper. The systems or methods may also be configured to disallow the modification in response to the modification satisfying the anti-pattern condition or perform further testing on the software program, as modified, in response to the modification not satisfying the anti-pattern condition.
    Type: Application
    Filed: August 25, 2015
    Publication date: March 2, 2017
    Inventors: Hiroaki YOSHIDA, Shin Hwei TAN, Mukul R. PRASAD
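    The locate-modify-check flow can be sketched as follows; the repair templates, anti-pattern predicate, and test suite here are toy callables, not the techniques the patent actually uses:

```python
def attempt_repair(program, fault, templates, is_anti_pattern, run_suite):
    """Apply repair templates to the located fault, disallowing any patch
    that satisfies an anti-pattern condition (all callables are assumed)."""
    for template in templates:
        patched = template(program, fault)
        if is_anti_pattern(patched):
            continue                      # modification is improper: reject it
        if run_suite(patched):
            return patched                # patch survives further testing
    return None

# Toy example: the "program" is a string with an off-by-one bug.
program = "return n - 1"
templates = [lambda p, f: p.replace("- 1", ""),          # bare deletion
             lambda p, f: p.replace("n - 1", "n + 1")]   # plausible fix
is_anti_pattern = lambda p: p.strip() == "return n"      # deletion is improper
run_suite = lambda p: "n + 1" in p
fixed = attempt_repair(program, fault=None, templates=templates,
                       is_anti_pattern=is_anti_pattern, run_suite=run_suite)
```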
  • Publication number: 20170060736
    Abstract: Methods and apparatuses pertaining to dynamic memory sharing may involve sharing a first portion of a memory associated with a first module for use by a second module. The first portion of the memory may be reclaimed for use by the first module in real time upon a determination that an increase in demand for the memory by the first module requires reclamation, such that the first module begins to use the first portion of the memory before the second module finishes aborting its use of that portion.
    Type: Application
    Filed: November 14, 2016
    Publication date: March 2, 2017
    Inventors: Chien-Liang Lin, Jing-Yen Huang, Peng-An Chen, Nicholas Ching Hui Tang, Chung-Jung Lee, Chin-Wen Chang
  • Publication number: 20170060737
    Abstract: A plurality of memory allocators are initialized within a computing system. At least a first memory allocator and a second memory allocator in the plurality of memory allocators are each customizable to efficiently handle a set of different memory request size distributions. The first memory allocator is configured to handle a first memory request size distribution. The second memory allocator is configured to handle a second memory request size distribution. The second memory request size distribution is different than the first memory request size distribution. At least the first memory allocator and the second memory allocator that have been configured are deployed within the computing system in support of at least one application. Deploying at least the first memory allocator and the second memory allocator within the computing system improves at least one of performance and memory utilization of the at least one application.
    Type: Application
    Filed: November 4, 2016
    Publication date: March 2, 2017
    Applicant: International Business Machines Corporation
    Inventor: Arun IYENGAR
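    The idea of deploying multiple allocators tuned to different request-size distributions can be sketched as below; the size classes and dispatch rule are illustrative assumptions:

```python
class SizeClassAllocator:
    """One allocator configured for a request-size range (illustrative)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.served = 0
    def handles(self, size):
        return self.lo <= size < self.hi
    def alloc(self, size):
        self.served += 1
        return bytearray(size)

small = SizeClassAllocator(0, 256)        # tuned for a small-request distribution
large = SizeClassAllocator(256, 1 << 20)  # tuned for a large-request distribution

def allocate(size, allocators=(small, large)):
    # dispatch each request to the allocator configured for its size class
    for a in allocators:
        if a.handles(size):
            return a.alloc(size)
    raise MemoryError(size)

buf = allocate(100)
```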
  • Publication number: 20170060738
    Abstract: A memory system and method are provided for performing garbage collection on blocks based on their obsolescence patterns. In one embodiment, a controller of a memory system classifies each of the plurality of blocks based on its obsolescence pattern and performs garbage collection only on blocks classified with similar obsolescence patterns. Other embodiments are possible, and each of the embodiments can be used alone or together in combination.
    Type: Application
    Filed: August 25, 2015
    Publication date: March 2, 2017
    Applicant: SanDisk Technologies Inc.
    Inventors: Amir Shaharabany, Hadas Oshinsky, Rotem Sela
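    Grouping blocks by obsolescence pattern before collecting them might look like the sketch below; the block fields and the most-obsolete-group selection rule are assumptions:

```python
from collections import defaultdict

def select_gc_candidates(blocks):
    """Classify blocks by obsolescence pattern and collect garbage only
    within a group of similarly-patterned blocks (field names assumed)."""
    groups = defaultdict(list)
    for blk in blocks:
        groups[blk["pattern"]].append(blk)

    def obsolete_ratio(group):
        # prefer the group whose blocks hold the most obsolete pages
        return (sum(b["obsolete_pages"] for b in group)
                / sum(b["total_pages"] for b in group))

    return max(groups.values(), key=obsolete_ratio)

blocks = [
    {"id": 0, "pattern": "hot", "obsolete_pages": 60, "total_pages": 64},
    {"id": 1, "pattern": "hot", "obsolete_pages": 50, "total_pages": 64},
    {"id": 2, "pattern": "cold", "obsolete_pages": 4, "total_pages": 64},
]
victims = select_gc_candidates(blocks)
```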
  • Publication number: 20170060739
    Abstract: A dispersed storage and task network (DSTN) includes a site housing current distributed storage and task (DST) execution units. A determination is made to add new DST execution units to the site. A first address range assigned to the plurality of current DST execution units is obtained, and a common magnitude of second address ranges to be assigned to each of the new DST execution units and the current DST execution units is determined based, at least in part, on the first address range. Insertion points for each of the plurality of new DST execution units are determined, and transfer address ranges are determined in accordance with the insertion points. Transfer address ranges correspond to at least the part of the first address ranges to be transferred to the new DST execution units. Address range assignments are transferred from particular current DST execution units to particular new DST execution units.
    Type: Application
    Filed: November 9, 2016
    Publication date: March 2, 2017
    Inventors: Andrew D. Baptist, Greg R. Dhuse, Manish Motwani, Ilya Volvovski
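    The common-magnitude split of the first address range can be sketched arithmetically; the numeric range and unit counts below are made up for illustration:

```python
def split_address_range(first_lo, first_hi, current_units, new_units):
    """Divide the first address range into sub-ranges of a common
    magnitude for the combined current and new DST execution units."""
    total = current_units + new_units
    magnitude = (first_hi - first_lo) // total      # common magnitude per unit
    ranges = [(first_lo + i * magnitude,
               first_lo + (i + 1) * magnitude) for i in range(total)]
    ranges[-1] = (ranges[-1][0], first_hi)          # last unit absorbs remainder
    return magnitude, ranges

magnitude, ranges = split_address_range(0, 1000, current_units=3, new_units=2)
```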
  • Publication number: 20170060740
    Abstract: Embodiments provide adaptive storage management for optimizing multitier data storage. A storage manager may interact with storage decision advisors. The manager may adaptively make storage management decisions (e.g., flush, evict, recall, delete) after considering recommendations from and the credibility of the storage decision advisors. The manager may update the credibility of storage decision advisors based on how their recommendations affected optimization. The manager may adaptively choose when to rebalance or reconfigure the credibility of the storage decision advisors. Storage decision advisors may themselves be adaptive. Storage decision advisors may examine credibility feedback from the storage manager to determine which recommendations were useful and which were not. Storage decision advisors may then change when they will make a recommendation, when they will abstain from making a recommendation, the type of recommendation provided, or other behavior.
    Type: Application
    Filed: September 1, 2015
    Publication date: March 2, 2017
    Inventor: Don Doerner
  • Publication number: 20170060741
    Abstract: A capture service running on an application server receives events from a client application running on an application server to be stored in a data store and stores the events in an in-memory bounded buffer on the application server, the in-memory bounded buffer comprising a plurality of single-threaded segments, the capture service to write events to each segment in parallel. The in-memory bounded buffer provides a notification to a buffer flush regulator when a number of events stored in the in-memory bounded buffer reaches a predefined limit. The in-memory bounded buffer receive a request to flush the events in the in-memory bounded buffer from a consumer executor service. The consumer executor service consumes the events in the in-memory bounded buffer using a dynamically sized thread pool of consumer threads to read the segments of the bounded buffer in parallel, wherein consuming the events comprises writing the events directly to the data store.
    Type: Application
    Filed: August 12, 2016
    Publication date: March 2, 2017
    Inventors: Aakash Pradeep, Adam Torman, Alex Warshavsky, Samarpan Jain, Soumen Bandyopadhyay, Thomas William D'Silva, Abhishek Bangalore Sreenivasa
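    The segmented bounded buffer with a flush-limit notification can be sketched as below; the segment count, limit, and event names are assumptions, and a production version would hand the drained events to consumer threads rather than return them:

```python
import threading

class SegmentedBoundedBuffer:
    """Sketch of an in-memory bounded buffer: events spread across
    independently locked segments, with a flush requested at a limit."""
    def __init__(self, segments=4, limit=8):
        self.segments = [[] for _ in range(segments)]
        self.locks = [threading.Lock() for _ in range(segments)]
        self.limit = limit
        self.flush_requested = False

    def append(self, event):
        i = hash(event) % len(self.segments)
        with self.locks[i]:                 # segments are written independently
            self.segments[i].append(event)
        if sum(len(s) for s in self.segments) >= self.limit:
            self.flush_requested = True     # notify the buffer flush regulator

    def flush(self):
        drained = [e for seg in self.segments for e in seg]
        for seg in self.segments:
            seg.clear()
        self.flush_requested = False
        return drained                      # consumer writes these to the store

buf = SegmentedBoundedBuffer(segments=2, limit=3)
for e in ("login", "click", "logout"):
    buf.append(e)
```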
  • Publication number: 20170060742
    Abstract: A machine-implemented method for controlling transfer of at least one data item from a data cache component, in communication with storage using at least one relatively higher-latency path and at least one relatively lower-latency path, comprises: receiving metadata defining at least a first characteristic of data selected for inspection; responsive to the metadata, seeking a match between said at least first characteristic and a second characteristic of at least one of a plurality of data items in the data cache component; selecting said at least one of the plurality of data items where the at least one of the plurality of data items has the second characteristic matching the first characteristic; and passing the selected one of the plurality of data items from the data cache component using the relatively lower-latency path.
    Type: Application
    Filed: August 22, 2016
    Publication date: March 2, 2017
    Applicant: Metaswitch Networks Ltd
    Inventors: Jim Wilkinson, Jonathan Lawn
  • Publication number: 20170060743
    Abstract: A method and apparatus of a device that manages virtual memory for a graphics processing unit is described. In an exemplary embodiment, the device manages a graphics processing unit working set of pages. In this embodiment, the device determines the set of pages of the device to be analyzed, where the device includes a central processing unit and the graphics processing unit. The device additionally classifies the set of pages based on a graphics processing unit activity associated with the set of pages and evicts a page of the set of pages based on the classifying.
    Type: Application
    Filed: October 28, 2016
    Publication date: March 2, 2017
    Inventor: Derek R. Kumar
  • Publication number: 20170060744
    Abstract: According to one embodiment, a tiered storage system includes a tiered storage device and a computer. The computer uses the tiered storage device, and includes a file system and a correction support unit. If an access request from an application is a write request to request overwriting of data, the file system executes a copy-on-write operation. The correction support unit causes the storage controller to carry over an access count managed by the storage controller and associated with the logical block address of a copy source in the copy-on-write operation, to an access count associated with the logical block address of a copy destination in the copy-on-write operation.
    Type: Application
    Filed: September 14, 2016
    Publication date: March 2, 2017
    Inventors: Shouhei Saitou, Shinya Ando
  • Publication number: 20170060745
    Abstract: A directory structure that may allow concurrent processing of write-back and clean victimization requests is disclosed. The directory structure may include a memory configured to store a plurality of entries, where each entry may include information indicative of a status of a respective entry in a cache memory. Update requests for the entries in the memory may be received and stored. A subset of previously stored update requests may be selected. Each update request of the subset of the previously stored update requests may then be processed concurrently.
    Type: Application
    Filed: August 25, 2015
    Publication date: March 2, 2017
    Inventors: Thomas Wicki, Jurgen Schulz, Paul Loewenstein
  • Publication number: 20170060746
    Abstract: In at least some embodiments, a processor core generates a store operation by executing a store instruction in an instruction sequence. The store operation is marked as a high priority store operation in response to detecting, in the instruction sequence, a barrier instruction that precedes the store instruction in program order and that includes a field set to indicate the store operation should be accorded high priority, and is not so marked otherwise. The store operation is buffered in a store queue associated with a cache memory of the processor core. Handling of the store operation in the store queue is expedited in response to the store operation being marked as a high priority store operation and not expedited otherwise.
    Type: Application
    Filed: August 28, 2015
    Publication date: March 2, 2017
    Inventors: GUY L. GUTHRIE, HUGH SHEN, JEFFREY A. STUECHELI, DEREK E. WILLIAMS
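    The expedited handling in this family of abstracts amounts to draining marked stores ahead of ordinary ones; a minimal software model (the queue API and store labels are illustrative, and real hardware would not use a heap) might look like:

```python
import heapq

class StoreQueue:
    """Sketch of expedited handling: high-priority stores (marked via a
    barrier hint in the abstract above) drain before ordinary stores."""
    HIGH, NORMAL = 0, 1

    def __init__(self):
        self._heap, self._seq = [], 0

    def push(self, store, high_priority=False):
        pri = self.HIGH if high_priority else self.NORMAL
        heapq.heappush(self._heap, (pri, self._seq, store))
        self._seq += 1                      # keep FIFO order within a priority

    def drain(self):
        out = []
        while self._heap:
            out.append(heapq.heappop(self._heap)[2])
        return out

q = StoreQueue()
q.push("store A")
q.push("store B", high_priority=True)   # marked by a preceding barrier
q.push("store C")
order = q.drain()
```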
  • Publication number: 20170060747
    Abstract: A method for increasing storage space in a system containing a block data storage device, a memory, and a processor is provided. Generally, the processor is configured by the memory to tag metadata of a data block of the block storage device indicating the block as free, used, or semifree. The free tag indicates the data block is available to the system for storing data when needed, the used tag indicates the data block contains application data, and the semifree tag indicates the data block contains cache data and is available to the system for storing application data if no blocks marked with the free tag are available to the system.
    Type: Application
    Filed: November 13, 2016
    Publication date: March 2, 2017
    Inventors: Derry Shribman, Ofer Vilenski
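    The three-way tagging scheme implies an allocation order that can be sketched directly; the list-based representation is a toy stand-in for on-device block metadata:

```python
FREE, USED, SEMIFREE = "free", "used", "semifree"

def allocate_block(blocks):
    """Prefer blocks tagged free; fall back to semifree (cache-holding)
    blocks only when no free block exists (sketch of the tagging scheme)."""
    for i, tag in enumerate(blocks):
        if tag == FREE:
            blocks[i] = USED
            return i
    for i, tag in enumerate(blocks):
        if tag == SEMIFREE:       # cache data is sacrificed to reuse the block
            blocks[i] = USED
            return i
    raise MemoryError("no free or semifree blocks")

blocks = [USED, SEMIFREE, FREE]
first = allocate_block(blocks)   # takes the free block
second = allocate_block(blocks)  # falls back to the semifree block
```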
  • Publication number: 20170060748
    Abstract: A processor includes a cache memory, an issuing unit that issues, with respect to all element data as a processing object of a load instruction, a cache request to the cache memory for each of a plurality of groups which are divided to include element data, a comparing unit that compares addresses of the element data as the processing object of the load instruction, and determines whether element data in a same group are simultaneously accessible, and a control unit that accesses the cache memory according to the cache request registered in a load queue registering one or more cache requests issued from the issuing unit. The control unit processes by one access whole element data determined to be simultaneously accessible by the comparing unit.
    Type: Application
    Filed: July 27, 2016
    Publication date: March 2, 2017
    Applicant: FUJITSU LIMITED
    Inventors: Hideki Okawara, Noriko Takagi, YASUNOBU AKIZUKI, Kenichi Kitamura, Mikio Hondo
  • Publication number: 20170060749
    Abstract: A system and method that allows idle process logic blocks in a memory device to be utilized when the idle process logic blocks would otherwise remain idle as the current memory commands are executed. Utilizing the otherwise idle process logic blocks in the memory device allows more optimized use of the process logic blocks while not slowing or otherwise interfering with the execution of the current memory commands. The otherwise idle process logic blocks can perform additional operations for subsequently fetched memory commands that may otherwise cause delays in execution of the subsequently fetched memory commands.
    Type: Application
    Filed: August 31, 2015
    Publication date: March 2, 2017
    Inventors: Amir Segev, Shay Benisty
  • Publication number: 20170060750
    Abstract: Method and apparatus for cache way prediction using a plurality of partial tags are provided. In a cache-block address comprising a plurality of sets and a plurality of ways or lines, one of the sets is selected for indexing, and a plurality of distinct partial tags are identified for the selected set. A determination is made as to whether a partial tag for a new line collides with any of the partial tags for current resident lines in the selected set. If the partial tag for the new line does not collide with any of the partial tags for the current resident lines, then there is no aliasing. If the partial tag for the new line collides with any of the partial tags for the current resident lines, then aliasing may be avoided by reading the full tag array and updating the partial tags.
    Type: Application
    Filed: September 2, 2015
    Publication date: March 2, 2017
    Inventors: Anil KRISHNA, Gregory Michael WRIGHT, Derek Robert HOWER
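    The collision check can be sketched as below; taking the low-order bits as the partial tag is an assumption, as are the tag values:

```python
def partial_tag(full_tag, bits=8):
    """Low-order bits of the full tag serve as the partial tag (assumption)."""
    return full_tag & ((1 << bits) - 1)

def insert_line(resident_full_tags, new_full_tag, bits=8):
    """Compare the new line's partial tag against the resident lines'
    partial tags; a collision means possible aliasing, which is resolved
    by reading the full tag array."""
    partials = {partial_tag(t, bits) for t in resident_full_tags}
    if partial_tag(new_full_tag, bits) in partials:
        return "collision: read full tag array"
    return "no aliasing: partial tags suffice"

resident = [0x1A2B00, 0x3C4D01, 0x5E6F02]
collision = insert_line(resident, 0x778802)  # low byte 0x02 matches a resident
clean = insert_line(resident, 0x99AA10)      # low byte 0x10 matches nothing
```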
  • Publication number: 20170060751
    Abstract: A processor includes an instruction executing unit, which executes a memory access instruction, a cache memory unit, disposed between a main memory which stores data related, to the memory access instruction and the instruction executing unit, a control information retaining unit which retains control information related to a prefetch issued to the cache memory unit, an address information retaining unit which retains address information based on the memory access instruction executed in the past, and a control unit which generates and issues a hardware prefetch request. The control unit compares address information retained in the address information retaining unit and an access address in the memory access instruction executed, and generates and issues based on a comparison result a hardware prefetch request to the cache memory unit according to the control information of the control information retaining unit, specified by specifying information added to the memory access instruction.
    Type: Application
    Filed: July 29, 2016
    Publication date: March 2, 2017
    Applicant: FUJITSU LIMITED
    Inventors: Hideki Okawara, Masatoshi Haraguchi
  • Publication number: 20170060752
    Abstract: A data caching method and a computer system are provided. In the method, when a miss of an access request occurs and a cache needs to determine a to-be-replaced cache line, not only a historical access frequency of the cache line but also a type of a memory corresponding to the cache line needs to be considered. A cache line corresponding to a DRAM type may preferably be replaced, which reduces the caching amount in the cache for data stored in a DRAM and relatively increases the caching amount for data stored in an NVM. For an access request for accessing the data stored in the NVM, corresponding data can be found in the cache whenever possible, thereby reducing cases of reading data from the NVM. Thus, a delay in reading data from the NVM is reduced, and access efficiency is effectively improved.
    Type: Application
    Filed: November 9, 2016
    Publication date: March 2, 2017
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Wei Wei, Lixin Zhang, Jin Xiong, Dejun Jiang
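    The memory-type-aware victim selection can be sketched as follows; the line fields and the least-accessed tiebreak are assumptions layered on the abstract's description:

```python
def choose_victim(cache_lines):
    """Pick the replacement victim: prefer the least-accessed DRAM-backed
    line so NVM-backed data stays cached (field names are illustrative)."""
    dram = [line for line in cache_lines if line["backing"] == "DRAM"]
    candidates = dram if dram else cache_lines   # fall back if no DRAM lines
    return min(candidates, key=lambda line: line["accesses"])

lines = [
    {"addr": 0x10, "backing": "NVM", "accesses": 1},
    {"addr": 0x20, "backing": "DRAM", "accesses": 9},
    {"addr": 0x30, "backing": "DRAM", "accesses": 3},
]
victim = choose_victim(lines)   # DRAM-backed line with fewest accesses
```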
  • Publication number: 20170060753
    Abstract: Presented herein are methods, non-transitory computer readable media, and devices for integrating a workload management scheme for a file system buffer cache with a global recycle queue infrastructure.
    Type: Application
    Filed: August 28, 2015
    Publication date: March 2, 2017
    Inventors: Peter DENZ, Matthew Curtis-Maury, Peter Wyckoff
  • Publication number: 20170060754
    Abstract: According to one general aspect, an apparatus may include a store stream detector configured to detect when the apparatus is streaming data to a memory system. The apparatus may also include a write generator configured to route a stream of data to either a near memory of the memory system or a far memory of the memory system based upon a cache threshold value and a size of the stream of data. The apparatus may be configured to dynamically vary the cache threshold value based upon a predetermined rule set, such that cache pollution caused by the stream of data is managed.
    Type: Application
    Filed: January 13, 2016
    Publication date: March 2, 2017
    Inventors: Tarun NAKRA, Kevin LEPAK
  • Publication number: 20170060755
    Abstract: A system includes a memory, a cache including multiple cache lines; and a processor. The processor may be configured to retrieve, from a first cache line, a first instruction to store data in a memory location at an address in the memory. The processor may be configured to retrieve, from a second cache line, a second instruction to read the memory location at the address in the memory. The second instruction may be retrieved after the first instruction. The processor may be configured to execute the second instruction at a first time dependent upon a value of a first entry in a table, wherein the first entry is selected dependent upon a value in the second cache line. The processor may be configured to execute the first instruction at a second time, after the first time, and invalidate the second instruction at a third time, after the second time.
    Type: Application
    Filed: August 28, 2015
    Publication date: March 2, 2017
    Inventor: Yuan Chou
  • Publication number: 20170060756
    Abstract: In at least some embodiments, a processor core generates a store operation by executing a store instruction in an instruction sequence. The store operation is marked as a high priority store operation in response to the store instruction being marked as high priority and is not so marked otherwise. The store operation is buffered in a store queue associated with a cache memory of the processor core. Handling of the store operation in the store queue is expedited in response to the store operation being marked as a high priority store operation and not expedited otherwise.
    Type: Application
    Filed: August 28, 2015
    Publication date: March 2, 2017
    Inventors: GUY L. GUTHRIE, HUGH SHEN, JEFFREY A. STUECHELI, DEREK E. WILLIAMS
  • Publication number: 20170060757
    Abstract: In at least some embodiments, a processor core generates a store operation by executing a store instruction in an instruction sequence. The store operation is marked as a high priority store operation in response to detecting a barrier instruction in the instruction sequence immediately preceding the store instruction in program order, and is not so marked otherwise. The store operation is buffered in a store queue associated with a cache memory of the processor core. Handling of the store operation in the store queue is expedited in response to the store operation being marked as a high priority store operation and not expedited otherwise.
    Type: Application
    Filed: August 28, 2015
    Publication date: March 2, 2017
    Inventors: GUY L. GUTHRIE, HUGH SHEN, JEFFREY A. STUECHELI, DEREK E. WILLIAMS
  • Publication number: 20170060758
    Abstract: In at least some embodiments, a processor core generates one or more store operations by executing one or more store instructions in an instruction sequence. The one or more store operations are marked as high priority store operations in response to detecting, in the instruction sequence, a window opening instruction and a window closing instruction bounding the one or more store instructions, and are not so marked otherwise. The one or more store operations are buffered in a store queue associated with a cache memory of the processor core. Handling of the one or more store operations in the store queue is expedited in response to the one or more store operations being marked as high priority store operations and not expedited otherwise.
    Type: Application
    Filed: August 28, 2015
    Publication date: March 2, 2017
    Inventors: GUY L. GUTHRIE, HUGH SHEN, JEFFREY A. STUECHELI, DEREK E. WILLIAMS
  • Publication number: 20170060759
    Abstract: In at least some embodiments, a processor core generates a store operation by executing a store instruction in an instruction sequence. The store operation is marked as a high priority store operation in response to the store instruction being marked as high priority and is not so marked otherwise. The store operation is buffered in a store queue associated with a cache memory of the processor core. Handling of the store operation in the store queue is expedited in response to the store operation being marked as a high priority store operation and not expedited otherwise.
    Type: Application
    Filed: September 30, 2015
    Publication date: March 2, 2017
    Inventors: GUY L. GUTHRIE, HUGH SHEN, JEFFREY A. STUECHELI, DEREK E. WILLIAMS
  • Publication number: 20170060760
    Abstract: In at least some embodiments, a processor core generates a store operation by executing a store instruction in an instruction sequence. The store operation is marked as a high priority store operation in response to detecting a barrier instruction in the instruction sequence immediately preceding the store instruction in program order, and is not so marked otherwise. The store operation is buffered in a store queue associated with a cache memory of the processor core. Handling of the store operation in the store queue is expedited in response to the store operation being marked as a high priority store operation and not expedited otherwise.
    Type: Application
    Filed: September 30, 2015
    Publication date: March 2, 2017
    Inventors: GUY L. GUTHRIE, HUGH SHEN, JEFFREY A. STUECHELI, DEREK E. WILLIAMS
  • Publication number: 20170060761
    Abstract: In at least some embodiments, a processor core generates a store operation by executing a store instruction in an instruction sequence. The store operation is marked as a high priority store operation in response to detecting, in the instruction sequence, a barrier instruction that precedes the store instruction in program order and that includes a field set to indicate the store operation should be accorded high priority, and is not so marked otherwise. The store operation is buffered in a store queue associated with a cache memory of the processor core. Handling of the store operation in the store queue is expedited in response to the store operation being marked as a high priority store operation and not expedited otherwise.
    Type: Application
    Filed: September 30, 2015
    Publication date: March 2, 2017
    Inventors: GUY L. GUTHRIE, HUGH SHEN, JEFFREY A. STUECHELI, DEREK E. WILLIAMS
  • Publication number: 20170060762
    Abstract: In at least some embodiments, a processor core generates one or more store operations by executing one or more store instructions in an instruction sequence. The one or more store operations are marked as high priority store operations in response to detecting, in the instruction sequence, a window opening instruction and a window closing instruction bounding the one or more store instructions and are not so marked otherwise. The one or more store operations are buffered in a store queue associated with a cache memory of the processor core. Handling of the one or more store operations in the store queue is expedited in response to the one or more store operations being marked as high priority store operations and not expedited otherwise.
    Type: Application
    Filed: September 30, 2015
    Publication date: March 2, 2017
    Inventors: GUY L. GUTHRIE, HUGH SHEN, JEFFREY A. STUECHELI, DEREK E. WILLIAMS
  • Publication number: 20170060763
    Abstract: A storage device includes a flash memory-based cache for a hard disk-based storage device and a controller that is configured to limit the rate of cache updates through a variety of mechanisms, including determinations that the data is not likely to be read back from the storage device within a time period that justifies its storage in the cache, compressing data prior to its storage in the cache, precluding storage of sequentially-accessed data in the cache, and/or throttling storage of data to the cache within predetermined write periods and/or according to user instruction.
    Type: Application
    Filed: November 10, 2016
    Publication date: March 2, 2017
    Inventors: Umesh Maheshwari, Varun Mehta
  • Publication number: 20170060764
    Abstract: Methods and systems are presented for evicting or copying-forward blocks in a storage system during garbage collection. In one method, a block status is maintained in a first memory to identify if the block is active or inactive, blocks being stored in segments that are configured to be cacheable in a second memory, a read-cache memory. Whenever an operation on a block is detected making the block inactive in one volume, the system determines if the block is still active in any volume, the block being cached in a first segment in the second memory. When the system detects that the first segment is being evicted from the second memory, the system re-caches the block into a second segment in the second memory if the block status of the block is active and the frequency of access to the block is above a predetermined value.
    Type: Application
    Filed: February 9, 2016
    Publication date: March 2, 2017
    Inventors: Pradeep Shetty, Senthil Kumar Ramamoorthy, Umesh Maheshwari, Vanco Buca
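The copy-forward decision in this abstract can be sketched in a few lines of Python. The function name and the frequency threshold below are illustrative, not from the patent: on eviction of a segment, a block is re-cached into a second segment only if its status is still active and its access frequency exceeds the threshold.

```python
MIN_ACCESS_FREQ = 3  # illustrative threshold, not from the patent

def evict_segment(segment, block_status, access_freq, new_segment):
    """On eviction, copy forward qualifying blocks instead of dropping them."""
    for block in segment:
        # Re-cache only blocks still active in some volume and accessed
        # more often than the predetermined value.
        if block_status.get(block) == "active" and access_freq.get(block, 0) > MIN_ACCESS_FREQ:
            new_segment.append(block)
    segment.clear()  # the first segment is now free for reuse
    return new_segment

status = {"b1": "active", "b2": "inactive", "b3": "active"}
freq = {"b1": 5, "b2": 9, "b3": 1}
kept = evict_segment(["b1", "b2", "b3"], status, freq, [])
```

Note that "b2" is dropped despite its high access frequency because it is no longer active in any volume, and "b3" is dropped because it is accessed too rarely to justify cache space.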
  • Publication number: 20170060765
    Abstract: A computer system provides a mechanism for assuring a safe, non-preemptible access to a private data area (PRDA) belonging to a CPU. PRDA accesses generally include obtaining an address of a PRDA and performing operations on the PRDA using the obtained address. Safe, non-preemptible access to a PRDA generally ensures that a context accesses the PRDA of the CPU on which the context is executing, but not the PRDA of another CPU. While a context executes on a first CPU, the context obtains the address of the PRDA. After the context is migrated to a second CPU, the context performs one or more operations on the PRDA belonging to the second CPU using the address obtained while the context executed on the first CPU. In another embodiment, preemption and possible migration of a context from one CPU to another CPU is delayed while a context executes non-preemptible code.
    Type: Application
    Filed: August 28, 2015
    Publication date: March 2, 2017
    Inventors: Cyprien LAPLACE, Harvey TUCH, Andrei WARKENTIN, Adrian DRZEWIECKI
  • Publication number: 20170060766
    Abstract: A method for storing service level agreement (“SLA”) compliance data includes reserving a memory location to store SLA compliance data of a software thread. The method includes directing the software thread to run on a selected hardware device. The method includes enabling SLA compliance data to be stored in the memory location. The SLA compliance data is from a hardware counting device in communication with the selected hardware device. The SLA compliance data corresponds to operation of the software thread on the selected hardware device.
    Type: Application
    Filed: September 24, 2015
    Publication date: March 2, 2017
    Inventors: RAJARSHI DAS, AARON C. SAWDEY, PHILIP L. VITALE
  • Publication number: 20170060767
    Abstract: An I/O DMA address may be translated for a flexible number of entries in a translation validation table (TVT) for a partitionable endpoint number, when a particular entry in the TVT is accessed based on the partitionable endpoint number. A presence of an extended mode bit can be detected in a particular TVT entry. Based on the presence of the extended mode bit, an entry in the extended TVT can be accessed and used to translate the I/O DMA address to a physical address.
    Type: Application
    Filed: September 29, 2015
    Publication date: March 2, 2017
    Inventors: Jesse P. Arroyo, Rama K. Hazari, Sakethan R. Kotta, Kumaraswamy Sripathy
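The two-level lookup this abstract describes can be modeled with a short Python sketch. The bit positions, masks, and base-plus-offset translation below are assumptions chosen for illustration; only the control flow (check the extended mode bit, then indirect through the extended TVT) mirrors the abstract.

```python
EXTENDED_MODE_BIT = 1 << 63  # illustrative flag position

def translate(dma_addr, tvt, extended_tvt, pe_number):
    """Translate an I/O DMA address to a physical address via the TVT."""
    # The TVT entry is selected by partitionable-endpoint number.
    entry = tvt[pe_number]
    if entry & EXTENDED_MODE_BIT:
        # Extended mode: low bits of the entry index the extended TVT,
        # which holds the real translation base.
        base = extended_tvt[entry & 0xFFFF]
    else:
        base = entry
    # Simple base-plus-page-offset translation for illustration.
    return base + (dma_addr & 0xFFF)

tvt = [0x10000, EXTENDED_MODE_BIT | 1]
ext = [0x20000, 0x30000]
phys = translate(0x2ABC, tvt, ext, 1)  # routed through the extended TVT
```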
  • Publication number: 20170060768
    Abstract: Techniques and systems are provided for tracking commands. Such methods and systems can include maintaining a meta page in a volatile memory to track commands. The meta page can comprise information associated with a non-volatile memory superblock. When an invalidation command is received for a first logical address, the first logical address can be stored along with an indication that the data associated with the first logical address is invalid, in a first location in the meta page.
    Type: Application
    Filed: February 2, 2016
    Publication date: March 2, 2017
    Inventors: Fan Zhang, Taeil Um, Yu Cai
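The meta-page bookkeeping in this abstract reduces to a small sketch: a volatile structure recording, per superblock, which logical addresses have been invalidated. The class and method names are hypothetical.

```python
class MetaPage:
    """Volatile-memory meta page tracking commands for one superblock."""

    def __init__(self):
        self.entries = []  # (logical_address, valid_flag) per location

    def record_invalidation(self, logical_address):
        # Store the first logical address together with an indication
        # that its data is invalid, in the next free meta-page location.
        self.entries.append((logical_address, False))

    def invalid_addresses(self):
        return [addr for addr, valid in self.entries if not valid]

page = MetaPage()
page.record_invalidation(0x1A)
invalid = page.invalid_addresses()
```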
  • Publication number: 20170060769
    Abstract: Described are various embodiments of systems, devices and methods for generating locality-indicative data representations of data streams, and compressions thereof. In one such embodiment, a method is provided for determining an indication of locality of data elements in a data stream communicated over a communication medium. This method comprises determining, for at least two sample times, count values of distinct values for each of at least two distinct value counters, wherein each of the distinct value counters has a unique starting time; and comparing corresponding count values for at least two of the distinct value counters to determine an indication of locality of data elements in the data stream at one of the sample times.
    Type: Application
    Filed: April 29, 2015
    Publication date: March 2, 2017
    Inventors: Jacob Taylor Wires, Andrew Warfield, Stephen Frowe Ingram, Nicholas James Alexander Harvey, Zachary Drudi
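The core comparison in this abstract can be sketched directly: run two distinct-value counters with different starting times over the same stream and compare their counts. The sketch below uses exact sets where a production system would use a compact cardinality estimator; names are illustrative.

```python
class DistinctCounter:
    """Counts distinct values seen since its unique starting time."""

    def __init__(self, start_index):
        self.start_index = start_index
        self.seen = set()

    def observe(self, value):
        self.seen.add(value)

    def count(self):
        return len(self.seen)

def locality_indication(stream, start_a, start_b):
    """Compare counters with different starting times.

    If the later counter's count stays near the earlier counter's
    growth over the same window, recent elements mostly repeat old
    values (high locality); a large gap means many fresh values.
    """
    a, b = DistinctCounter(start_a), DistinctCounter(start_b)
    for i, v in enumerate(stream):
        if i >= start_a:
            a.observe(v)
        if i >= start_b:
            b.observe(v)
    return a.count(), b.count()

# A stream that keeps revisiting a small working set (high locality):
high_loc = locality_indication([1, 2, 1, 2, 1, 2, 1, 2], 0, 4)
# A stream of fresh values (low locality):
low_loc = locality_indication([1, 2, 3, 4, 5, 6, 7, 8], 0, 4)
```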
  • Publication number: 20170060770
    Abstract: An I/O DMA address may be translated for a flexible number of entries in a translation validation table (TVT) for a partitionable endpoint number, when a particular entry in the TVT is accessed based on the partitionable endpoint number. A presence of an extended mode bit can be detected in a particular TVT entry. Based on the presence of the extended mode bit, an entry in the extended TVT can be accessed and used to translate the I/O DMA address to a physical address.
    Type: Application
    Filed: September 1, 2015
    Publication date: March 2, 2017
    Inventors: Jesse P. Arroyo, Rama K. Hazari, Sakethan R. Kotta, Kumaraswamy Sripathy
  • Publication number: 20170060771
    Abstract: In an example, a processing system of a database system may categorize event data taken from logged interactions of users with a multi-tenant information system to provide a metric. Event roll-up aggregate metrics used to provide the metric may be generated in connection with event capture. The processing system of the database system may periodically calculate the metric for a particular one of the tenants, and electronically store the periodically calculated metrics for accessing responsive to a query of the particular tenant.
    Type: Application
    Filed: August 31, 2015
    Publication date: March 2, 2017
    Inventors: Aakash Pradeep, Adam Torman, Samarpan Jain, Alex Warshavsky
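The split between capture-time roll-up and periodic calculation in this abstract can be sketched as follows; the function names and the counter-based aggregate are illustrative stand-ins.

```python
from collections import defaultdict

# Roll-up aggregates keyed by (tenant, metric), updated at event
# capture time so the periodic calculation only reads the aggregates
# rather than re-scanning raw event logs.
rollups = defaultdict(int)

def capture_event(tenant, metric, value=1):
    rollups[(tenant, metric)] += value

def periodic_metric(tenant, metric):
    """Metric periodically calculated for one tenant, ready to store."""
    return rollups[(tenant, metric)]

capture_event("acme", "logins")
capture_event("acme", "logins")
capture_event("globex", "logins")
acme_logins = periodic_metric("acme", "logins")  # isolated per tenant
```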
  • Publication number: 20170060772
    Abstract: Techniques are provided for maintaining data persistently in one format, but making that data available to a database server in more than one format. For example, one of the formats in which the data is made available for query processing is based on the on-disk format, while another of the formats in which the data is made available for query processing is independent of the on-disk format. Data that is in the format that is independent of the disk format may be maintained exclusively in volatile memory to reduce the overhead associated with keeping the data in sync with the on-disk format copies of the data. Selection of data to be maintained in the volatile memory may be based on various factors. Once selected the data may also be compressed to save space in the volatile memory. The compression level may depend on one or more factors that are evaluated for the selected data.
    Type: Application
    Filed: August 31, 2015
    Publication date: March 2, 2017
    Inventors: CHINMAYI KRISHNAPPA, VINEET MARWAH, AMIT GANESH
  • Publication number: 20170060773
    Abstract: Embodiments of the present invention disclose a method and apparatus of cache management for a non-volatile storage device. The method embodiment includes: determining a size relationship between a capacity sum of a clean page subpool and a dirty page subpool and a cache capacity; determining, when the capacity sum is equal to the cache capacity, whether identification information of a to-be-accessed page is in a history list of clean pages or a history list of dirty pages; and when it is determined that the identification information of the to-be-accessed page is in the history list of clean pages, adding a first adjustment value to a clean page subpool capacity threshold; and when the identification information of the to-be-accessed page is in the history list of dirty pages, subtracting a second adjustment value from the clean page subpool capacity threshold.
    Type: Application
    Filed: November 10, 2016
    Publication date: March 2, 2017
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Junhua Zhu
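The threshold-adaptation rule in this abstract (reminiscent of ARC-style cache balancing) is easy to sketch. The adjustment values and set-based history lists below are illustrative; only the adaptation direction comes from the abstract.

```python
def adjust_threshold(threshold, page_id, clean_history, dirty_history,
                     first_adjust=1, second_adjust=1):
    """Adapt the clean-page subpool capacity threshold on a cache miss.

    A hit in the clean-page history list suggests clean pages were
    evicted too eagerly, so the clean subpool's threshold grows; a hit
    in the dirty-page history list shrinks it instead.
    """
    if page_id in clean_history:
        return threshold + first_adjust
    if page_id in dirty_history:
        return threshold - second_adjust
    return threshold

t = adjust_threshold(10, "p7", clean_history={"p7"}, dirty_history=set())
```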
  • Publication number: 20170060774
    Abstract: A storage control device includes a first memory, a second memory, and a processor. The processor is configured to store a reference count of each of a plurality of first and second unit data. The processor is configured to arrange first entries of first management information in a first memory area on the first memory. The first entries each include a hash value and information indicating where corresponding one of the first unit data is stored. The processor is configured to arrange second entries of second management information in a second memory area on the second memory. The second entries each include a hash value, information indicating where corresponding one of the second unit data is stored, and the reference count. The processor is configured to arrange, in a third memory area on the first memory, index information for filtering hash values included in the second entries.
    Type: Application
    Filed: August 23, 2016
    Publication date: March 2, 2017
    Applicant: FUJITSU LIMITED
    Inventors: Mikio ITO, Yuji Morita, Takako Kato, Osamu Kimura
  • Publication number: 20170060775
    Abstract: Methods of securely encrypting and decrypting data stored within computer readable memory of a device are described. Additionally, a memory encryption unit architecture (200) is described. A disclosed encryption method comprises the steps of: providing (122) a key; encrypting (126) the data stored in the computer readable memory using the key; generating (132) an authentication code based on parameters stored in the computer readable memory; wrapping (136) the key using the authentication code to generate a wrapped key; and storing the wrapped key in the computer readable memory (30), wherein the validity of the wrapped key is linked to the authenticity of the data stored in the computer readable memory. This prevents successful decryption in the event of execution of modified or malicious code that alters the data stored in the computer readable memory.
    Type: Application
    Filed: July 21, 2015
    Publication date: March 2, 2017
    Inventor: Hugues de Perthuis
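The key-wrapping idea in this abstract, binding the validity of the wrapped key to the authenticity of memory contents, can be sketched with Python's standard library. HMAC-SHA256 and XOR wrapping here are illustrative stand-ins for the hardware unit's actual scheme; the point is that decryption only recovers the right key when the recomputed authentication code matches.

```python
import hashlib
import hmac
import os

def wrap_key(key, memory_params, mac_key):
    """Wrap a 32-byte data key using an auth code over memory contents."""
    auth_code = hmac.new(mac_key, memory_params, hashlib.sha256).digest()
    # XOR the key with the authentication code: unwrapping with a
    # mismatched code (tampered memory) yields garbage, not the key.
    return bytes(k ^ a for k, a in zip(key, auth_code))

def unwrap_key(wrapped, memory_params, mac_key):
    # XOR wrapping is its own inverse when the auth code matches.
    return wrap_key(wrapped, memory_params, mac_key)

mac_key = b"device-secret"
params = b"firmware image v1"
key = os.urandom(32)

wrapped = wrap_key(key, params, mac_key)
ok = unwrap_key(wrapped, params, mac_key)              # original key back
bad = unwrap_key(wrapped, b"tampered image", mac_key)  # not the key
```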
  • Publication number: 20170060776
    Abstract: Described herein are various technologies pertaining to delivery of token-authenticated encrypted data. Content descriptor(s) (e.g., playlist(s)) can be modified to facilitate exchange of a token for a decryption key for browser(s) that do not provide logic to manage a flow of the token.
    Type: Application
    Filed: August 31, 2015
    Publication date: March 2, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Douglas Charles Shimonek, Dawei Wei, Steven C. Peterson, Mingfei Yan, Ashish Chawla, Vishal Sood, Quintin Swayne Burns