Patents Issued on March 2, 2017
-
Publication number: 20170060727
Abstract: An on-chip system uses a time measurement circuit to trap code that takes longer than expected to execute by breaking code execution on excess time consumption.
Type: Application
Filed: November 4, 2016
Publication date: March 2, 2017
Inventor: Ingar Hanssen
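The hardware time-measurement circuit described above can be approximated in software. The sketch below is a minimal analogy, not the patented circuit: the on-chip mechanism breaks execution mid-flight, while this helper (with hypothetical names `run_with_budget` and `ExecutionTimeTrap`) simply measures elapsed time and traps after the fact.

```python
import time

class ExecutionTimeTrap(Exception):
    """Raised when monitored code exceeds its time budget."""

def run_with_budget(func, budget_seconds, *args):
    # Software analogy of a hardware time-measurement circuit:
    # measure elapsed wall-clock time and trap on excess consumption.
    start = time.monotonic()
    result = func(*args)
    elapsed = time.monotonic() - start
    if elapsed > budget_seconds:
        raise ExecutionTimeTrap(
            f"took {elapsed:.3f}s, budget was {budget_seconds}s")
    return result
```

A real on-chip trap would interrupt the running code rather than wait for it to return; the post-hoc check is only meant to show the trap-on-excess-time idea.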
-
Publication number: 20170060728
Abstract: In one embodiment, a system for program lifecycle testing includes receiving a request to test a program update at an interface. Using a processor, the system may then execute a validation test associated with the program update, wherein the validation test is conducted in a testing environment comprising a plurality of testing environment systems. The system may then use the processor to capture a current state of the testing environment at a start of the validation test, and confirm that the plurality of testing environment systems are operating according to the validation test. The system may then use the interface to receive testing results from the validation test and compare the testing results to previous test results from a prior program update. The system may then store the validation test results, the current state of the testing environment, and a name of the program update, in a performance database.
Type: Application
Filed: August 24, 2015
Publication date: March 2, 2017
Inventors: Steve C. Younger, Harshal L. Jambusaria, Mark O. Carter, Bharat Kumar Bathula, Abbner Uriel Torres Ramos
-
Publication number: 20170060729
Abstract: A system and method for facilitating characterizing customized computing objects of a software application, such as a networked enterprise application. An example method includes identifying one or more custom computing objects of one or more software applications of a computing environment; determining one or more grouping criteria for grouping identified custom objects; grouping information pertaining to the one or more custom objects based on the one or more grouping criteria, resulting in one or more custom object groupings; and using the one or more custom object groupings, with reference to data characterizing one or more changes slated to be made to the software application, to generate one or more user interface display screens. In a more specific embodiment, the data characterizing one or more changes includes metadata characterizing core software application maintenance events, upgrades, and/or other modifications.
Type: Application
Filed: August 25, 2015
Publication date: March 2, 2017
Inventors: Shamus Kahl, Nathan Rooney, Stephen J. Willson, Rahul Jain, Saumyaranjan Acharya, Stephen Persky, Ankit Kapil
-
Publication number: 20170060730
Abstract: Provided are techniques for auto-generating Representational State Transfer (REST) services for quality assurance. One or more test cases and artifacts are received for a project. A test Representational State Transfer (REST) service is generated for the project using the one or more test cases and the artifacts. The test REST service is deployed on an application server for use in testing features of a REST service client application.
Type: Application
Filed: August 28, 2015
Publication date: March 2, 2017
Inventors: Jeff J. Li, Wendi L. Nusbickel, Suraj R. Patel, Deepa R. Yarangatta
-
Publication number: 20170060731
Abstract: Methods and systems for dynamically providing application analytic information are provided herein. The method includes inserting instrumentation points into an application file via an application analytic service and dynamically determining desired instrumentation points from which to collect application analytic data. The method also includes receiving, at the application analytic service, the application analytic data corresponding to the desired instrumentation points and analyzing the application analytic data to generate application analytic information. The method further includes sending the application analytic information to a client computing device.
Type: Application
Filed: November 10, 2016
Publication date: March 2, 2017
Applicant: Microsoft Technology Licensing, LLC
Inventors: Lenin Ravindranath Sivalingam, Jitendra Padhye, Ian Obermiller, Ratul Mahajan, Sharad Agarwal, Ronnie Ira Chaiken, Shahin Shayandeh, Christopher M. Moore, Sirius Kuttiyan
-
Publication number: 20170060732
Abstract: Techniques for automated bug detection. A set of inputs are collected and a snapshotting feature is used to apply each input to a test application. Outputs from the test application are gathered and compared to determine whether the outputs are associated with bugs. Comparison can be done with one or more of many different techniques that quantify difference between outputs associated with test inputs and outputs associated with a “happy path input.” Outputs can be grouped together based on these quantifications and the groups can be used to identify outputs most likely to be associated with bugs. The output groups may also be used to group associated inputs to the set of inputs to be used for testing in the future. When a bug is identified, a report could be automatically generated that includes a scoring value as well as recorded output information and could be presented to a user.
Type: Application
Filed: August 31, 2015
Publication date: March 2, 2017
Inventor: Marcello GOLFIERI
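The "distance from the happy path" idea above can be sketched with any output-difference metric. This toy version (hypothetical names `divergence` and `rank_suspicious`; the token-set distance is an assumption, not the patent's metric) ranks outputs by how far they diverge from the happy-path output:

```python
def divergence(a, b):
    # Symmetric difference of word tokens: a crude stand-in for any
    # metric that quantifies how much two outputs differ.
    ta, tb = set(a.split()), set(b.split())
    return len(ta ^ tb)

def rank_suspicious(happy_output, outputs):
    # Outputs farthest from the happy-path output rank first, as the
    # most likely to be associated with bugs.
    return sorted(outputs,
                  key=lambda o: divergence(happy_output, o),
                  reverse=True)
```

A real system would use richer distance measures and group outputs into buckets rather than a flat ranking, but the ordering step is the core of the comparison technique described above.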
-
Publication number: 20170060733
Abstract: A method of determining code coverage of an application by test(s) (wherein code may include UI elements). The method comprises executing a code test on at least one portion of a tested code of an application in one or more iterations, wherein in each one of the plurality of iterations, selecting at least one of a plurality of atomic code elements of the tested code, applying the code test on a version of the code that does not include the at least one selected atomic element, and classifying the at least one selected atomic element as covered by the code test when the code test fails and as not covered in the code test when the code test passes.
Type: Application
Filed: August 31, 2015
Publication date: March 2, 2017
Inventors: Aharon Abadi, Moria Abadi, Idan Ben-Harrush, Samuel Kallner
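The remove-and-retest loop described above resembles mutation-style coverage: delete an atomic element, rerun the test, and call the element covered only if the test then fails. A minimal sketch (hypothetical names `classify_coverage` and `run_test`; real atomic elements would be statements or UI elements, not list items):

```python
def classify_coverage(elements, run_test):
    """For each atomic element, rerun the test on a version of the code
    that omits that element; the element is 'covered' iff the test fails
    without it, 'not covered' if the test still passes."""
    covered, not_covered = [], []
    for elem in elements:
        remaining = [e for e in elements if e is not elem]
        if run_test(remaining):      # test passes without the element
            not_covered.append(elem)
        else:                        # test fails -> element was exercised
            covered.append(elem)
    return covered, not_covered
```

The inversion is the interesting design choice: a *failing* test is the positive signal, because it proves the deleted element actually influenced the tested behavior.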
-
Publication number: 20170060734
Abstract: A method, product and apparatus for creating functional model of test cases. The method comprising obtaining a set of test cases, wherein each test case of the set of test cases comprises free-text; defining one or more tags, wherein each tag of the one or more tags is associated with a query that is configured, when applied, to determine possession of the tag with respect to a test case based on the free-text; applying the queries on the set of test cases to determine possession of the one or more tags for each test case; and generating a functional model based on the set of test cases, wherein the functional model comprises, for each tag of the one or more tags, a corresponding functional attribute.
Type: Application
Filed: August 30, 2015
Publication date: March 2, 2017
Inventors: Orna RAZ, Randall L. Tackett, Paul A. Wojciak, Marcel Zalmanovici, Aviad Zlotnick
-
Publication number: 20170060735
Abstract: According to an aspect of an embodiment, one or more systems or methods may be configured to locate a fault in a software program using a test suite. The systems or methods may be further configured to modify, using a repair template, the software program in response to locating the fault. In addition, the systems or methods may be configured to determine whether the modification satisfies an anti-pattern condition. The anti-pattern condition may indicate whether the modification is improper. The systems or methods may also be configured to disallow the modification in response to the modification satisfying the anti-pattern condition or perform further testing on the software program, as modified, in response to the modification not satisfying the anti-pattern condition.
Type: Application
Filed: August 25, 2015
Publication date: March 2, 2017
Inventors: Hiroaki YOSHIDA, Shin Hwei TAN, Mukul R. PRASAD
-
Publication number: 20170060736
Abstract: Methods and apparatuses pertaining to dynamic memory sharing may involve sharing a first portion of a memory associated with a first module for use by a second module. The first portion of the memory may be reclaimed for use by the first module in real time upon a determination that there is an increase in demand for the memory by the first module that requires reclamation, such that the first module begins to use the first portion of the memory before the second module finishes a process of aborting to use the first portion of the first memory.
Type: Application
Filed: November 14, 2016
Publication date: March 2, 2017
Inventors: Chien-Liang Lin, Jing-Yen Huang, Peng-An Chen, Nicholas Ching Hui Tang, Chung-Jung Lee, Chin-Wen Chang
-
Publication number: 20170060737
Abstract: A plurality of memory allocators are initialized within a computing system. At least a first memory allocator and a second memory allocator in the plurality of memory allocators are each customizable to efficiently handle a set of different memory request size distributions. The first memory allocator is configured to handle a first memory request size distribution. The second memory allocator is configured to handle a second memory request size distribution. The second memory request size distribution is different than the first memory request size distribution. At least the first memory allocator and the second memory allocator that have been configured are deployed within the computing system in support of at least one application. Deploying at least the first memory allocator and the second memory allocator within the computing system improves at least one of performance and memory utilization of the at least one application.
Type: Application
Filed: November 4, 2016
Publication date: March 2, 2017
Applicant: International Business Machines Corporation
Inventor: Arun IYENGAR
-
Publication number: 20170060738
Abstract: A memory system and method are provided for performing garbage collection on blocks based on their obsolescence patterns. In one embodiment, a controller of a memory system classifies each of the plurality of blocks based on its obsolescence pattern and performs garbage collection only on blocks classified with similar obsolescence patterns. Other embodiments are possible, and each of the embodiments can be used alone or together in combination.
Type: Application
Filed: August 25, 2015
Publication date: March 2, 2017
Applicant: SanDisk Technologies Inc.
Inventors: Amir Shaharabany, Hadas Oshinsky, Rotem Sela
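The group-then-collect idea above can be sketched in a few lines. This is an illustrative simplification, not the controller's algorithm: `classify` is a hypothetical classifier returning an obsolescence-pattern label, each block is a dict with hypothetical `obsolete`/`total` page counts, and the victim-group policy (highest obsolete ratio) is an assumption.

```python
from collections import defaultdict

def select_gc_candidates(blocks, classify):
    """Group blocks by obsolescence pattern, then garbage-collect only
    within one group of similarly behaving blocks: here, the group with
    the highest fraction of obsolete data."""
    groups = defaultdict(list)
    for blk in blocks:
        groups[classify(blk)].append(blk)

    def obsolete_ratio(group):
        return (sum(b["obsolete"] for b in group)
                / sum(b["total"] for b in group))

    return max(groups.values(), key=obsolete_ratio)
```

Collecting within a single pattern group avoids mixing long-lived and short-lived data into the same destination blocks, which is the usual motivation for pattern-aware garbage collection.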
-
Publication number: 20170060739
Abstract: A dispersed storage and task network (DSTN) includes a site housing current distributed storage and task (DST) execution units. A determination is made to add new DST execution units to the site. A first address range assigned to the plurality of current DST execution units is obtained, and a common magnitude of second address ranges to be assigned to each of the new DST execution units and the current DST execution units is determined based, at least in part, on the first address range. Insertion points for each of the plurality of new DST execution units are determined, and transfer address ranges are determined in accordance with the insertion points. Transfer address ranges correspond to at least the part of the first address ranges to be transferred to the new DST execution units. Address range assignments are transferred from particular current DST execution units to particular new DST execution units.
Type: Application
Filed: November 9, 2016
Publication date: March 2, 2017
Inventors: Andrew D. Baptist, Greg R. Dhuse, Manish Motwani, Ilya Volvovski
-
Publication number: 20170060740
Abstract: Embodiments provide adaptive storage management for optimizing multitier data storage. A storage manager may interact with storage decision advisors. The manager may adaptively make storage management decisions (e.g., flush, evict, recall, delete) after considering recommendations from and the credibility of the storage decision advisors. The manager may update the credibility of storage decision advisors based on how their recommendations affected optimization. The manager may adaptively choose when to rebalance or reconfigure the credibility of the storage decision advisors. Storage decision advisors may themselves be adaptive. Storage decision advisors may examine credibility feedback from the storage manager to determine which recommendations were useful and which were not. Storage decision advisors may then change when they will make a recommendation, when they will abstain from making a recommendation, the type of recommendation provided, or other behavior.
Type: Application
Filed: September 1, 2015
Publication date: March 2, 2017
Inventor: Don Doerner
-
Publication number: 20170060741
Abstract: A capture service running on an application server receives events from a client application running on an application server to be stored in a data store and stores the events in an in-memory bounded buffer on the application server, the in-memory bounded buffer comprising a plurality of single-threaded segments, the capture service to write events to each segment in parallel. The in-memory bounded buffer provides a notification to a buffer flush regulator when a number of events stored in the in-memory bounded buffer reaches a predefined limit. The in-memory bounded buffer receives a request to flush the events in the in-memory bounded buffer from a consumer executor service. The consumer executor service consumes the events in the in-memory bounded buffer using a dynamically sized thread pool of consumer threads to read the segments of the bounded buffer in parallel, wherein consuming the events comprises writing the events directly to the data store.
Type: Application
Filed: August 12, 2016
Publication date: March 2, 2017
Inventors: Aakash Pradeep, Adam Torman, Alex Warshavsky, Samarpan Jain, Soumen Bandyopadhyay, Thomas William D'Silva, Abhishek Bangalore Sreenivasa
-
Publication number: 20170060742
Abstract: A machine-implemented method for controlling transfer of at least one data item from a data cache component, in communication with storage using at least one relatively higher-latency path and at least one relatively lower-latency path, comprises: receiving metadata defining at least a first characteristic of data selected for inspection; responsive to the metadata, seeking a match between said at least first characteristic and a second characteristic of at least one of a plurality of data items in the data cache component; selecting said at least one of the plurality of data items where the at least one of the plurality of data items has the second characteristic matching the first characteristic; and passing the selected one of the plurality of data items from the data cache component using the relatively lower-latency path.
Type: Application
Filed: August 22, 2016
Publication date: March 2, 2017
Applicant: Metaswitch Networks Ltd
Inventors: Jim Wilkinson, Jonathan Lawn
-
Publication number: 20170060743
Abstract: A method and apparatus of a device that manages virtual memory for a graphics processing unit is described. In an exemplary embodiment, the device manages a graphics processing unit working set of pages. In this embodiment, the device determines the set of pages of the device to be analyzed, where the device includes a central processing unit and the graphics processing unit. The device additionally classifies the set of pages based on a graphics processing unit activity associated with the set of pages and evicts a page of the set of pages based on the classifying.
Type: Application
Filed: October 28, 2016
Publication date: March 2, 2017
Inventor: Derek R. Kumar
-
Publication number: 20170060744
Abstract: According to one embodiment, a tiered storage system includes a tiered storage device and a computer. The computer uses the tiered storage device, and includes a file system and a correction support unit. If an access request from an application is a write request to request overwriting of data, the file system executes a copy-on-write operation. The correction support unit causes the storage controller to carry over an access count managed by the storage controller and associated with the logical block address of a copy source in the copy-on-write operation, to an access count associated with the logical block address of a copy destination in the copy-on-write operation.
Type: Application
Filed: September 14, 2016
Publication date: March 2, 2017
Inventors: Shouhei Saitou, Shinya Ando
-
Publication number: 20170060745
Abstract: A directory structure that may allow concurrent processing of write-back and clean victimization requests is disclosed. The directory structure may include a memory configured to store a plurality of entries, where each entry may include information indicative of a status of a respective entry in a cache memory. Update requests for the entries in the memory may be received and stored. A subset of previously stored update requests may be selected. Each update request of the subset of the previously stored update requests may then be processed concurrently.
Type: Application
Filed: August 25, 2015
Publication date: March 2, 2017
Inventors: Thomas Wicki, Jurgen Schulz, Paul Loewenstein
-
Publication number: 20170060746
Abstract: In at least some embodiments, a processor core generates a store operation by executing a store instruction in an instruction sequence. The store operation is marked as a high priority store operation in response to detecting, in the instruction sequence, a barrier instruction that precedes the store instruction in program order and that includes a field set to indicate the store operation should be accorded high priority and is not so marked otherwise. The store operation is buffered in a store queue associated with a cache memory of the processor core. Handling of the store operation in the store queue is expedited in response to the store operation being marked as a high priority store operation and not expedited otherwise.
Type: Application
Filed: August 28, 2015
Publication date: March 2, 2017
Inventors: GUY L. GUTHRIE, HUGH SHEN, JEFFREY A. STUECHELI, DEREK E. WILLIAMS
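The expedited-handling behavior described in this family of abstracts can be illustrated with a two-level queue: stores marked high priority drain before ordinary stores, while stores at the same priority keep program order. This sketch (hypothetical `StoreQueue` class; real store queues are hardware structures with many more constraints, such as address dependences) shows only the ordering effect:

```python
import heapq

class StoreQueue:
    """Toy store queue: high-priority stores drain before ordinary
    ones; a sequence number preserves FIFO order within a priority."""
    def __init__(self):
        self._heap = []
        self._seq = 0

    def push(self, store, high_priority=False):
        # Lower tuple sorts first: priority 0 beats priority 1, and the
        # sequence number breaks ties in program order.
        priority = 0 if high_priority else 1
        heapq.heappush(self._heap, (priority, self._seq, store))
        self._seq += 1

    def drain(self):
        out = []
        while self._heap:
            out.append(heapq.heappop(self._heap)[2])
        return out
```

In the patented scheme the high-priority mark comes from the instruction stream itself (a barrier field, an instruction mark, or a bounding window, depending on the variant); here `high_priority` is simply passed in.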
-
Publication number: 20170060747
Abstract: A method for increasing storage space in a system containing a block data storage device, a memory, and a processor is provided. Generally, the processor is configured by the memory to tag metadata of a data block of the block storage device indicating the block as free, used, or semifree. The free tag indicates the data block is available to the system for storing data when needed, the used tag indicates the data block contains application data, and the semifree tag indicates the data block contains cache data and is available to the system for storing application data if no blocks marked with the free tag are available to the system.
Type: Application
Filed: November 13, 2016
Publication date: March 2, 2017
Inventors: Derry Shribman, Ofer Vilenski
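The three-state tagging scheme implies a simple allocation order: take a free block if one exists, otherwise sacrifice a semifree (cache-holding) block. A minimal sketch of that fallback (hypothetical `allocate_block` function operating on a flat list of tags; a real implementation would tag on-device block metadata):

```python
FREE, USED, SEMIFREE = "free", "used", "semifree"

def allocate_block(blocks):
    """Pick a block for application data: prefer FREE blocks, and fall
    back to SEMIFREE blocks (evicting their cache data) only when no
    FREE block remains. Returns the block index, or None if full."""
    for wanted in (FREE, SEMIFREE):
        for idx, state in enumerate(blocks):
            if state == wanted:
                blocks[idx] = USED
                return idx
    return None  # only USED blocks remain: storage genuinely full
```

The point of the semifree tag is that cache data is expendable: it effectively enlarges usable storage without risking application data.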
-
Publication number: 20170060748
Abstract: A processor includes a cache memory, an issuing unit that issues, with respect to all element data as a processing object of a load instruction, a cache request to the cache memory for each of a plurality of groups which are divided to include element data, a comparing unit that compares addresses of the element data as the processing object of the load instruction, and determines whether element data in a same group are simultaneously accessible, and a control unit that accesses the cache memory according to the cache request registered in a load queue registering one or more cache requests issued from the issuing unit. The control unit processes by one access whole element data determined to be simultaneously accessible by the comparing unit.
Type: Application
Filed: July 27, 2016
Publication date: March 2, 2017
Applicant: FUJITSU LIMITED
Inventors: Hideki Okawara, Noriko Takagi, YASUNOBU AKIZUKI, Kenichi Kitamura, Mikio Hondo
-
Publication number: 20170060749
Abstract: A system and method that allows idle process logic blocks in a memory device to be utilized when the idle process logic blocks would otherwise be remaining idle as the current memory commands are executed. Utilizing the otherwise idle process logic blocks in the memory device allows more optimized use of the process logic blocks while not slowing or otherwise interfering with the execution of the current memory commands. The otherwise idle process logic blocks can perform additional operations for subsequently fetched memory commands that may otherwise cause delays in execution of the subsequently fetched memory commands.
Type: Application
Filed: August 31, 2015
Publication date: March 2, 2017
Inventors: Amir Segev, Shay Benisty
-
Publication number: 20170060750
Abstract: Method and apparatus for cache way prediction using a plurality of partial tags are provided. In a cache-block address comprising a plurality of sets and a plurality of ways or lines, one of the sets is selected for indexing, and a plurality of distinct partial tags are identified for the selected set. A determination is made as to whether a partial tag for a new line collides with any of the partial tags for current resident lines in the selected set. If the partial tag for the new line does not collide with any of the partial tags for the current resident lines, then there is no aliasing. If the partial tag for the new line collides with any of the partial tags for the current resident lines, then aliasing may be avoided by reading the full tag array and updating the partial tags.
Type: Application
Filed: September 2, 2015
Publication date: March 2, 2017
Inventors: Anil KRISHNA, Gregory Michael WRIGHT, Derek Robert HOWER
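The collision check at the heart of the abstract is cheap to illustrate: compare only the low bits of the tag, and fall back to the full tag array when those bits collide. A sketch under stated assumptions (hypothetical function names; an 8-bit partial tag width is arbitrary, and real hardware compares all resident partial tags in parallel):

```python
def partial_tag(full_tag, bits=8):
    # Keep only the low `bits` bits of the full tag.
    return full_tag & ((1 << bits) - 1)

def insert_line(resident_partials, new_full_tag, resident_fulls):
    """Check a new line's partial tag against the set's resident
    partial tags. No collision -> no aliasing possible. Collision ->
    disambiguate by reading the full tag array (slow path)."""
    new_pt = partial_tag(new_full_tag)
    if new_pt in resident_partials:
        # Aliasing is possible: consult full tags to resolve it.
        return "full-tag lookup", new_full_tag in resident_fulls
    resident_partials.add(new_pt)
    return "no aliasing", False
```

The payoff is that the common case (distinct partial tags) predicts the way without touching the wide full-tag array at all.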
-
Publication number: 20170060751
Abstract: A processor includes an instruction executing unit, which executes a memory access instruction, a cache memory unit, disposed between a main memory which stores data related to the memory access instruction and the instruction executing unit, a control information retaining unit which retains control information related to a prefetch issued to the cache memory unit, an address information retaining unit which retains address information based on the memory access instruction executed in the past, and a control unit which generates and issues a hardware prefetch request. The control unit compares address information retained in the address information retaining unit and an access address in the memory access instruction executed, and generates and issues based on a comparison result a hardware prefetch request to the cache memory unit according to the control information of the control information retaining unit, specified by specifying information added to the memory access instruction.
Type: Application
Filed: July 29, 2016
Publication date: March 2, 2017
Applicant: FUJITSU LIMITED
Inventors: Hideki Okawara, Masatoshi Haraguchi
-
Publication number: 20170060752
Abstract: A data caching method and a computer system are provided. In the method, when a miss of an access request occurs and a cache needs to determine a to-be-replaced cache line, not only a historical access frequency of the cache line but also a type of a memory corresponding to the cache line needs to be considered. A cache line corresponding to a DRAM type may be preferably replaced, which reduces a caching amount in the cache for data stored in a DRAM and relatively increases a caching amount for data stored in an NVM. For an access request for accessing the data stored in the NVM, corresponding data can be found in the cache whenever possible, thereby reducing cases of reading data from the NVM. Thus, a delay in reading data from the NVM is reduced, and access efficiency is effectively improved.
Type: Application
Filed: November 9, 2016
Publication date: March 2, 2017
Applicant: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Wei Wei, Lixin Zhang, Jin Xiong, Dejun Jiang
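The victim-selection rule above combines two signals: access frequency and backing-memory type. One way to express that as a single comparison key (a sketch; the tuple ordering and the dict fields `hits`/`mem` are illustrative assumptions, not the patent's exact policy):

```python
def choose_victim(cache_lines):
    """Pick the cache line to replace: coldest lines first, and among
    equally cold lines prefer evicting a DRAM-backed line, so that
    slower-to-refetch NVM-backed data stays cached."""
    return min(cache_lines,
               key=lambda ln: (ln["hits"],
                               0 if ln["mem"] == "DRAM" else 1))
```

Because the DRAM/NVM preference is the tiebreaker rather than the primary key, a genuinely hot DRAM line still survives over a cold NVM line; only the memory-type bias is what the abstract adds to a plain frequency-based policy.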
-
Publication number: 20170060753
Abstract: Presented herein are methods, non-transitory computer readable media, and devices for integrating a workload management scheme for a file system buffer cache with a global recycle queue infrastructure.
Type: Application
Filed: August 28, 2015
Publication date: March 2, 2017
Inventors: Peter DENZ, Matthew Curtis-Maury, Peter Wyckoff
-
Publication number: 20170060754
Abstract: According to one general aspect, an apparatus may include a store stream detector configured to detect when the apparatus is streaming data to a memory system. The apparatus may also include a write generator configured to route a stream of data to either a near memory of the memory system or a far memory of the memory system based upon a cache threshold value and a size of the stream of data. The apparatus may be configured to dynamically vary the cache threshold value based upon a predetermined rule set, such that cache pollution caused by the stream of data is managed.
Type: Application
Filed: January 13, 2016
Publication date: March 2, 2017
Inventors: Tarun NAKRA, Kevin LEPAK
-
Publication number: 20170060755
Abstract: A system includes a memory, a cache including multiple cache lines, and a processor. The processor may be configured to retrieve, from a first cache line, a first instruction to store data in a memory location at an address in the memory. The processor may be configured to retrieve, from a second cache line, a second instruction to read the memory location at the address in the memory. The second instruction may be retrieved after the first instruction. The processor may be configured to execute the second instruction at a first time dependent upon a value of a first entry in a table, wherein the first entry is selected dependent upon a value in the second cache line. The processor may be configured to execute the first instruction at a second time, after the first time, and invalidate the second instruction at a third time, after the second time.
Type: Application
Filed: August 28, 2015
Publication date: March 2, 2017
Inventor: Yuan Chou
-
Publication number: 20170060756
Abstract: In at least some embodiments, a processor core generates a store operation by executing a store instruction in an instruction sequence. The store operation is marked as a high priority store operation in response to the store instruction being marked as high priority and is not so marked otherwise. The store operation is buffered in a store queue associated with a cache memory of the processor core. Handling of the store operation in the store queue is expedited in response to the store operation being marked as a high priority store operation and not expedited otherwise.
Type: Application
Filed: August 28, 2015
Publication date: March 2, 2017
Inventors: GUY L. GUTHRIE, HUGH SHEN, JEFFREY A. STUECHELI, DEREK E. WILLIAMS
-
Publication number: 20170060757
Abstract: In at least some embodiments, a processor core generates a store operation by executing a store instruction in an instruction sequence. The store operation is marked as a high priority store operation in response to detecting a barrier instruction in the instruction sequence immediately preceding the store instruction in program order and is not so marked otherwise. The store operation is buffered in a store queue associated with a cache memory of the processor core. Handling of the store operation in the store queue is expedited in response to the store operation being marked as a high priority store operation and not expedited otherwise.
Type: Application
Filed: August 28, 2015
Publication date: March 2, 2017
Inventors: GUY L. GUTHRIE, HUGH SHEN, JEFFREY A. STUECHELI, DEREK E. WILLIAMS
-
Publication number: 20170060758
Abstract: In at least some embodiments, a processor core generates one or more store operations by executing one or more store instructions in an instruction sequence. The one or more store operations are marked as high priority store operations in response to detecting, in the instruction sequence, a window opening instruction and a window closing instruction bounding the one or more store instructions and are not so marked otherwise. The one or more store operations are buffered in a store queue associated with a cache memory of the processor core. Handling of the one or more store operations in the store queue is expedited in response to the one or more store operations being marked as high priority store operations and not expedited otherwise.
Type: Application
Filed: August 28, 2015
Publication date: March 2, 2017
Inventors: GUY L. GUTHRIE, HUGH SHEN, JEFFREY A. STUECHELI, DEREK E. WILLIAMS
-
Publication number: 20170060759
Abstract: In at least some embodiments, a processor core generates a store operation by executing a store instruction in an instruction sequence. The store operation is marked as a high priority store operation in response to the store instruction being marked as high priority and is not so marked otherwise. The store operation is buffered in a store queue associated with a cache memory of the processor core. Handling of the store operation in the store queue is expedited in response to the store operation being marked as a high priority store operation and not expedited otherwise.
Type: Application
Filed: September 30, 2015
Publication date: March 2, 2017
Inventors: GUY L. GUTHRIE, HUGH SHEN, JEFFREY A. STUECHELI, DEREK E. WILLIAMS
-
Publication number: 20170060760
Abstract: In at least some embodiments, a processor core generates a store operation by executing a store instruction in an instruction sequence. The store operation is marked as a high priority store operation in response to detecting a barrier instruction in the instruction sequence immediately preceding the store instruction in program order and is not so marked otherwise. The store operation is buffered in a store queue associated with a cache memory of the processor core. Handling of the store operation in the store queue is expedited in response to the store operation being marked as a high priority store operation and not expedited otherwise.
Type: Application
Filed: September 30, 2015
Publication date: March 2, 2017
Inventors: GUY L. GUTHRIE, HUGH SHEN, JEFFREY A. STUECHELI, DEREK E. WILLIAMS
-
Publication number: 20170060761
Abstract: In at least some embodiments, a processor core generates a store operation by executing a store instruction in an instruction sequence. The store operation is marked as a high priority store operation in response to detecting, in the instruction sequence, a barrier instruction that precedes the store instruction in program order and that includes a field set to indicate the store operation should be accorded high priority and is not so marked otherwise. The store operation is buffered in a store queue associated with a cache memory of the processor core. Handling of the store operation in the store queue is expedited in response to the store operation being marked as a high priority store operation and not expedited otherwise.
Type: Application
Filed: September 30, 2015
Publication date: March 2, 2017
Inventors: GUY L. GUTHRIE, HUGH SHEN, JEFFREY A. STUECHELI, DEREK E. WILLIAMS
-
Publication number: 20170060762
Abstract: In at least some embodiments, a processor core generates one or more store operations by executing one or more store instructions in an instruction sequence. The one or more store operations are marked as high priority store operations in response to detecting, in the instruction sequence, a window opening instruction and a window closing instruction bounding the one or more store instructions and are not so marked otherwise. The one or more store operations are buffered in a store queue associated with a cache memory of the processor core. Handling of the one or more store operations in the store queue is expedited in response to the one or more store operations being marked as high priority store operations and not expedited otherwise.
Type: Application
Filed: September 30, 2015
Publication date: March 2, 2017
Inventors: GUY L. GUTHRIE, HUGH SHEN, JEFFREY A. STUECHELI, DEREK E. WILLIAMS
-
Publication number: 20170060763Abstract: A storage device includes a flash memory-based cache for a hard disk-based storage device and a controller that is configured to limit the rate of cache updates through a variety of mechanisms, including determinations that the data is not likely to be read back from the storage device within a time period that justifies its storage in the cache, compressing data prior to its storage in the cache, precluding storage of sequentially-accessed data in the cache, and/or throttling storage of data to the cache within predetermined write periods and/or according to user instruction.Type: ApplicationFiled: November 10, 2016Publication date: March 2, 2017Inventors: Umesh Maheshwari, Varun Mehta
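The update-rate-limiting mechanisms listed in the abstract (skip data unlikely to be reread, skip sequential streams, compress admitted data, and cap writes per window) can be sketched together. Thresholds, the window model, and all names are this sketch's assumptions:

```python
import time
import zlib

class ThrottledFlashCache:
    """Sketch of a write-limited flash cache front-end for a disk-backed
    store: skips data judged unlikely to be reread, precludes sequential
    streams, compresses what it admits, and caps bytes per time window."""

    def __init__(self, bytes_per_window, window_seconds=1.0):
        self.cache = {}
        self.budget = bytes_per_window
        self.bytes_this_window = 0
        self.window_seconds = window_seconds
        self.window_start = time.monotonic()

    def maybe_admit(self, key, data, sequential=False, reread_likely=True):
        now = time.monotonic()
        if now - self.window_start >= self.window_seconds:
            self.window_start, self.bytes_this_window = now, 0
        if sequential or not reread_likely:
            return False                  # precluded from the cache
        payload = zlib.compress(data)     # stored compressed
        if self.bytes_this_window + len(payload) > self.budget:
            return False                  # throttled for this window
        self.cache[key] = payload
        self.bytes_this_window += len(payload)
        return True

    def read(self, key):
        payload = self.cache.get(key)
        return None if payload is None else zlib.decompress(payload)
```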
-
Publication number: 20170060764Abstract: Methods and systems are presented for evicting or copying-forward blocks in a storage system during garbage collection. In one method, a block status is maintained in a first memory to identify if the block is active or inactive, blocks being stored in segments that are configured to be cacheable in a second memory, a read-cache memory. Whenever an operation on a block is detected making the block inactive in one volume, the system determines if the block is still active in any volume, the block being cached in a first segment in the second memory. When the system detects that the first segment is being evicted from the second memory, the system re-caches the block into a second segment in the second memory if the block status of the block is active and the frequency of access to the block is above a predetermined value.Type: ApplicationFiled: February 9, 2016Publication date: March 2, 2017Inventors: Pradeep Shetty, Senthil Kumar Ramamoorthy, Umesh Maheshwari, Vanco Buca
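The copy-forward decision during eviction reduces to two checks per block: is it still active in some volume, and is its access frequency above the threshold. A sketch under assumed names and data layouts:

```python
class ReadCache:
    """Sketch: when a segment is evicted from the read cache, copy
    forward (re-cache) blocks that are still active in some volume and
    accessed above a frequency threshold (threshold value assumed)."""

    def __init__(self, hot_threshold=2):
        self.segments = {}       # segment id -> {block id: data}
        self.active = {}         # block id -> set of volumes using it
        self.access_count = {}   # block id -> read frequency
        self.hot_threshold = hot_threshold

    def cache_block(self, segment, block, data):
        self.segments.setdefault(segment, {})[block] = data

    def evict_segment(self, old_segment, new_segment):
        blocks = self.segments.pop(old_segment, {})
        for block, data in blocks.items():
            still_active = bool(self.active.get(block))
            hot = self.access_count.get(block, 0) >= self.hot_threshold
            if still_active and hot:
                self.cache_block(new_segment, block, data)  # re-cache
```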
-
Publication number: 20170060765Abstract: A computer system provides a mechanism for assuring a safe, non-preemptible access to a private data area (PRDA) belonging to a CPU. PRDA accesses generally include obtaining an address of a PRDA and performing operations on the PRDA using the obtained address. Safe, non-preemptible access to a PRDA generally ensures that a context accesses the PRDA of the CPU on which the context is executing, but not the PRDA of another CPU. While a context executes on a first CPU, the context obtains the address of the PRDA. After the context is migrated to a second CPU, the context performs one or more operations on the PRDA belonging to the second CPU using the address obtained while the context executed on the first CPU. In another embodiment, preemption and possible migration of a context from one CPU to another CPU is delayed while a context executes non-preemptible code.Type: ApplicationFiled: August 28, 2015Publication date: March 2, 2017Inventors: Cyprien LAPLACE, Harvey TUCH, Andrei WARKENTIN, Adrian DRZEWIECKI
-
Publication number: 20170060766Abstract: A method for storing service level agreement (“SLA”) compliance data includes reserving a memory location to store SLA compliance data of a software thread. The method includes directing the software thread to run on a selected hardware device. The method includes enabling SLA compliance data to be stored in the memory location. The SLA compliance data is from a hardware counting device in communication with the selected hardware device. The SLA compliance data corresponds to operation of the software thread on the selected hardware device.Type: ApplicationFiled: September 24, 2015Publication date: March 2, 2017Inventors: RAJARSHI DAS, AARON C. SAWDEY, PHILIP L. VITALE
-
Publication number: 20170060767Abstract: An I/O DMA address may be translated for a flexible number of entries in a translation validation table (TVT) for a partitionable endpoint number, when a particular entry in the TVT is accessed based on the partitionable endpoint number. A presence of an extended mode bit can be detected in a particular TVT entry. Based on the presence of the extended mode bit, an entry in the extended TVT can be accessed and used to translate the I/O DMA address to a physical address.Type: ApplicationFiled: September 29, 2015Publication date: March 2, 2017Inventors: Jesse P. Arroyo, Rama K. Hazari, Sakethan R. Kotta, Kumaraswamy Sripathy
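The two-level lookup can be sketched as: index the TVT by partitionable endpoint number, and if the entry's extended mode bit is set, chase it into the extended TVT before forming the physical address. The field layout below is illustrative, not the patented encoding:

```python
EXTENDED_MODE = 1 << 63   # hypothetical position of the extended mode bit

def translate(dma_addr, pe_number, tvt, extended_tvt, page_size=4096):
    """Sketch of TVT lookup: the partitionable endpoint number selects a
    TVT entry; if its extended mode bit is set, the real mapping lives
    in the extended TVT, indexed by the entry's low bits."""
    entry = tvt[pe_number]
    if entry & EXTENDED_MODE:
        entry = extended_tvt[entry & ~EXTENDED_MODE]
    base = entry                           # physical page base
    return base + (dma_addr % page_size)   # add the page offset
```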
-
Publication number: 20170060768Abstract: Techniques and systems are provided for tracking commands. Such methods and systems can include maintaining a meta page in a volatile memory to track commands. The meta page can comprise information associated with a non-volatile memory superblock. When an invalidation command is received for a first logical address, the first logical address can be stored along with an indication that the data associated with the first logical address is invalid, in a first location in the meta page.Type: ApplicationFiled: February 2, 2016Publication date: March 2, 2017Inventors: Fan Zhang, Taeil Um, Yu Cai
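The meta-page bookkeeping for an invalidation command amounts to appending the logical address with an invalid flag, with the latest record for an address taking precedence. A minimal sketch (record layout assumed):

```python
class MetaPage:
    """Sketch of a volatile meta page tracking commands against one
    non-volatile superblock: an invalidation records the logical
    address together with an 'invalid' indication."""

    def __init__(self, superblock_id):
        self.superblock_id = superblock_id
        self.entries = []        # ordered (logical_address, valid) records

    def record_write(self, lba):
        self.entries.append((lba, True))

    def record_invalidate(self, lba):
        self.entries.append((lba, False))   # data at lba now invalid

    def is_valid(self, lba):
        # The latest record for the address wins.
        for addr, valid in reversed(self.entries):
            if addr == lba:
                return valid
        return False
```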
-
Publication number: 20170060769Abstract: Described are various embodiments of systems, devices and methods for generating locality-indicative data representations of data streams, and compressions thereof. In one such embodiment, a method is provided for determining an indication of locality of data elements in a data stream communicated over a communication medium. This method comprises determining, for at least two sample times, count values of distinct values for each of at least two distinct value counters, wherein each of the distinct value counters has a unique starting time; and comparing corresponding count values for at least two of the distinct value counters to determine an indication of locality of data elements in the data stream at one of the sample times.Type: ApplicationFiled: April 29, 2015Publication date: March 2, 2017Inventors: Jacob Taylor Wires, Andrew Warfield, Stephen Frowe Ingram, Nicholas James Alexander Harvey, Zachary Drudi
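The staggered-counter idea can be sketched directly: a new distinct-value counter starts at each sample time, every counter observes all subsequent elements, and the difference between an older and a younger counter's counts indicates how many distinct values are unique to the older window. Exact sets stand in here for the compact probabilistic counters a real system would use:

```python
class LocalityEstimator:
    """Sketch of distinct-value counters with unique starting times.
    Comparing counts across counters indicates locality of the stream."""

    def __init__(self):
        self.counters = []       # list of (start_time, set of values seen)

    def start_counter(self, t):
        self.counters.append((t, set()))

    def observe(self, value):
        for _, seen in self.counters:
            seen.add(value)

    def counts(self):
        return [(t, len(seen)) for t, seen in self.counters]
```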
-
Publication number: 20170060770Abstract: An I/O DMA address may be translated for a flexible number of entries in a translation validation table (TVT) for a partitionable endpoint number, when a particular entry in the TVT is accessed based on the partitionable endpoint number. A presence of an extended mode bit can be detected in a particular TVT entry. Based on the presence of the extended mode bit, an entry in the extended TVT can be accessed and used to translate the I/O DMA address to a physical address.Type: ApplicationFiled: September 1, 2015Publication date: March 2, 2017Inventors: Jesse P. Arroyo, Rama K. Hazari, Sakethan R. Kotta, Kumaraswamy Sripathy
-
Publication number: 20170060771Abstract: In an example, a processing system of a database system may categorize event data taken from logged interactions of users with a multi-tenant information system to provide a metric. Event roll-up aggregate metrics used to provide the metric may be generated in connection with event capture. The processing system of the database system may periodically calculate the metric for a particular one of the tenants, and electronically store the periodically calculated metrics for accessing responsive to a query of the particular tenant.Type: ApplicationFiled: August 31, 2015Publication date: March 2, 2017Inventors: Aakash Pradeep, Adam Torman, Samarpan Jain, Alex Warshavsky
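The capture-time roll-up plus periodic calculation can be sketched as below. The specific metric (events per distinct user) and all names are this sketch's invention, chosen only to show aggregates maintained at capture feeding a periodically stored, per-tenant result:

```python
from collections import defaultdict

class EventMetrics:
    """Sketch: roll-up aggregates are updated at event capture; a
    periodic job derives and stores a per-tenant metric from them."""

    def __init__(self):
        self.rollups = defaultdict(lambda: {"events": 0, "users": set()})
        self.stored = defaultdict(list)   # tenant -> metric history

    def capture(self, tenant, user, event):
        agg = self.rollups[tenant]
        agg["events"] += 1
        agg["users"].add(user)

    def calculate_and_store(self, tenant):
        agg = self.rollups[tenant]
        metric = agg["events"] / max(1, len(agg["users"]))
        self.stored[tenant].append(metric)   # queryable by that tenant
        return metric
```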
-
Publication number: 20170060772Abstract: Techniques are provided for maintaining data persistently in one format, but making that data available to a database server in more than one format. For example, one of the formats in which the data is made available for query processing is based on the on-disk format, while another of the formats in which the data is made available for query processing is independent of the on-disk format. Data that is in the format that is independent of the disk format may be maintained exclusively in volatile memory to reduce the overhead associated with keeping the data in sync with the on-disk format copies of the data. Selection of data to be maintained in the volatile memory may be based on various factors. Once selected the data may also be compressed to save space in the volatile memory. The compression level may depend on one or more factors that are evaluated for the selected data.Type: ApplicationFiled: August 31, 2015Publication date: March 2, 2017Inventors: CHINMAYI KRISHNAPPA, VINEET MARWAH, AMIT GANESH
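The persist-once, query-twice arrangement can be sketched as follows: the on-disk form stays authoritative, selected data also lives in a volatile compressed copy that is kept in sync on writes and preferred at query time. This is a loose illustration, not the database server's actual machinery:

```python
import zlib

class DualFormatStore:
    """Sketch of dual-format availability: data persists in one (row)
    format, while selected tables also get a volatile, compressed
    in-memory copy that is never written back to disk."""

    def __init__(self):
        self.on_disk = {}    # table -> row-format bytes (authoritative)
        self.in_memory = {}  # table -> compressed alternate-format copy

    def write(self, table, rows):
        self.on_disk[table] = rows
        if table in self.in_memory:            # keep the copies in sync
            self.in_memory[table] = zlib.compress(rows)

    def populate_in_memory(self, table):
        self.in_memory[table] = zlib.compress(self.on_disk[table])

    def read_for_query(self, table):
        # Prefer the in-memory format when it has been populated.
        if table in self.in_memory:
            return zlib.decompress(self.in_memory[table])
        return self.on_disk[table]
```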
-
Publication number: 20170060773Abstract: Embodiments of the present invention disclose a method and apparatus of cache management for a non-volatile storage device. The method embodiment includes: determining a size relationship between a capacity sum of a clean page subpool and a dirty page subpool and a cache capacity; determining, when the capacity sum is equal to the cache capacity, whether identification information of a to-be-accessed page is in a history list of clean pages or a history list of dirty pages; and when it is determined that the identification information of the to-be-accessed page is in the history list of clean pages, adding a first adjustment value to a clean page subpool capacity threshold; and when the identification information of the to-be-accessed page is in the history list of dirty pages, subtracting a second adjustment value from the clean page subpool capacity threshold.Type: ApplicationFiled: November 10, 2016Publication date: March 2, 2017Applicant: HUAWEI TECHNOLOGIES CO.,LTD.Inventor: Junhua Zhu
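The adaptive split described above (grow the clean subpool's threshold on a clean-history hit, shrink it on a dirty-history hit, only when the cache is full) can be sketched directly. Adjustment values and structure names are this sketch's choices:

```python
class CleanDirtyCache:
    """Sketch of the adaptive clean/dirty split: when the subpools fill
    the cache, a miss that hits the clean-page history list grows the
    clean subpool capacity threshold; a dirty-history hit shrinks it."""

    def __init__(self, cache_capacity, first_adj=1, second_adj=1):
        self.cache_capacity = cache_capacity
        self.clean_threshold = cache_capacity // 2
        self.clean, self.dirty = set(), set()
        self.clean_history, self.dirty_history = set(), set()
        self.first_adj, self.second_adj = first_adj, second_adj

    def on_access(self, page):
        full = len(self.clean) + len(self.dirty) == self.cache_capacity
        if not full:
            return
        if page in self.clean_history:
            self.clean_threshold = min(
                self.cache_capacity, self.clean_threshold + self.first_adj)
        elif page in self.dirty_history:
            self.clean_threshold = max(
                0, self.clean_threshold - self.second_adj)
```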
-
Publication number: 20170060774Abstract: A storage control device includes a first memory, a second memory, and a processor. The processor is configured to store a reference count of each of a plurality of first and second unit data. The processor is configured to arrange first entries of first management information in a first memory area on the first memory. The first entries each include a hash value and information indicating where corresponding one of the first unit data is stored. The processor is configured to arrange second entries of second management information in a second memory area on the second memory. The second entries each include a hash value, information indicating where corresponding one of the second unit data is stored, and the reference count. The processor is configured to arrange, in a third memory area on the first memory, index information for filtering hash values included in the second entries.Type: ApplicationFiled: August 23, 2016Publication date: March 2, 2017Applicant: FUJITSU LIMITEDInventors: Mikio ITO, Yuji Morita, Takako Kato, Osamu Kimura
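The layout reduces to a fast first-tier table, a slower second-tier table carrying reference counts, and an index that filters hashes before the slow tier is consulted. In the sketch below a plain set stands in for a compact filter (such as a Bloom filter), and all names are assumptions:

```python
import hashlib

class TwoTierDedupIndex:
    """Sketch of two-memory dedup management: hot entries in fast
    memory, the rest (with reference counts) behind a filter index."""

    def __init__(self):
        self.tier1 = {}           # hash -> location (fast memory)
        self.tier2 = {}           # hash -> (location, refcount) (slow memory)
        self.tier2_filter = set() # index filtering tier-2 hashes

    def add_tier2(self, data, location):
        h = hashlib.sha256(data).hexdigest()
        loc, refs = self.tier2.get(h, (location, 0))
        self.tier2[h] = (loc, refs + 1)       # bump the reference count
        self.tier2_filter.add(h)
        return h

    def lookup(self, data):
        h = hashlib.sha256(data).hexdigest()
        if h in self.tier1:
            return self.tier1[h]
        if h in self.tier2_filter:            # filter pass: check slow tier
            entry = self.tier2.get(h)
            return entry[0] if entry else None
        return None                           # filtered out: no slow access
```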
-
Publication number: 20170060775Abstract: Methods of securely encrypting and decrypting data stored within computer readable memory of a device are described. Additionally, a memory encryption unit architecture (200) is described. A disclosed encryption method comprises the steps of: providing (122) a key; encrypting (126) the data stored in the computer readable memory using the key; generating (132) an authentication code based on parameters stored in the computer readable memory; wrapping (136) the key using the authentication code to generate a wrapped key; and storing the wrapped key in the computer readable memory (30), wherein the validity of the wrapped key is linked to the authenticity of the data stored in the computer readable memory. This prevents successful decryption in the event of execution of modified or malicious code that alters the data stored in the computer readable memory.Type: ApplicationFiled: July 21, 2015Publication date: March 2, 2017Inventor: Hugues de Perthuis
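The core property (the wrapped key only unwraps correctly while the memory contents are authentic) can be illustrated with standard-library primitives. This is a toy: HMAC stands in for the authentication code, and an XOR pad stands in for a real AES key-wrap mode, purely to keep the sketch dependency-free:

```python
import hashlib
import hmac
import secrets

def authentication_code(memory_bytes, device_secret):
    """MAC over the memory region's contents/parameters (sketch)."""
    return hmac.new(device_secret, memory_bytes, hashlib.sha256).digest()

def wrap_key(key, auth_code):
    # Toy wrap: XOR the key with a digest derived from the
    # authentication code. Unwrapping with a code computed over
    # modified memory yields garbage, so tampered code cannot
    # recover the decryption key.
    pad = hashlib.sha256(auth_code).digest()[:len(key)]
    return bytes(k ^ p for k, p in zip(key, pad))

unwrap_key = wrap_key   # XOR wrapping is its own inverse
```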
-
Publication number: 20170060776Abstract: Described herein are various technologies pertaining to delivery of token-authenticated encrypted data. Content descriptor(s) (e.g., playlist(s)) can be modified to facilitate exchange of a token for a decryption key for browser(s) that do not provide logic to manage a flow of the token.Type: ApplicationFiled: August 31, 2015Publication date: March 2, 2017Applicant: Microsoft Technology Licensing, LLCInventors: Douglas Charles Shimonek, Dawei Wei, Steven C. Peterson, Mingfei Yan, Ashish Chawla, Vishal Sood, Quintin Swayne Burns