Patent Applications Published on March 7, 2019
-
Publication number: 20190073267Abstract: A method includes sending, by a computing device of a distributed storage and task (DST) network, a plurality of sets of encoded data slices and index generation task information to a set of DST execution units. The method further includes receiving partial data indexes from the set of DST execution units. The method further includes generating a data index based on the partial data indexes and determining an operational task from a list of operational tasks that includes storing the plurality of sets of encoded data slices, storing the data index, storing the raw data long term, executing a data processing function on the sets of encoded data slices, and executing a data processing function on the data index. The method further includes partitioning the operational task into a set of partial tasks and sending the set of partial tasks to the set of DST execution units.Type: ApplicationFiled: November 7, 2018Publication date: March 7, 2019Inventors: Andrew D. Baptist, Greg R. Dhuse, S. Christopher Gladwin, Gary W. Grube, Wesley B. Leggette, Manish Motwani, Jason K. Resch, Thomas F. Shirley, Jr., Ilya Volvovski
-
Publication number: 20190073268Abstract: The present disclosure provides a system and method for switching firmware autonomously by a storage controller. The system and method include determining, by a switcher module of the storage controller, satisfaction of a debug condition based upon values of parameters of the debug condition. The debug condition is indicative of a problem within a storage system that includes the storage controller that facilitates communication between a host device and a non-volatile storage of the storage system. The system and method further include switching, by the switcher module, operation of the storage system from a primary firmware to a secondary firmware based upon the determination of the switcher module that the debug condition has been satisfied. The switching from the primary firmware to the secondary firmware occurs automatically without a switching request from the host device.Type: ApplicationFiled: September 6, 2017Publication date: March 7, 2019Inventors: Beniamin Kantor, Judah Gamliel Hahn, Ilya Gusev
-
Publication number: 20190073269Abstract: Techniques for identifying and remedying performance issues of Virtualized Network Functions (VNFs) are discussed. An example system includes processor(s) configured to: process VNF Performance Measurement (PM) data received from a network Element Manager (EM) for a VNF; determine whether the VNF has a negative performance issue based on the VNF PM data; request that the EM create a Virtualization Resource (VR) PM job associated with a VR of the VNF when the VNF has the negative performance issue; process VR PM data received from the EM; determine whether to restart the VNF based on the VR PM data and the VNF PM data; and request a network function virtualization orchestrator (NFVO) to restart the VNF based on a determination to restart the VNF.Type: ApplicationFiled: May 18, 2016Publication date: March 7, 2019Inventors: Joey Chou, Stephen Gooch, Meghashree Dattatri Kedalagudde
-
Publication number: 20190073270Abstract: A new snapshot of a storage volume is created by instructing computing nodes to suppress write requests. Once pending write requests from the computing nodes are completed, storage nodes create a new snapshot for the storage volume by allocating a new segment to the new snapshot, and finalize and perform garbage collection with respect to segments allocated to the previous snapshot. Subsequent write requests to the storage volume are then performed on the segments allocated to the new snapshot. A segment map associates segments with a particular snapshot, and metadata stored in each segment indicates storage volume addresses of data written to the segment. The snapshots may be represented by a storage manager in a hierarchy that identifies an ordering of snapshots and branches to clone snapshots.Type: ApplicationFiled: September 5, 2017Publication date: March 7, 2019Inventors: Dhanashankar Venkatesan, Partha Sarathi Seetala, Gurmeet Singh
-
Publication number: 20190073271Abstract: Disclosed are various embodiments for performing a backup of a device and/or performing a wipe or removal of data from a device enrolled with a device management service. In various embodiments, a wipe request is generated by a management service and transmitted to a client device. The wipe request includes commands to backup enterprise data for a particular application, verify that the management service has received the enterprise data, and remove the enterprise data from the client device. The management service determines that the enterprise data is received from the client device and transmits a confirmation that the management service has received the enterprise data. The confirmation causes the client device to remove the enterprise data from the client device.Type: ApplicationFiled: November 2, 2018Publication date: March 7, 2019Inventors: Erich Peter Stuntebeck, Jonathan Blake Brannon
-
Publication number: 20190073272Abstract: A memory-only snapshot of a file is disclosed that can be used by a system to perform a read of the file without disrupting other systems from reading or writing to the file. The in-memory snapshot structure includes a copy of allocation information of the file that points to the file data and can be used by a backup application to read the file. A lock manager is provided that is configured to include an output parameter for file locks that includes allocation information associated with the in-memory file snapshot, thereby notifying a writer to the file that it must perform copy-on-write operations. A free space manager is also provided that is configured to track pinned blocks so as to allow a write to a file to free snapshot blocks while preventing the freed blocks from being allocated to other files until backup read processing is completed.Type: ApplicationFiled: September 1, 2017Publication date: March 7, 2019Inventor: Scott T. Marcotte
-
Publication number: 20190073273Abstract: A data transfer software application for transmitting data is provided. The system comprises a first electronic device, a second electronic device, and a software program downloaded on both the first electronic device and the second electronic device. Upon launching the software program, the software program automatically and immediately downloads data from the first electronic device to the second electronic device.Type: ApplicationFiled: September 6, 2017Publication date: March 7, 2019Inventor: Phillippe Lorson
-
Publication number: 20190073274Abstract: There is provided a non-transitory computer-readable recording medium in which a program is recorded, the program causing a computer to function as: a communication unit that receives a beacon signal from an electronic apparatus; and a processing unit that controls communication of the communication unit, in which the processing unit performs notification processing of a backup function of backup data into the electronic apparatus in a case where it is determined that the electronic apparatus from which the beacon signal is transmitted is an apparatus having a backup function and that a distance from the electronic apparatus is equal to or smaller than a predetermined threshold value based on a reception radio wave intensity of the beacon signal.Type: ApplicationFiled: September 4, 2018Publication date: March 7, 2019Inventor: Daiki YOKOYAMA
-
Publication number: 20190073275Abstract: A computer-implemented method for managing a tiered storage system having an archive tier and an active storage tier comprises determining a workload for moving data between the active tier and the archive tier; and determining an assignment of data to be stored across the active tier and the archive tier, based on the determined workload.Type: ApplicationFiled: September 7, 2017Publication date: March 7, 2019Inventors: Slavisa Sarafijanovic, Yusik Kim, Vinodh Venkatesan, Ilias Iliadis, Robert B. Basham
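The workload-driven tier assignment in the abstract above can be sketched as a simple partition by predicted recall rate; the break-even cost model, field names, and sample values here are illustrative assumptions, not the patent's actual scheme.

```python
# Hedged sketch: assign data items to the active or archive tier by
# comparing a predicted recall rate against a movement-cost break-even
# point (an assumed, simplified workload model).

def assign_tiers(items, breakeven_recalls_per_day):
    """Partition items across the active and archive tiers by
    predicted recall frequency."""
    active, archive = [], []
    for item in items:
        if item["recalls_per_day"] >= breakeven_recalls_per_day:
            active.append(item["name"])    # recalled often: keep active
        else:
            archive.append(item["name"])   # rarely recalled: archive
    return active, archive

items = [
    {"name": "hot.db", "recalls_per_day": 12.0},
    {"name": "cold.tar", "recalls_per_day": 0.1},
]
active, archive = assign_tiers(items, breakeven_recalls_per_day=1.0)
print(active, archive)  # ['hot.db'] ['cold.tar']
```

A production policy would also account for the determined movement workload itself, which this sketch omits.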
-
Publication number: 20190073276Abstract: An improved approach for disaster recovery is provided, along with corresponding systems, methods, and computer readable media. In the improved approach, a set of applications are assigned one or more application weightings (e.g., based upon asset type, a recovery time objective, a recovery time capability, a criticality to key business functions, vendor hosting, interfaces with other systems), etc. The one or more application weightings are utilized for ranking the applications, and the ranked set of applications is utilized to generate a disaster recovery boot sequence whereby specific recovery tasks and infrastructure device requirements are arranged temporally to achieve one or more recovery time conditions.Type: ApplicationFiled: September 6, 2017Publication date: March 7, 2019Inventors: Elton Yuen, Jacqueline Kirkland, Michael Stoyke
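The weighted ranking step described above, where application weightings produce an ordering used to build the recovery boot sequence, can be sketched as follows. The weighting keys, weights, and application records are hypothetical illustrations, not the disclosed scheme.

```python
# Hedged sketch: rank applications for a disaster recovery boot
# sequence by a weighted sum of assumed attributes (criticality,
# recovery-time urgency).

def rank_applications(apps, weights):
    """Score each application by its weighted attributes and return
    names ordered for the recovery boot sequence, highest first."""
    def score(app):
        return sum(weights[k] * app[k] for k in weights)
    return [a["name"] for a in sorted(apps, key=score, reverse=True)]

apps = [
    {"name": "payments", "criticality": 5, "rto_urgency": 4},
    {"name": "reporting", "criticality": 2, "rto_urgency": 1},
    {"name": "auth", "criticality": 5, "rto_urgency": 5},
]
weights = {"criticality": 0.6, "rto_urgency": 0.4}

boot_sequence = rank_applications(apps, weights)
print(boot_sequence)  # ['auth', 'payments', 'reporting']
```

In the disclosure, the ordered list would then be expanded into temporally arranged recovery tasks and infrastructure requirements; the sketch covers only the ranking.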
-
Publication number: 20190073277Abstract: A transaction recovery method in a database system and a database management system, where the database management system determines a transaction in the database system, which is not committed, and has performed an update operation on data in the database system, obtains, from an update operation log recording the update operation, a physical address of an old value of the data in the storage device, where the old value is a value of the data before the update operation, replaces a physical address of a new value of the data with the physical address of the old value, and sets the physical address of the new value to invalid such that a logical address of the data points to the physical address of the old value, where the new value is a value of the data after the update operation.Type: ApplicationFiled: November 6, 2018Publication date: March 7, 2019Inventors: Xiaofeng Meng, Dongwang Sun
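The rollback mechanism above, which re-points a logical address at the old value's physical address and invalidates the new one, can be sketched with plain dictionaries; the field names are assumptions for illustration.

```python
# Hedged sketch: undo one uncommitted update by restoring the old
# physical address from the update operation log and marking the new
# physical address invalid.

def rollback(logical_map, log_entry):
    """Point the logical address back at the old physical address and
    invalidate the new one."""
    lba = log_entry["logical_address"]
    logical_map[lba] = log_entry["old_physical"]  # restore old value
    log_entry["new_physical_valid"] = False       # new value invalidated
    return logical_map

# Logical address 0x10 currently points at the new, uncommitted value.
logical_map = {0x10: 0xB2}
entry = {
    "logical_address": 0x10,
    "old_physical": 0xA1,
    "new_physical": 0xB2,
    "new_physical_valid": True,
}
rollback(logical_map, entry)
print(hex(logical_map[0x10]))  # 0xa1
```

No data is copied back: recovery is just a pointer swap, which is the point of logging the old value's physical address.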
-
Publication number: 20190073278Abstract: The present disclosure relates to a signal processing device, a signal processing method, and a program that enable detection of a failure without stopping signal processing. A toggle rate is calculated for each of signals of a preceding stage and a subsequent stage of a signal processing unit and, in a case where a difference therebetween is larger than a predetermined threshold value, it is assumed that an error caused by a failure is occurring in the signal processing unit. The present disclosure can be applied to a drive assistance device of a vehicle.Type: ApplicationFiled: February 10, 2017Publication date: March 7, 2019Applicant: Sony CorporationInventor: Noritaka IKEDA
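The toggle-rate comparison above can be sketched numerically, assuming a "toggle rate" means the fraction of consecutive samples at which a binary signal changes value; the threshold and sample traces are illustrative.

```python
# Hedged sketch: flag a fault in a signal processing unit when the
# toggle rates of its input and output signals diverge by more than a
# threshold.

def toggle_rate(samples):
    """Fraction of consecutive sample pairs where the signal toggles."""
    changes = sum(1 for a, b in zip(samples, samples[1:]) if a != b)
    return changes / (len(samples) - 1)

def failure_suspected(pre_stage, post_stage, threshold=0.3):
    """True when preceding- and subsequent-stage toggle rates differ
    by more than the threshold."""
    return abs(toggle_rate(pre_stage) - toggle_rate(post_stage)) > threshold

pre = [0, 1, 0, 1, 0, 1, 0, 1, 0]    # toggles every sample: rate 1.0
post = [0, 0, 0, 0, 1, 1, 1, 1, 1]   # nearly stuck output: rate 0.125
print(failure_suspected(pre, post))  # True
```

Because the check runs on live signals, detection needs no test pattern and no interruption of processing, matching the abstract's goal.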
-
Publication number: 20190073279Abstract: Provided are a computer program product, system, and method for managing read and write requests from a host to tracks in storage cached in a cache. A determination is made whether track format table support information for a track indicates that a track format table was previously determined to have or not have the track format code for track format metadata. Track format metadata for the track is rebuilt to determine whether the track format table includes a track format code for the rebuilt track format metadata when the track format table support information indicates that the track format table was previously determined to have a track format code for the track. The track format metadata is not rebuilt when the track format table support information indicates that the track format table was previously determined to not have a track format code for the track.Type: ApplicationFiled: September 1, 2017Publication date: March 7, 2019Inventors: Kyler A. Anderson, Kevin J. Ash, Susan K. Candelaria, Lokesh M. Gupta, Beth A. Peterson
-
Publication number: 20190073280Abstract: A plurality of tasks are executed on a plurality of central processing units (CPUs) of a computational device. In response to an occurrence of an event in the computational device, one or more CPUs that are executing tasks associated with an event category to which the event belongs are stopped within a first predetermined amount of time. In response to stopping the one or more CPUs, a data set indicative of a state of the computational device is collected, for at most a second predetermined amount of time.Type: ApplicationFiled: September 5, 2017Publication date: March 7, 2019Inventors: Matthew D. Carson, Trung N. Nguyen, Louis A. Rasor, Todd C. Sorenson
-
Publication number: 20190073281Abstract: A first system receives values with identifiers of the values from one or more clients. The first system enters the values sequentially into a first data store. The first system associates each of the values with a sequence ID indicating a position in entry sequence of the values into the first data store. The first system transmits a first identifier of a first value and a first sequence ID associated with the first value to a second system. The first system transmits the first sequence ID and the first value to the second system after transmitting the first identifier and the first sequence ID. The second system holds the first identifier and the first sequence ID transmitted from the first system in a first queue. The second system enters the first value received after the first identifier from the first system into a second data store.Type: ApplicationFiled: September 5, 2016Publication date: March 7, 2019Applicant: Hitachi, Ltd.Inventors: Arif Herusetyo WICAKSONO, Kazuhide AIKOH
-
Publication number: 20190073282Abstract: A method of operating a remote procedure call cache in a storage cluster is provided. The method includes receiving a remote procedure call at a first storage node having solid-state memory and writing information, relating to the remote procedure call, to a remote procedure call cache of the first storage node. The method includes mirroring the remote procedure call cache of the first storage node in a mirrored remote procedure call cache of a second storage node. A plurality of storage nodes and a storage cluster are also provided.Type: ApplicationFiled: October 29, 2018Publication date: March 7, 2019Applicant: Pure Storage, Inc.Inventors: John Hayes, Robert Lee, Peter Vajgel, Joshua Robinson
-
Publication number: 20190073283Abstract: A computer-implemented method, according to one embodiment, includes: splitting received information between two controllers of a system in a normal operating mode, the received information including data and metadata; storing the metadata in resilient storage in response to a first of the controllers entering a failed state; updating the first controller with information received while the first controller was in the failed state, the first controller being updated in response to the first controller being repaired; and returning the system to the normal operating mode in response to the first controller being updated. Storing the metadata in resilient storage includes: saving snapshots of the metadata in the resilient storage, and saving changes to the metadata which occur between the snapshots. The changes to the metadata are saved in a log structured array. Moreover, the two controllers store the received information in a specified system memory location.Type: ApplicationFiled: September 5, 2017Publication date: March 7, 2019Inventors: Lior Chen, Daniel Gan-Levi, Ronen Gazit, Ofer Leneman, Deborah A. Messing
-
Publication number: 20190073284Abstract: A first server of a storage controller is configured to communicate with a host via a first bus interface, and a second server of the storage controller is configured to communicate with the host via a second bus interface. Data is written from the host via the first bus interface to a cache of the first server and via the second bus interface to a non-volatile storage of the second server. The data stored in the cache of the first server is periodically compared to the data stored in the non-volatile storage of the second server.Type: ApplicationFiled: September 5, 2017Publication date: March 7, 2019Inventors: Kyler A. Anderson, Kevin J. Ash, Lokesh M. Gupta, Matthew J. Kalos
-
Publication number: 20190073285Abstract: An information processing device includes a device, a management device that is connected to the device via a first transmission route and configured to acquire information regarding the device via the first transmission route, and a processing device that is connected to the device via a second transmission route, connected to the management device via a third transmission route, and configured to initialize the device and acquire the information from the management device via the third transmission route.Type: ApplicationFiled: August 28, 2018Publication date: March 7, 2019Applicant: FUJITSU LIMITEDInventor: Akira Hayashida
-
Publication number: 20190073286Abstract: Systems and methods for computing and evaluating internet of things (IoT) readiness of a product. Traditional systems and methods provide IoT maturity check methodologies, but they are based more upon a cultural viewpoint than on an accurate quantified assessment. Embodiments of the present disclosure compute and evaluate the IoT readiness of the product by providing a set of information on a plurality of IoT compatible products and a target IoT integration platform and infrastructure for configuring and connecting the plurality of IoT compatible products, computing and assigning a set of scores to the plurality of IoT compatible products using a scoring engine module, and computing the revenue potential and one or more optimal methods of integrating and deploying the plurality of IoT compatible products based on a comparison of the set of assigned scores, the performance potential, and the potential revenue value of the plurality of IoT compatible products.Type: ApplicationFiled: August 31, 2018Publication date: March 7, 2019Applicant: Tata Consultancy Services LimitedInventors: Guruprasad Nagaraja KAMBALOOR, Chethan PRABHUDEVA
-
Publication number: 20190073287Abstract: An information processing apparatus includes a memory; and a processor coupled to the memory and configured to generate a performance model for calculating a performance value of an application program from a power restriction for each set of parameters of the application program, based on data acquired when a computing apparatus executes the application program for each set of parameters of the application program under each of a plurality of power restrictions; calculate, for each set of parameters of the application program, the performance value of the application program from a first power restriction different from any of the plurality of power restrictions, based on the performance model generated for each set of parameters of the application program; and output a set of parameters of the application program corresponding to a highest performance value of the calculated performance values.Type: ApplicationFiled: August 31, 2018Publication date: March 7, 2019Applicant: FUJITSU LIMITEDInventors: Miyuki Matsuo, Kohta Nakashima
-
Publication number: 20190073288Abstract: A performance management system includes: an information collection function unit configured to collect database utilization information indicating a utilization status of the database and component utilization information indicating a utilization status of components, which are constituent elements of the information system; a related component calculation function unit configured to acquire a component utilization, which is a proportion of an actual usage of the component to a maximum usage of the component, on the basis of the component utilization information and specify a related component related to the performance of the information system among the components on the basis of the component utilization and the database utilization information; and a prediction function unit configured to predict a future performance of the information system on the basis of utilization information of the related component.Type: ApplicationFiled: February 26, 2018Publication date: March 7, 2019Applicant: HITACHI, LTD.Inventor: Shinichi HAYASHI
-
Publication number: 20190073289Abstract: A method includes receiving data indicative of a number of times each of one or more rules was executed by a data processing application during processing of one or more records; based on the number of times each of the rules was executed by the data processing application, determining a content criterion for each of one or more particular fields; generating content for each of the particular fields based on the content criterion; and populating each of the particular fields with the generated content.Type: ApplicationFiled: November 6, 2018Publication date: March 7, 2019Inventors: Marshall A. Isman, Richard A. Epstein
-
Publication number: 20190073290Abstract: A fix defining a plurality of unique changes to a computer program can be identified. Program code units in the computer program changed by the unique changes are identified and corresponding data entries in a first data structure can be generated. A number of test cases available to test the program code units in the computer program changed by the unique changes can be determined by matching each of the program code units to corresponding data entries contained in a second data structure that correlates program code units to test cases. A test readiness index indicating a readiness of the fix to be tested can be automatically generated. The test readiness index can be based on a number of unique changes to the computer program defined by the fix and the number of test cases available to test the unique changes. The test readiness index can be output.Type: ApplicationFiled: October 27, 2018Publication date: March 7, 2019Inventors: Adam C. Geheb, Prasanna R. Joshi, Apurva S. Patel
-
Publication number: 20190073291Abstract: Generating instructions, in particular for mailbox verification in a simulation environment. A sequence of instructions is received, as well as selection data representative of a plurality of commands including a special command. Repeatedly selecting one of the plurality of commands and outputting an instruction based on the selected command. The outputting of an instruction includes outputting a next instruction in the sequence of instructions if the selected command is the special command, and outputting an instruction associated with the command if the selected command is not the special command.Type: ApplicationFiled: November 6, 2018Publication date: March 7, 2019Inventors: Joerg DEUTSCHLE, Ursel HAHN, Joerg WALTER, Ernst-Dieter WEISSENBERGER
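The selection loop above, where a special command emits the next instruction of a fixed sequence while ordinary commands emit their own associated instruction, can be sketched deterministically; the command names and instruction strings are hypothetical.

```python
# Hedged sketch: generate an instruction stream from selected
# commands, where the special command pulls the next instruction from
# the fixed input sequence instead of emitting its own instruction.

def generate(sequence, command_map, special, selections):
    """Emit one instruction per selected command; the special command
    outputs the next instruction in the received sequence."""
    seq_iter = iter(sequence)
    out = []
    for cmd in selections:
        if cmd == special:
            out.append(next(seq_iter))    # next instruction in sequence
        else:
            out.append(command_map[cmd])  # instruction tied to command
    return out

prog = generate(
    sequence=["MSG_A", "MSG_B"],
    command_map={"noop": "NOP", "sync": "SYNC"},
    special="special",
    selections=["noop", "special", "sync", "special"],
)
print(prog)  # ['NOP', 'MSG_A', 'SYNC', 'MSG_B']
```

In the disclosure the selection is repeated random draws from the selection data; the fixed `selections` list here just makes the interleaving visible.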
-
Publication number: 20190073292Abstract: Examples of the present disclosure relate to a device, method, and medium storing instructions for execution by a processor for testing state machine software. For example, a state machine software tester device includes a processor and a storage resource with instructions that, when executed on the processor, cause the processor to record a state transition. In an example of the device, the recording can be of a state transition of the component under test, where the state transition is recorded in a log file. In an example, the processor of the device can compare the log file to a number of stored state machine testing permutations. In an example device, the instructions may direct the processor to generate a component testing report for a state transition based on the compared log file and the stored state machine testing permutations.Type: ApplicationFiled: September 4, 2018Publication date: March 7, 2019Inventors: KATHIRVEL KUPPUSAMY, RAVIKUMAR RAMAN, JOHN MORRIS, SHREE JAISIMHA
-
Publication number: 20190073293Abstract: The invention provides a system and method for automated software testing based on machine learning (ML). The system automatically picks up the results of software test automation reports from the software test automation framework. The report parser parses the failures from the report. An ML engine compares them with the failures that are known or present in the NoSQL database. After the creation of a bug ticket in the defect-tracking tool, an automated notification system notifies the stakeholders via email or instant messaging about the status of the respective ticket. Feedback to the system by a software test engineer helps the system learn and adjust its decision making to be more precise.Type: ApplicationFiled: September 6, 2017Publication date: March 7, 2019Inventors: Mayank Mohan Sharma, Sudhanshu Gaur
-
Publication number: 20190073294Abstract: A package module may be provided. The package module may include a first chip and a second chip. The first chip may be configured to receive first pattern data to generate first transmission data in a first write mode. The second chip may be configured to receive the first transmission data to generate and output first sense data in a first read mode.Type: ApplicationFiled: January 5, 2018Publication date: March 7, 2019Applicant: SK hynix Inc.Inventors: Hak Song KIM, Min Su PARK
-
Publication number: 20190073295Abstract: A memory system includes: a memory device that includes a plurality of memory blocks each of which includes a plurality of pages for storing data; and a controller that includes a first memory, wherein the controller performs a foreground operation and a background operation onto the memory blocks, checks priorities and weights for the foreground operation and the background operation, schedules queues corresponding to the foreground operation and the background operation based on the priorities and the weights, allocates regions corresponding to the scheduled queues to the first memory, and performs the foreground operation and the background operation through the regions allocated to the first memory.Type: ApplicationFiled: March 5, 2018Publication date: March 7, 2019Inventor: Jong-Min LEE
-
Publication number: 20190073296Abstract: Data is stored on a non-volatile storage media in a sequential, log-based format. The formatted data defines an ordered sequence of storage operations performed on the non-volatile storage media. A storage layer maintains volatile metadata, which may include a forward index associating logical identifiers with respective physical storage units on the non-volatile storage media. The volatile metadata may be reconstructed from the ordered sequence of storage operations. Persistent notes may be used to maintain consistency between the volatile metadata and the contents of the non-volatile storage media. Persistent notes may identify data that does not need to be retained on the non-volatile storage media and/or is no longer valid.Type: ApplicationFiled: November 1, 2018Publication date: March 7, 2019Inventors: David Atkisson, David Nellans, David Flynn, Jens Axboe, Michael Zappe
-
Publication number: 20190073297Abstract: A method operable with the storage device includes determining a workload to the storage device based on host Input/Output (I/O) requests to the storage device. When the workload is above a threshold, a first portion of the storage device is selected for garbage collection based on the I/O requests. Otherwise, when the workload is below the threshold, a second different portion of the storage device is selected for garbage collection based on a storage ability of the second portion of the storage device.Type: ApplicationFiled: September 6, 2017Publication date: March 7, 2019Inventors: Ryan James Goss, Siddhartha K. Panda, Daniel J. Benjamin, Ryan C. Weidemann
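The threshold branch above can be sketched as follows, under the assumption that "based on the I/O requests" means preferring the block the host touches most and "storage ability" means the block with the most reclaimable invalid space; both metrics and the sample values are illustrative.

```python
# Hedged sketch: pick a garbage collection candidate differently
# depending on whether the host I/O workload is above or below a
# threshold.

def select_gc_block(blocks, workload, threshold):
    """Under high workload, pick the block most touched by host I/O;
    under low workload, pick the block with the most invalid space."""
    if workload > threshold:
        return max(blocks, key=lambda b: b["heat"])["id"]
    return max(blocks, key=lambda b: b["invalid_ratio"])["id"]

blocks = [
    {"id": "A", "heat": 90, "invalid_ratio": 0.2},
    {"id": "B", "heat": 10, "invalid_ratio": 0.8},
]
busy = select_gc_block(blocks, workload=500, threshold=100)
idle = select_gc_block(blocks, workload=50, threshold=100)
print(busy, idle)  # A B
```

The idea is that an idle drive can afford to reclaim the cheapest space, while a busy drive should let GC follow the host's access pattern.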
-
Publication number: 20190073298Abstract: A memory management method, and a memory control circuit unit and a memory storage apparatus using the same are provided. The method includes associating physical erasing units with a data area or a spare area, configuring a plurality of logical addresses for mapping to the physical erasing units, and obtaining a garbage collection threshold based on a plurality of valid logical addresses among the logical addresses, where the physical erasing units mapped to the valid logical addresses are associated with the data area. The method further includes performing a garbage collection operation on the data area if the number of the physical erasing units associated with the data area is no less than the garbage collection threshold.Type: ApplicationFiled: October 31, 2017Publication date: March 7, 2019Applicant: PHISON ELECTRONICS CORP.Inventors: Sheng-Han Wang, Hoe-Mang Mark
-
Publication number: 20190073299Abstract: An electronic device may include an information signal storage circuit and a write data selection circuit. The information signal storage circuit may be configured to store an information signal during a mode register set operation, and may be configured to output the stored information signal as a mode register information signal. The write data selection circuit may be configured to receive the mode register information signal and output the mode register information signal as write data.Type: ApplicationFiled: February 9, 2018Publication date: March 7, 2019Applicant: SK hynix Inc.Inventors: Chang Hyun KIM, Dong Kyun KIM
-
Publication number: 20190073300Abstract: A nested wrap-around technology includes an address counter and associated logic for generating addresses to perform a nested wrap-around access operation. The nested wrap-around access operation may be a read or a write operation. A wrap-around section length and a wrap-around count define a wrap-around block. A wrap starting address, initially set to a supplied start address, is offset from a lower boundary of a wrap-around section. Access starts at a wrap starting address and proceeds in a wrap-around manner within a wrap-around section. After access of the address immediately preceding the wrap starting address, the wrap starting address is incremented by the wrap-around section length, or, if the wrap-around section is the last one in the wrap-around block, the wrap starting address is set to the lower boundary of the wrap-around block plus the offset. Access continues until a termination event.Type: ApplicationFiled: November 5, 2018Publication date: March 7, 2019Applicant: MACRONIX INTERNATIONAL CO., LTD.Inventors: Kuen-Long Chang, Ken-Hui Chen, Chin-Hung Chang
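The address sequence described above can be traced with a small generator: access wraps within each section starting at an offset, the start hops by one section length, and after the last section returns to the first section of the block at the same offset. The parameter values below are illustrative assumptions.

```python
# Hedged sketch: produce the address sequence of a nested wrap-around
# access for a given start address, wrap-around section length, and
# wrap-around count (number of sections per block).

def nested_wraparound(start, section_len, section_count, total_accesses):
    """Yield the first `total_accesses` addresses of the nested
    wrap-around sequence beginning at `start`."""
    block_size = section_len * section_count
    block_base = (start // block_size) * block_size
    offset = (start - block_base) % section_len  # offset within a section
    wrap_start = start
    out = []
    while len(out) < total_accesses:
        section_base = (wrap_start // section_len) * section_len
        for i in range(section_len):  # one full wrap of the section
            addr = section_base + (wrap_start - section_base + i) % section_len
            out.append(addr)
            if len(out) == total_accesses:
                return out
        wrap_start += section_len     # advance to the next section
        if wrap_start >= block_base + block_size:
            wrap_start = block_base + offset  # back to first section + offset
    return out

# start=2 in a block of two 4-word sections:
# section wraps [2,3,0,1] then [6,7,4,5], then the pattern repeats.
print(nested_wraparound(2, 4, 2, 10))  # [2, 3, 0, 1, 6, 7, 4, 5, 2, 3]
```

In hardware this is an address counter plus comparators; the loop here only makes the resulting order inspectable.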
-
Publication number: 20190073301Abstract: A cache hit is generated, in response to receiving an input/output (I/O) command over a bus interface. An update for a metadata track is stored in a buffer associated with a central processing unit (CPU) that processes the I/O command, in response to generating the cache hit. The metadata track is asynchronously updated from the buffer with the stored update for the metadata track in the buffer.Type: ApplicationFiled: September 5, 2017Publication date: March 7, 2019Inventors: Kyler A, Anderson, Kevin J. Ash, Lokesh M. Gupta, Matthew J. Kalos
-
Publication number: 20190073302Abstract: An embodiment of a semiconductor apparatus may include technology to create two or more logical to physical translation maps for a persistent storage media, associate a respective group with each of the two or more logical to physical translation maps, assign priority information to the respective groups, and initialize the respective groups at boot time based on the assigned priority information. Another embodiment may further include technology to mark an initialized group as ready for commands. Other embodiments are disclosed and claimed.Type: ApplicationFiled: November 6, 2018Publication date: March 7, 2019Applicant: Intel CorporationInventors: Michael S. Allison, Jonathan Hughes
-
Publication number: 20190073303Abstract: Resource management techniques, such as cache optimization, are employed to organize resources within caches such that the most requested content (e.g., the most popular content) is more readily available. A service provider utilizes content expiration data as indicative of resource popularity. As resources are requested, the resources propagate through a cache server hierarchy associated with the service provider. More frequently requested resources are maintained at edge cache servers based on shorter expiration data that is reset with each repeated request. Less frequently requested resources are maintained at higher levels of a cache server hierarchy based on longer expiration data associated with cache servers higher on the hierarchy.Type: ApplicationFiled: November 2, 2018Publication date: March 7, 2019Inventors: Bradley Eugene Marshall, Swaminathan Sivasubramanian, David R. Richardson
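The edge-tier behavior above, where a short expiration is reset on every repeated request so that popular resources stay at the edge, can be sketched with a tiny TTL cache; the TTL value and integer clock are illustrative assumptions.

```python
# Hedged sketch: an edge cache whose short expiration is reset on each
# hit, so frequently requested resources remain cached while unpopular
# ones age out (and would be served from a parent tier with a longer
# TTL, not modeled here).

class EdgeCache:
    def __init__(self, ttl):
        self.ttl = ttl
        self.store = {}  # key -> (value, expiry time)

    def put(self, key, value, now):
        self.store[key] = (value, now + self.ttl)

    def get(self, key, now):
        entry = self.store.get(key)
        if entry is None or entry[1] <= now:
            return None  # miss or expired
        # Repeated request: reset the short expiration, keeping the
        # popular resource at the edge.
        self.store[key] = (entry[0], now + self.ttl)
        return entry[0]

edge = EdgeCache(ttl=10)
edge.put("logo.png", b"...", now=0)
print(edge.get("logo.png", now=5) is not None)   # True: hit, expiry -> 15
print(edge.get("logo.png", now=14) is not None)  # True: alive only via reset
print(edge.get("logo.png", now=100) is None)     # True: unpopular, expired
```

The expiration data thus doubles as a popularity signal: no separate hit counter is needed to decide what stays at the edge.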
-
Publication number: 20190073304Abstract: A system and method are provided for a snoop filter that provides larger address space coverage, frees back-invalidation when an entry is evicted, and frees excessive snoops when a snoop has a miss. The snoop filter tracks the addresses of upper level cache lines on a region basis, which enables a relatively smaller snoop filter with much larger address space coverage. The snoop filter is non-inclusive. The snoop filter is designed such that each upper level cache has its own bloom filter to track address space occupancy, eliminating a significant portion of conflict misses. The snoop filter is designed at a larger granularity such that applications have much better spatial locality. The larger granularity employs coarse grain tracking techniques, which allow monitoring of large regions of memory and use that information to avoid unnecessary broadcasts and filter unnecessary cache tag lookups, thus improving system performance and power consumption.Type: ApplicationFiled: September 7, 2017Publication date: March 7, 2019Inventor: Xiaowei JIANG
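The per-cache Bloom filter at region granularity can be illustrated with a toy sketch (parameters and hash seeds are illustrative, not from the patent): because a Bloom filter has no false negatives, a "definitely absent" answer safely filters out the snoop.

```python
# Hypothetical sketch of coarse-grain tracking: a per-cache Bloom filter
# records which memory *regions* (not lines) the cache may hold, so a
# snoop can be skipped when the filter reports "definitely absent".
class RegionBloomFilter:
    def __init__(self, num_bits=1024, region_shift=12):
        self.bits = 0
        self.num_bits = num_bits
        self.region_shift = region_shift  # e.g. track 4 KiB regions

    def _hashes(self, addr):
        region = addr >> self.region_shift
        for seed in (0x9E3779B1, 0x85EBCA77):  # two cheap multiplicative hashes
            yield (region * seed) % self.num_bits

    def insert(self, addr):
        for h in self._hashes(addr):
            self.bits |= 1 << h

    def may_contain(self, addr):
        # False means the region is definitely not cached: filter the snoop.
        return all(self.bits >> h & 1 for h in self._hashes(addr))
```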
-
Publication number: 20190073305Abstract: Various aspects include methods for implementing reuse aware cache line insertion and victim selection in large cache memory on a computing device. Various aspects may include receiving a cache access request for a cache line in a higher level cache memory, updating a cache line reuse counter datum configured to indicate a number of accesses to the cache line in the higher level cache memory during a reuse tracking period in response to receiving the cache access request, evicting the cache line from the higher level cache memory, determining a cache line locality classification for the evicted cache line based on the cache line reuse counter datum, inserting the evicted cache line into a last level cache memory, and updating a cache line locality classification datum for the inserted cache line.Type: ApplicationFiled: September 5, 2017Publication date: March 7, 2019Inventors: Farrukh Hijaz, George Patsilaras
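The reuse-counter flow above can be sketched minimally (threshold and class names are hypothetical): accesses bump a per-line counter, and on eviction the counter decides the locality classification inserted alongside the line in the last-level cache.

```python
# Hypothetical sketch: count accesses to a line in the higher-level
# cache, then classify its locality when it is evicted toward the
# last-level cache.
class ReuseTracker:
    def __init__(self, high_reuse_threshold=4):
        self.counters = {}   # line address -> accesses this tracking period
        self.threshold = high_reuse_threshold

    def access(self, line):
        self.counters[line] = self.counters.get(line, 0) + 1

    def evict(self, line):
        # Returns the locality classification recorded with the line when
        # it is inserted into the last-level cache.
        count = self.counters.pop(line, 0)
        return "high-reuse" if count >= self.threshold else "low-reuse"
```

The last-level cache could then prefer "high-reuse" lines when selecting victims.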
-
Publication number: 20190073306Abstract: In one embodiment, a method is operable in an over-provisioned storage device comprising a cache region and a main storage region. The method includes compressing incoming data, generating a compression parameter for the compressed data, and storing at least a portion of the compressed data in chunks in the main storage region of the storage device. The method also includes predicting when to store other chunks of the compressed data in the cache region based on the compression parameter.Type: ApplicationFiled: September 1, 2017Publication date: March 7, 2019Inventor: Andrew M. Kowles
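One plausible reading of the "compression parameter" is a compression ratio; a sketch under that assumption (the threshold and policy are illustrative, not the claimed prediction method):

```python
# Hypothetical sketch: compress incoming data, derive a compression
# ratio as the "compression parameter", and use it to predict whether a
# chunk is worth holding in the (scarce) cache region.
import zlib

def place_chunk(data, cache_ratio_threshold=0.5):
    compressed = zlib.compress(data)
    ratio = len(compressed) / len(data)   # lower ratio = more compressible
    # Highly compressible chunks occupy little cache space, so prefer
    # keeping them in the cache region.
    region = "cache" if ratio < cache_ratio_threshold else "main"
    return region, compressed, ratio
```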
-
Publication number: 20190073307Abstract: Requests to access specific ones of a plurality of stored objects are processed by multiple access nodes. A separate access history is maintained for each access node. Each access history identifies stored objects most recently accessed through the specific node. A separate predicted access future is maintained for each stored object. A predicted access future associated with a specific stored object can be in the form of a listing of stored objects statistically predicted to be those most likely to be accessed within a given temporal proximity after the specific stored object is accessed. Each predicted access future is determined based on inversion of maintained access histories. Responsive to receiving an access request for a specific stored object, the predicted future associated with the requested object is read, and a specific number of additional stored objects identified in the associated predicted future are pre-fetched from slower to faster storage.Type: ApplicationFiled: September 6, 2017Publication date: March 7, 2019Inventors: Pieter Audenaert, Arne Vansteenkiste, Wim Michel Marcel De Wispelaere
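The history-inversion step can be sketched as building a successor map from per-node access histories (window size and top-k are hypothetical parameters): each object's "predicted future" is the list of objects most often observed shortly after it.

```python
# Hypothetical sketch: invert per-node access histories into a successor
# map, so each object carries a list of the objects most often seen
# shortly after it -- its "predicted access future" -- to drive prefetch.
from collections import Counter, defaultdict

def build_futures(histories, window=2, top_k=3):
    successors = defaultdict(Counter)
    for history in histories:                  # one ordered history per access node
        for i, obj in enumerate(history):
            for follower in history[i + 1:i + 1 + window]:
                successors[obj][follower] += 1
    # Keep only the top_k most likely followers per object.
    return {obj: [o for o, _ in c.most_common(top_k)]
            for obj, c in successors.items()}
```

On a request for object X, a prefetcher would then move `futures[X]` from slower to faster storage.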
-
Publication number: 20190073308Abstract: A method of prefetching attribute data from storage for a graphics processing pipeline comprising a cache, and at least one buffer to which data is prefetched from the storage and from which data is made available for storing in the cache. The method comprises retrieving first attribute data from the storage, the first attribute data representative of a first attribute of a first vertex of a plurality of vertices of at least one graphics primitive, identifying the first vertex, and, in response to the identifying, performing a prefetch process. The prefetch process comprises prefetching second attribute data from the storage, the second attribute data representative of a second attribute of the first vertex, the second attribute being different from the first attribute, and storing the second attribute data in a buffer of the at least one buffer.Type: ApplicationFiled: August 31, 2018Publication date: March 7, 2019Inventor: Simon Alex CHARLES
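A toy model of the vertex-attribute prefetch (data layout and names are hypothetical): once a vertex's first attribute is demand-fetched, the vertex's other attributes are staged into a buffer ahead of the cache requesting them.

```python
# Hypothetical sketch: after the demand fetch of one attribute of a
# vertex, prefetch that vertex's remaining attributes into a staging
# buffer so they are available before the cache asks for them.
def prefetch_attributes(storage, vertex, requested_attr, buffer):
    """storage: dict (vertex, attribute) -> data; buffer: staging dict."""
    first = storage[(vertex, requested_attr)]   # demand fetch
    for (v, attr), data in storage.items():
        if v == vertex and attr != requested_attr:
            buffer[(v, attr)] = data            # prefetch sibling attributes
    return first
```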
-
Publication number: 20190073309Abstract: Modifying prefetch request processing. A prefetch request is received by a local computer from a remote computer. The local computer responds to a determination that execution of the prefetch request is predicted to cause an address conflict during an execution of a transaction of the local processor by comparing a priority of the prefetch request with a priority of the transaction. Based on a result of the comparison, the local computer modifies program instructions that govern execution of the program instructions included in the prefetch request to include program instruction to perform one or both of: (i) a quiesce of the prefetch request prior to execution of the prefetch request, and (ii) a delay in execution of the prefetch request for a predetermined delay period.Type: ApplicationFiled: October 30, 2018Publication date: March 7, 2019Inventors: Michael Karl Gschwind, Valentina Salapura, Chung-Lung K. Shum
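The decision logic can be sketched as a small policy function (priority scale and delay value are hypothetical): a conflicting prefetch is either quiesced or delayed depending on how its priority compares with the local transaction's.

```python
# Hypothetical sketch: when a remote prefetch is predicted to conflict
# with a local transaction, compare priorities and either quiesce the
# prefetch or delay it, instead of letting it abort the transaction.
def handle_prefetch(prefetch_priority, txn_priority, conflicts,
                    delay_period=100):
    if not conflicts:
        return ("execute", 0)          # no predicted address conflict
    if prefetch_priority <= txn_priority:
        return ("quiesce", 0)          # drop the lower-priority prefetch
    return ("delay", delay_period)     # defer it past the transaction
```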
-
Publication number: 20190073310Abstract: Provided are a computer program product, system, and method for managing access requests from a host to tracks in storage. A cursor is set to point to a track in a range of tracks established for sequential accesses. Cache resources are accessed for the cache for tracks in the range of tracks in advance of processing access requests to the range of tracks. Indication is received of a subset of tracks in the range of tracks for subsequent access transactions and a determination is made whether the cursor points to a track in the subset of tracks. The cursor is set to point to a track in the subset of tracks and cache resources are accessed for tracks in the subset of tracks in anticipation of access transactions to tracks in the subset of tracks.Type: ApplicationFiled: September 6, 2017Publication date: March 7, 2019Inventors: Ronald E. Bretschneider, Susan K. Candelaria, Beth A. Peterson, Dale F. Riedy, Peter G. Sutton, Harry M. Yudenfriend
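A minimal model of the cursor mechanism (depth and data structures are hypothetical): the cursor walks ahead of the access stream pre-acquiring cache resources, and is repointed when the host narrows the range to a subset.

```python
# Hypothetical sketch: a cursor walks a range of tracks, pre-acquiring
# cache resources ahead of the access stream; when the host indicates a
# subset of tracks, the cursor is repointed into that subset.
class TrackCursor:
    def __init__(self, start, end, prefetch_depth=4):
        self.cursor = start
        self.end = end
        self.depth = prefetch_depth
        self.prepared = set()      # tracks whose cache resources are held

    def prepare_ahead(self):
        stop = min(self.cursor + self.depth, self.end + 1)
        for track in range(self.cursor, stop):
            self.prepared.add(track)   # acquire cache resources for track

    def narrow_to(self, subset):
        if self.cursor not in subset:
            self.cursor = min(subset)  # repoint the cursor into the subset
        self.prepare_ahead()
```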
-
Publication number: 20190073311Abstract: In one embodiment, a safe data commit process manages the allocation of task control blocks (TCBs) as a function of the type of task control block (TCB) to be allocated for destaging and as a function of the identity of the RAID storage rank to which the data is being destaged. For example, the allocation of background TCBs is prioritized over the allocation of foreground TCBs for destage operations. In addition, the number of background TCBs allocated to any one RAID storage rank is limited. Once the limit of background TCBs for a particular RAID storage rank is reached, the distributed safe data commit logic switches to allocating foreground TCBs. Further, the number of foreground TCBs allocated to any one RAID storage rank is also limited. Other features and aspects may be realized, depending upon the particular application.Type: ApplicationFiled: September 5, 2017Publication date: March 7, 2019Inventors: Kyler A. Anderson, Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta
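The TCB allocation policy can be sketched with a small allocator (limits are made-up numbers): background TCBs are preferred, capped per rank, with a capped foreground fallback.

```python
# Hypothetical sketch: prefer background TCBs for destage, but cap the
# number of background TCBs per RAID rank and fall back to foreground
# TCBs (also capped per rank) once that limit is reached.
class TcbAllocator:
    def __init__(self, bg_limit=40, fg_limit=10):
        self.bg_limit, self.fg_limit = bg_limit, fg_limit
        self.bg_used = {}   # rank -> background TCBs in flight
        self.fg_used = {}   # rank -> foreground TCBs in flight

    def allocate(self, rank):
        if self.bg_used.get(rank, 0) < self.bg_limit:
            self.bg_used[rank] = self.bg_used.get(rank, 0) + 1
            return "background"
        if self.fg_used.get(rank, 0) < self.fg_limit:
            self.fg_used[rank] = self.fg_used.get(rank, 0) + 1
            return "foreground"
        return None          # rank saturated: caller must wait
```

Per-rank caps keep a single slow rank from monopolizing the TCB pool during a safe data commit scan.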
-
Publication number: 20190073312Abstract: A processing system includes a cache, a host memory, a CPU and a hardware accelerator. The CPU accesses the cache and the host memory and generates at least one instruction. The hardware accelerator operates in a non-temporal access mode or a temporal access mode according to the access behavior of the instruction. The hardware accelerator accesses the host memory through an accelerator interface when the hardware accelerator operates in the non-temporal access mode, and accesses the cache through the accelerator interface when the hardware accelerator operates in the temporal access mode.Type: ApplicationFiled: October 30, 2017Publication date: March 7, 2019Inventors: Di HU, Zongpu QI, Wei ZHAO, Jin YU, Lei MENG
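The mode-based routing can be reduced to a toy model (an illustration only, not the claimed interface): temporal accesses go through the cache, non-temporal (streaming) accesses go straight to host memory.

```python
# Hypothetical sketch: the accelerator interface routes each access to
# the cache or to host memory depending on whether the instruction's
# access behavior is temporal (reuse expected) or non-temporal
# (streaming data that would only pollute the cache).
class AcceleratorInterface:
    def __init__(self):
        self.cache = {}
        self.host_memory = {}

    def _target(self, mode):
        return self.cache if mode == "temporal" else self.host_memory

    def write(self, mode, addr, value):
        self._target(mode)[addr] = value

    def read(self, mode, addr):
        return self._target(mode).get(addr)
```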
-
Publication number: 20190073313Abstract: The presently disclosed subject matter includes various inventive aspects, which are directed to direct read access of a host computer device to a shared storage space in a data storage system, as well as control of the direct read of the host computer device by a control computer device in the data storage system.Type: ApplicationFiled: September 5, 2018Publication date: March 7, 2019Applicant: Kaminario Technologies Ltd.Inventors: Eyal Gordon, Ilan Steinberg, Eli Malul, Shahar Salzman, Gilad Hitron, Eran Mann
-
Publication number: 20190073314Abstract: Methods, systems, and apparatus for determining whether an access bit is set for each page table entry of a page table based on a scan of the page table with at least one page table walker, the access bit indicating whether a page associated with the page table entry was accessed in a last scan period; incrementing a count for each page in response to determining that the access bit is set for the page table entry associated with the page; resetting the access bit after determining whether the access bit is set for each page table entry; receiving a request to access, from a main memory, a first page of data; initiating a page fault based on determining that the first page of data is not stored in the main memory; and servicing the page fault with a DMA engine.Type: ApplicationFiled: November 7, 2018Publication date: March 7, 2019Inventors: Joel Dylan Coburn, Albert Borchers, Christopher Lyle Johnson, Robert S. Sprinkle
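The scan loop at the heart of this abstract can be sketched directly (the page-table representation is a simplification): each pass counts set access bits and clears them so the next period starts fresh.

```python
# Hypothetical sketch of the page table walker's scan loop: read each
# entry's access bit, bump the page's counter when it is set, and clear
# the bit so the next scan period starts fresh.
def scan_page_table(page_table, counts):
    """page_table: dict page -> {'accessed': bool}; counts: dict page -> int."""
    for page, entry in page_table.items():
        if entry["accessed"]:
            counts[page] = counts.get(page, 0) + 1
            entry["accessed"] = False      # reset for the next scan period
    return counts
```

The accumulated counts approximate per-page hotness; a cold page missing from main memory would then be faulted in, with a DMA engine servicing the transfer.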
-
Publication number: 20190073315Abstract: A translation lookaside buffer (TLB) management method and a multi-core processor are provided. The method includes: receiving, by a first core, a first address translation request; querying a TLB of the first core based on the first address translation request; determining that a first target TLB entry corresponding to the first address translation request is missing in the TLB of the first core; obtaining the first target TLB entry; determining that entry storage in the TLB of the first core is full; determining a second core from cores in an idle state in the multi-core processor; replacing a first entry in the TLB of the first core with the first target TLB entry; and storing the first entry in a TLB of the second core. This reduces the TLB miss rate and accelerates program execution.Type: ApplicationFiled: November 2, 2018Publication date: March 7, 2019Inventors: Lei FANG, Weiguang CAI, Xiongli GU
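The spill mechanism can be modeled in a few lines (the victim policy and data structures are simplifications): instead of discarding the evicted entry, the busy core parks it in an idle core's TLB, where a second-chance lookup can still find it without a page-table walk.

```python
# Hypothetical sketch: when the first core's TLB is full, the victim
# entry is parked in an idle core's TLB instead of being discarded, so
# it can still be found without a page-table walk.
class SpillTlb:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}            # virtual page -> physical frame

    def insert(self, vpage, frame, idle_tlb=None):
        if len(self.entries) >= self.capacity and vpage not in self.entries:
            victim = next(iter(self.entries))      # oldest entry as victim
            if idle_tlb is not None:
                idle_tlb.entries[victim] = self.entries[victim]
            del self.entries[victim]
        self.entries[vpage] = frame

    def lookup(self, vpage, idle_tlb=None):
        if vpage in self.entries:
            return self.entries[vpage]
        if idle_tlb is not None:               # second-chance lookup
            return idle_tlb.entries.get(vpage)
        return None
```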
-
Publication number: 20190073316Abstract: Data is dynamically shared from a first process to a second process by creating a shared memory segment, obtaining a file descriptor referencing the shared memory segment, and mapping the shared memory segment in an address space of a first process. The file descriptor is sent to a second process. Responsive to receiving the file descriptor, the shared memory segment is mapped in an address space of the second process. Via the shared memory segment, data from the first process is shared to the second process.Type: ApplicationFiled: September 5, 2017Publication date: March 7, 2019Inventors: Igor Sysoev, Valentin Bartenev, Nikolay Shadrin, Maxim Romanov
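Python's standard library offers a close analogue of this mechanism; in the sketch below (hypothetical function, not the patented code) a named shared memory segment stands in for the descriptor-referenced segment, with two handles simulating the two processes:

```python
# Hypothetical sketch using Python's standard library: a named shared
# memory segment plays the role of the segment referenced by the file
# descriptor; attaching by name stands in for passing the descriptor to
# the second process.
from multiprocessing import shared_memory

def share_data(payload):
    seg = shared_memory.SharedMemory(create=True, size=len(payload))
    try:
        seg.buf[:len(payload)] = payload           # first process writes
        # The second process maps the same segment into its address space.
        peer = shared_memory.SharedMemory(name=seg.name)
        received = bytes(peer.buf[:len(payload)])  # second process reads
        peer.close()
        return received
    finally:
        seg.close()
        seg.unlink()       # release the segment once both sides are done
```

In a real two-process setup on Linux, the descriptor from `memfd_create` would be sent over a Unix domain socket (`SCM_RIGHTS`) and `mmap`ed by the receiver.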