Patents Issued on April 30, 2020
-
Publication number: 20200133818Abstract: An information processor includes an operation history obtaining unit configured to obtain operation histories created by user operations at a terminal device; a function identifying unit configured to, based on the obtained operation histories, identify a function performed by the user operations as an operation target function; an operation extracting unit configured to, based on information about the operation target function identified by the function identifying unit, extract predetermined operation histories from the obtained operation histories; an index calculating unit configured to calculate an index which indicates a level of efficiency of the operations for the operation histories extracted by the operation extracting unit; an operation selecting unit configured to, based on the index, select the operation histories having a predetermined efficiency; and an output controller configured to output guide information based on the operation histories selected by the operation selecting unit.Type: ApplicationFiled: October 28, 2019Publication date: April 30, 2020Inventors: Kenji Katami, Ichiro Shishido, Reiichi Mannen
-
Publication number: 20200133819Abstract: A command map GUI that illustrates command usage patterns of a first entity and/or a comparison between the first entity and a second entity. A server receives and stores command usage data from a plurality of clients, each client executing a software application that enables a set of commands. A client displays the GUI based on command usage data received from the server. The GUI displays a circle chart comprising a plurality of segments representing various command categories, each segment including a wedge that represents the amount of usage of the corresponding command category. The GUI also displays a plurality of data points, each data point representing a command, wherein the distance from the center of the circle chart represents the amount of usage of the corresponding command. The GUI may display a certification and/or an unused command recommended for a user based on command usage patterns of the user.Type: ApplicationFiled: March 25, 2019Publication date: April 30, 2020Inventors: Tovi GROSSMAN, Alexandra R. BERGIN, Benjamin LAFRENIERE, Michael STURTZ, Jaime A. PERKINS, Adam L. MENTER, Howard R. SWEARER, George FITZMAURICE, Justin Frank MATEJKA, Ji In SHIN, William SABRAM, Michael L. MCMANUS
-
Publication number: 20200133820Abstract: A machine learning module is trained by receiving inputs comprising attributes of a computing environment, where the attributes affect a likelihood of failure in the computing environment. In response to an event occurring in the computing environment, a risk score that indicates a predicted likelihood of failure in the computing environment is generated via forward propagation through a plurality of layers of the machine learning module. A margin of error is calculated based on comparing the generated risk score to an expected risk score, where the expected risk score indicates an expected likelihood of failure in the computing environment corresponding to the event. An adjustment is made of weights of links that interconnect nodes of the plurality of layers via back propagation to reduce the margin of error, to improve the predicted likelihood of failure in the computing environment.Type: ApplicationFiled: October 26, 2018Publication date: April 30, 2020Inventors: James E. Olson, Micah Robison, Matthew G. Borlick, Lokesh M. Gupta, Richard P. Oubre, JR., Usman Ahmed, Richard H. Hopkins
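The training loop this abstract describes (forward propagation to a risk score, a margin of error against an expected score, back propagation over link weights) can be sketched as follows. This is a minimal illustration, not the patented implementation: the layer sizes, sigmoid activation, and learning rate are all assumptions.

```python
from math import exp
import random

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def forward(attrs, w_hidden, w_out):
    # Forward propagation through a hidden layer to a single risk score.
    hidden = [sigmoid(sum(a * w for a, w in zip(attrs, row))) for row in w_hidden]
    score = sigmoid(sum(h * w for h, w in zip(hidden, w_out)))
    return score, hidden

def train_step(attrs, expected_score, w_hidden, w_out, lr=0.5):
    score, hidden = forward(attrs, w_hidden, w_out)
    margin = expected_score - score              # margin of error
    d_out = margin * score * (1.0 - score)       # output-layer gradient
    for j, h in enumerate(hidden):               # back propagation over links
        d_hid = d_out * w_out[j] * h * (1.0 - h)
        w_out[j] += lr * d_out * h
        for i, a in enumerate(attrs):
            w_hidden[j][i] += lr * d_hid * a
    return abs(margin)
```

Repeating `train_step` on (environment attributes, expected risk score) pairs drives the predicted score toward the expected one, shrinking the margin of error as the abstract describes.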
-
Publication number: 20200133821Abstract: Techniques and structures to facilitate management of log event messages, including transmitting one or more messages, each having a unique identifier (ID), to a computing device, generating a comparison checksum for each of the one or more messages, wherein each comparison checksum is associated with a unique ID corresponding to a message from which the comparison checksum was generated, performing an encryption on each comparison checksum and associated unique ID to generate encryption data and transmitting the encryption data to the computing device.Type: ApplicationFiled: October 31, 2018Publication date: April 30, 2020Inventors: PRASHANT AGRAWAL, MAYANK SINGHI, ADRIAN HAINS, DIPTI PATIL, SUBRAMANYA SURESH, AJAY THARGAN, SRINI KANDALA
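The per-message checksum step can be sketched with standard-library primitives. The data shapes are assumptions, and since the abstract does not name an encryption scheme, an HMAC over the checksum and its unique ID stands in for the encryption step here.

```python
import hashlib
import hmac

def comparison_checksum(payload: bytes) -> str:
    # One comparison checksum per log event message.
    return hashlib.sha256(payload).hexdigest()

def protect_checksums(messages: dict, key: bytes) -> dict:
    # messages maps unique ID -> payload bytes; each checksum is bound
    # to its unique ID before the (stand-in) encryption step.
    protected = {}
    for uid, payload in messages.items():
        checksum = comparison_checksum(payload)
        mac = hmac.new(key, (uid + checksum).encode(), hashlib.sha256).hexdigest()
        protected[uid] = mac
    return protected
```

A receiving device holding the key can recompute the checksum for a message it received and verify it against the transmitted value for the matching unique ID.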
-
Publication number: 20200133822Abstract: A system and method for analyzing big data activities are disclosed. According to one embodiment, a system comprises a distributed file system for the entities and applications, wherein the applications include one or more of script applications, structured query language (SQL) applications, Not Only (NO) SQL applications, stream applications, search applications, and in-memory applications. The system further comprises a data processing platform that gathers, analyzes, and stores data relating to entities and applications. The data processing platform includes an application manager having one or more of a MapReduce Manager, a script applications manager, a structured query language (SQL) applications manager, a Not Only (NO) SQL applications manager, a stream applications manager, a search applications manager, and an in-memory applications manager. The application manager identifies if the applications are one or more of slow-running, failed, killed, unpredictable, and malfunctioning.Type: ApplicationFiled: December 30, 2019Publication date: April 30, 2020Inventors: Shivnath Babu, Kunal Agarwal
-
Publication number: 20200133823Abstract: According to at least one embodiment, a known defect record, which includes a unique identifier for the known defect and one or more log files associated with the known defect, is accessed. A log file of the known defect record is accessed, and an error message is identified. The error message indicates one or more classes involved in producing the error and one or more methods associated with each class. A pattern object is generated based on the error message and the classes and methods indicated in the error message, and a known defect graph is updated based on the pattern object. The known defect graph includes vertices representing pattern objects for error messages identified in known defects of the defect management system, and edges representing relationships between the pattern objects, with each vertex indicating one or more defect identifiers for known defect records associated with the pattern object.Type: ApplicationFiled: October 24, 2018Publication date: April 30, 2020Applicant: CA, Inc.Inventors: Preetdeep Kumar, Subhadip Ghosh
-
Publication number: 20200133824Abstract: A computing system may include a client computing device configured to execute a software application with an associated GUI. The GUI includes fields, and each field is to hold a text string. The computing system may include a GUI testing device in communication with the client computing device and configured to execute a testing framework for interacting with the software application to generate versions of the GUI, each of the versions being in a different language, and defining expected text strings in the fields. The GUI testing device may be configured to extract the fields from the versions of the GUI, perform OCR processing on the fields to generate actual text strings, and compare the actual text strings with the expected text strings.Type: ApplicationFiled: October 26, 2018Publication date: April 30, 2020Inventors: YANG WANG, WEI LUO
-
Publication number: 20200133825Abstract: A graphical programming debugging system and method are provided. The system, for example, may include, but is not limited to a graphical programming debugger comprising a processor and a memory, the processor configured to receive a selection of one of a plurality of interconnected nodes of an application, each of the plurality of interconnected nodes associated with a screen displayed to a user executing the application, execute the selected node, capture a log of the execution of the selected node, the log including a history of any variables used while the selected node was executed, a history of any processes which occurred during the execution of the selected node, and any errors encountered while executing the selected node, and simultaneously display the screen associated with the node as executed with the captured log.Type: ApplicationFiled: October 29, 2018Publication date: April 30, 2020Inventors: William Charles Eidson, Jason Teller
-
Publication number: 20200133826Abstract: The present invention generally relates to system test, and more specifically, to online system test. In an aspect, a computer-implemented method for online system test is provided. In this method, a test rule for testing the online system is obtained. A test result from a real user action of the online system is then retrieved based on the test rule, and a test report is generated at least based on the test result from the real user action.Type: ApplicationFiled: October 30, 2018Publication date: April 30, 2020Inventors: Lin Cai, Yi Ming Yin, Di Ling Chen, Li Wu, Xue Gang Ding
-
Publication number: 20200133827Abstract: Various embodiments are generally directed to techniques of creating or managing one or more virtual services using at least one application programming interface (API). At a plugin layer, a plugin integrator programmatically interfaces with and integrates one or more virtualization tools. The plugin integrator may be programmatically interfaced with the at least one API. At least one proxy agent may be used to run or consume the one or more virtual services. The at least one API and the at least one proxy agent may be implemented in an abstraction layer.Type: ApplicationFiled: September 9, 2019Publication date: April 30, 2020Applicant: Capital One Services, LLCInventors: Stephen TKAC, Agnibrata NAYAK, Pradosh SIVADOSS, Govind PANDE
-
Conducting Automated Software Testing Using Centralized Controller and Distributed Test Host Servers
Publication number: 20200133828Abstract: Aspects of the disclosure relate to conducting automated software testing using a centralized controller and one or more distributed test host servers. A computing platform may receive a test execution request. Subsequently, the computing platform may retrieve test specification details information and may identify one or more tests to execute. Then, the computing platform may generate one or more remote test execution commands directing a test host server farm to execute the one or more tests. In addition, generating the one or more remote test execution commands may include constructing one or more command line instructions to be executed by the test host server farm and inserting the one or more command line instructions into the one or more remote test execution commands. Thereafter, the computing platform may send the one or more remote test execution commands to the test host server farm.Type: ApplicationFiled: November 25, 2019Publication date: April 30, 2020Inventor: Gedaliah Friedenberg
-
Publication number: 20200133829Abstract: A method includes executing a web-based application within a first browser, executing and displaying a second browser inside of the web-based application, receiving, via the second browser, data indicative of one or more inputs including a browser session and recording and storing the one or more inputs on a computer readable medium.Type: ApplicationFiled: October 25, 2019Publication date: April 30, 2020Inventors: Pedro Abraham Nevado Zazo, Anand R. Sundaram, Sapna Natarajan
-
Publication number: 20200133830Abstract: In some examples, a server may retrieve and parse test results associated with testing a software package. The server may determine a weighted sum of a software feature index associated with a quality of the plurality of features, a defect index associated with the defects identified by the test cases, a test coverage index indicating a pass rate of the plurality of test cases, a release reliability index associated with results of executing regression test cases included in the test cases, and an operational quality index associated with resources and an environment associated with the software package. The server may use a machine learning algorithm, such as a time series forecasting algorithm, to forecast a release status of the software package. The server may determine, based on the release status, whether the software package is to progress from a current phase to a next phase of a development cycle.Type: ApplicationFiled: October 31, 2018Publication date: April 30, 2020Inventors: Rohan Sharma, Sathish Kumar Bikumala, Sibi Philip, Kiran Kumar Gadamshetty
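The weighted-sum step can be sketched with the five indices named in the abstract. The weights and the common 0-100 index scale are illustrative assumptions, not values from the patent; the resulting score over successive builds would form the time series fed to the forecasting algorithm.

```python
# Assumed weights for the five indices named in the abstract.
WEIGHTS = {
    "software_feature": 0.25,
    "defect": 0.25,
    "test_coverage": 0.20,
    "release_reliability": 0.15,
    "operational_quality": 0.15,
}

def release_score(indices: dict, weights: dict = WEIGHTS) -> float:
    # indices maps each index name to its value on a common scale.
    return sum(indices[name] * w for name, w in weights.items())
```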
-
Publication number: 20200133831Abstract: A system, apparatus and method for utilizing a transpose function to generate a two-dimensional array from three-dimensional input data. The use of the transpose function reduces redundant elements in the resultant two-dimensional array thereby increasing efficiency and decreasing power consumption.Type: ApplicationFiled: October 24, 2018Publication date: April 30, 2020Applicant: Arm LimitedInventor: Paul Nicholas WHATMOUGH
-
Publication number: 20200133832Abstract: The present invention provides a data processing method for a memory and a related data processor for performing the method. A page of data may be divided into multiple groups. In each group, the number of “1”s and the number of “0”s are determined, so as to determine whether to reverse or keep the bit data in the group. The encoding scheme may make the bit value “0” more concentrated on the middle states of the state distribution than the bit value “1”. The data processor thereby reverses the bit data in a group if the number of “1”s is greater than the number of “0”s in the group, and keeps the bit data in a group if the number of “1”s is less than the number of “0”s in the group. As a result, the occurrence probability of the states at two sides becomes lower and the occurrence probability of the middle states becomes higher. This improves data retention of the memory.Type: ApplicationFiled: November 21, 2018Publication date: April 30, 2020Inventors: Qikang Xu, Xiang Fu, Zongliang Huo
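The group-wise encoding this abstract describes can be sketched directly: count the 1s and 0s in each group and invert the group when 1s dominate. The per-group inversion flag used here to make decoding possible is an assumption; the abstract does not say how the reverse/keep decision is recorded.

```python
def encode_page(bits, group_size):
    # Split a page of bits into groups; invert a group when it has more
    # 1s than 0s, so 1s become less frequent overall.
    encoded, flags = [], []
    for i in range(0, len(bits), group_size):
        group = bits[i:i + group_size]
        ones = sum(group)
        if ones > len(group) - ones:          # more 1s than 0s: reverse
            encoded.extend(1 - b for b in group)
            flags.append(1)
        else:                                 # otherwise keep as-is
            encoded.extend(group)
            flags.append(0)
    return encoded, flags

def decode_page(encoded, flags, group_size):
    bits = []
    for g, flag in enumerate(flags):
        group = encoded[g * group_size:(g + 1) * group_size]
        bits.extend((1 - b) if flag else b for b in group)
    return bits
```

After encoding, no group carries a majority of 1s, which is the property the abstract relies on to concentrate cells in the middle states.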
-
Publication number: 20200133833Abstract: Exemplary methods, apparatuses, and systems include receiving an instruction to atomically write data to a memory component. A plurality of write commands for the first data are generated, including an end of atom indicator. The first plurality of write commands are sent to the memory component while accumulating a plurality of translation table updates corresponding to the write commands. One or more translation tables are updated with the plurality of translation table updates in response to determining that the final write command has been successfully sent to the memory component.Type: ApplicationFiled: October 25, 2018Publication date: April 30, 2020Inventors: Nathan Jared Hughes, Karl D. Schuh, Tom Geukens
-
Publication number: 20200133834Abstract: A total estimated occupancy value of a first data block of a plurality of data blocks is determined. To determine the total estimated occupancy value of the first data block, a total block power-on-time (POT) value of the first data block is determined. Then, a scaling factor is applied to the total block POT value to determine the total estimated occupancy value of the first data block. Whether the total estimated occupancy value of the first data block satisfies a threshold criterion is determined. Responsive to determining that the total estimated occupancy value of the first data block satisfies the threshold criterion, data stored at the first data block is relocated to a second data block of the plurality of data blocks.Type: ApplicationFiled: October 30, 2018Publication date: April 30, 2020Inventors: Kishore Kumar MUCHHERLA, Renato C. PADILLA, Sampath K. RATNAM, Saeed SHARIFI TEHRANI, Peter FEELEY, Kevin R. BRANDT
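The decision chain in this abstract is short enough to sketch end to end. The linear scaling and the "greater than or equal" form of the threshold criterion are assumptions; the patent only says a scaling factor is applied and a threshold criterion is checked.

```python
def estimated_occupancy(total_block_pot_hours: float, scaling_factor: float) -> float:
    # Scale the block's total power-on time into an estimated occupancy value.
    return total_block_pot_hours * scaling_factor

def should_relocate(total_block_pot_hours: float, scaling_factor: float,
                    threshold: float) -> bool:
    # Relocate the block's data when the estimate satisfies the criterion.
    return estimated_occupancy(total_block_pot_hours, scaling_factor) >= threshold
```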
-
Publication number: 20200133835Abstract: A data storing method, a memory controlling circuit unit and a memory storage device are provided. The method includes: receiving a first data; determining whether a wear degree value of a rewritable non-volatile memory module is less than a threshold; if the wear degree value of the rewritable non-volatile memory module is less than the threshold, storing the first data into the rewritable non-volatile memory module by using a first mode; and if the wear degree value of the rewritable non-volatile memory module is not less than the threshold, storing the first data into the rewritable non-volatile memory module by using a second mode. A reliability of the first data stored by using the first mode is higher than a reliability of the first data stored by using the second mode.Type: ApplicationFiled: December 5, 2018Publication date: April 30, 2020Applicant: PHISON ELECTRONICS CORP.Inventor: Chih-Kang Yeh
-
Publication number: 20200133836Abstract: A storage system includes: a memory for caching of data according to input and output to a storage device; and a CPU connected to the memory. The memory includes: a DRAM high in access performance; and an SCM identical in a unit of access to the DRAM, the SCM being lower in access performance than the DRAM. The CPU determines whether to perform caching to the DRAM or the SCM, based on the data according to input and output to the storage device, and caches the data into the DRAM or the SCM, based on the determination.Type: ApplicationFiled: August 8, 2019Publication date: April 30, 2020Inventors: Nagamasa MIZUSHIMA, Sadahiro SUGIMOTO, Kentaro SHIMADA
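The tier decision the CPU makes can be sketched with assumed criteria: the abstract only says the choice between DRAM and SCM is based on the data, so the hot/cold access-count heuristic and the room check below are illustrative, not the patented policy.

```python
def choose_cache_tier(access_count: int, hot_threshold: int,
                      dram_has_room: bool) -> str:
    # Frequently accessed (hot) data goes to the faster DRAM;
    # everything else is cached in the slower but larger SCM.
    if access_count >= hot_threshold and dram_has_room:
        return "DRAM"
    return "SCM"
```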
-
Publication number: 20200133837Abstract: A memory controller for preventing the storage area of a flash memory being reduced is provided. The memory controller controlling access to a flash memory based on a command provided from a host system, the memory controller includes: a processor, a RAM (random access memory), and a mask ROM (read only memory) in which a first firmware is written, wherein the memory controller is configured to: perform a search for a second firmware written in the flash memory based on the first firmware at a start-up time; and write a third firmware provided from the host system in the RAM when the second firmware is not found through the search and perform an initialization based on the third firmware written in the RAM.Type: ApplicationFiled: September 3, 2019Publication date: April 30, 2020Applicant: TDK CORPORATIONInventors: Naoki MUKAIDA, Kenichi TAKUBO
-
Publication number: 20200133838Abstract: A data storage device includes a memory device and a controller. The memory device includes a plurality of planes, wherein each of the planes includes two or more memory blocks. The controller is configured to control an operation of the memory device. The controller is further configured to generate a first super block as a super block including two or more way-interleavable memory blocks among the plurality of memory blocks of the plurality of planes, determine whether each of the memory blocks included in the first super block is a bad block, retrieve a spare block for replacing a first memory block determined as a bad block, in the plurality of planes; and generate a second replacing super block as a super block in which the first memory block is replaced with a second memory block positioned in a plane which does not have the first memory block, when all spare blocks of a plane including the first memory block are used.Type: ApplicationFiled: September 30, 2019Publication date: April 30, 2020Inventor: Jeen PARK
-
Publication number: 20200133839Abstract: A method and apparatus to reduce read latency and improve read quality of service (Read QoS) for non-volatile memory, such as NAND array in a NAND device. For read commands that collide with an in-progress program array operation targeting the same program locations in a NAND array, the in-progress program is suspended and the controller allows the read command to read from the internal NAND buffer instead of waiting for the in-progress program to complete. For read commands queued during an in-progress program that is processing pre-reads in preparation for a program array operation, pre-read bypass allows the reads to be serviced between the pre-reads and before the program's array operation starts. In this manner, read commands can be serviced without suspending the in-progress program. Allowing internal NAND buffer reads and enabling pre-read bypass reduces read latency and improves Read QoS.Type: ApplicationFiled: December 24, 2019Publication date: April 30, 2020Inventors: Sagar S. SIDHPURA, Yogesh B. WAKCHAURE, Aliasgar S. MADRASWALA, Fei XUE
-
Publication number: 20200133840Abstract: A method for adjusting over provisioning space and a flash device are provided. The flash device includes user storage space for storing user data and over provisioning space for garbage collection within the flash device. The flash device receives an operation instruction, and then performs an operation on user data stored in the user storage space based on the operation instruction. Further, the flash device identifies a changed size of user data after performing the operation. Based on the changed size of data, a target adjustment parameter is identified. Further, the flash device adjusts the capacity of the over provisioning space according to the target adjustment parameter. According to the method, the over provisioning ratio can be dynamically adjusted, thereby, a life of the flash device can be prolonged.Type: ApplicationFiled: December 25, 2019Publication date: April 30, 2020Applicant: HUAWEI TECHNOLOGIES CO.,LTD.Inventors: Jianhua Zhou, Po Zhang
-
Publication number: 20200133841Abstract: A method of scalable garbage collection includes receiving an indication to perform a garbage collection process on a section of a database of a storage array comprising a plurality of storage devices. The method further includes determining, by a processing device of a storage array controller of the storage array, whether the section corresponds to any check-pointed data set. The method further includes, if the section does not correspond to any check-pointed data set: performing the garbage collection process on the section. The method further includes, if the section does correspond to a check-pointed data set: performing, by the processing device, a scalable garbage collection process on the section.Type: ApplicationFiled: October 25, 2018Publication date: April 30, 2020Inventors: Brandon Davis, Wentian Cui, Matthew Paul Fay
-
Publication number: 20200133842Abstract: Techniques for efficiently purging non-active blocks in an NVM region of an NVM device using virtblocks are provided. In one set of embodiments, a host system can maintain, in the NVM device, a pointer entry (i.e., virtblock entry) for each allocated data block of the NVM region, where page table entries of the NVM region that refer to the allocated data block include pointers to the pointer entry, and where the pointer entry includes a pointer to the allocated data block. The host system can further determine that a subset of the allocated data blocks of the NVM region are non-active blocks and can purge the non-active blocks from the NVM device to a mass storage device, where the purging comprises updating the pointer entry for each non-active block to point to a storage location of the non-active block on the mass storage device.Type: ApplicationFiled: October 29, 2018Publication date: April 30, 2020Inventors: Xavier Deguillard, Ishan Banerjee, Julien Freche, Kiran Tati, Preeti Agarwal, Rajesh Venkatasubramanian
-
Publication number: 20200133843Abstract: An amount of valid data for each data block of multiple data blocks stored at a first memory is determined. An operation to write valid data of a particular data block from the first memory to a second memory is performed based on the amount of valid data for each data block. A determination is made that a threshold condition associated with when valid data of the data blocks was written to the first memory has been satisfied. In response to determining that the threshold condition has been satisfied, the operation to write valid data of the data blocks from the first memory to the second memory is performed based on when the valid data was written to the first memory.Type: ApplicationFiled: October 30, 2018Publication date: April 30, 2020Inventors: Kishore Kumar Muchherla, Peter Sean Feeley, Sampath K. Ratnam, Ashutosh Malshe, Christopher S. Hale
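The two selection criteria in this abstract can be sketched with assumed data shapes: blocks are normally folded in order of least valid data, but once the age of the oldest valid data crosses the threshold condition, selection switches to write time.

```python
def pick_source_block(blocks: dict, now: float, age_threshold: float):
    # blocks maps block_id -> {"valid_pages": int, "written_at": timestamp}
    oldest = min(info["written_at"] for info in blocks.values())
    if now - oldest >= age_threshold:
        # Threshold satisfied: write based on when valid data was written.
        return min(blocks, key=lambda b: blocks[b]["written_at"])
    # Default: write based on the amount of valid data per block.
    return min(blocks, key=lambda b: blocks[b]["valid_pages"])
```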
-
Publication number: 20200133844Abstract: A data merge method for a rewritable non-volatile memory module including a plurality of physical units is provided according to an exemplary embodiment of the disclosure. The method includes: obtaining a first logical distance value between a first physical unit and a second physical unit among the physical units, and the first logical distance value reflects a logical dispersion degree between at least one first logical unit mapped by the first physical unit and at least one second logical unit mapped by the second physical unit; and performing a data merge operation according to the first logical distance value, so as to copy valid data from a source node to a recycling node.Type: ApplicationFiled: January 4, 2019Publication date: April 30, 2020Applicant: PHISON ELECTRONICS CORP.Inventors: Chien-Wen Chen, Ting-Wei Lin
-
Publication number: 20200133845Abstract: Garbage collection is performed according to an estimated number of valid pages. A storage device estimates a valid page count at a future time based on a valid page count at each of past time steps and a present time step using a neural network model and selects a victim block that undergoes the garbage collection from memory blocks based on an estimated valid page count. A memory block having a lowest estimated valid page count or having an estimated valid page count having a maintaining tendency is selected as the victim block or a memory block having the estimated valid page count having a decreasing tendency is excluded from selecting the victim block.Type: ApplicationFiled: June 19, 2019Publication date: April 30, 2020Inventors: BYEONG-HUI KIM, JUNG-MIN SEO, HYEON-GYU MIN, SEUNG-JUN YANG, JOO-YOUNG HWANG
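The victim-selection rule in this abstract can be sketched assuming the neural network's output is already available as a predicted valid page count (VPC) per block: blocks whose count is predicted to keep decreasing are excluded, since their pages will invalidate on their own, and among the rest the block with the lowest predicted count is chosen.

```python
def select_victim(predictions: dict):
    # predictions maps block_id -> (current_vpc, estimated_future_vpc).
    # Exclude blocks with a decreasing tendency; keep maintaining/increasing.
    candidates = {b: pred for b, (cur, pred) in predictions.items() if pred >= cur}
    if not candidates:
        return None            # every block is still self-invalidating
    # Pick the block with the lowest estimated valid page count.
    return min(candidates, key=candidates.get)
```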
-
Publication number: 20200133846Abstract: Techniques for efficiently purging non-active blocks in an NVM region of an NVM device using pointer elimination are provided. In one set of embodiments, a host system can, for each level 1 (L1) page table entry of each snapshot of the NVM region, determine whether a data block of the NVM region that is pointed to by the L1 page table entry is a non-active block, and if the data block is a non-active block, remove a pointer to the data block in the L1 page table entry and reduce a reference count parameter associated with the data block by 1. If the reference count parameter has reached zero at this point, the host system can purge the data block from the NVM device to a mass storage device.Type: ApplicationFiled: October 29, 2018Publication date: April 30, 2020Inventors: Kiran Tati, Xavier Deguillard, Ishan Banerjee, Julien Freche, Preeti Agarwal, Rajesh Venkatasubramanian
-
Publication number: 20200133847Abstract: Techniques for efficiently purging non-active blocks in an NVM region of an NVM device while preserving large pages are provided. In one set of embodiments, a host system can receive a write request with respect to a data block of the NVM region, where the data block is referred to by a snapshot of the NVM region and was originally allocated as part of a large page. The host system can further allocate a new data block in the NVM region, copy contents of the data block to the new data block, and update the data block with write data associated with the write request. The host system can then update a level 1 (L1) page table entry of the NVM region's running point to point to the original data block.Type: ApplicationFiled: October 29, 2018Publication date: April 30, 2020Inventors: Rajesh Venkatasubramanian, Ishan Banerjee, Julien Freche, Kiran Tati, Preeti Agarwal, Xavier Deguillard
-
Publication number: 20200133848Abstract: Techniques involve managing a storage space. In response to receiving an allocation request for allocating a storage space, a storage space size and a slice size are obtained. A first storage system and a second storage system are selected from multiple storage systems; the first storage system and the second storage system include a first storage device group and a second storage device group respectively, and the first storage device group does not overlap the second storage device group. A first slice group and a second slice group are obtained from the first storage system and the second storage system respectively, on the basis of the size of the storage space and the size of the slice. A user storage system is built at least on the basis of the first slice group and the second slice group, so as to respond to the allocation request.Type: ApplicationFiled: September 24, 2019Publication date: April 30, 2020Inventors: Xinlei Xu, Xiongcheng Li, Lifeng Yang, Geng Han, Jian Gao
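The slice-group sizing step can be sketched under one simplifying assumption: the requested space is split evenly between the two selected storage systems. The abstract does not specify the split, only that both groups are derived from the storage space size and the slice size.

```python
import math

def plan_slice_groups(space_size: int, slice_size: int):
    # Total slices needed to cover the requested storage space.
    total_slices = math.ceil(space_size / slice_size)
    # Assumed even split between the two non-overlapping storage systems.
    first_group = math.ceil(total_slices / 2)
    return first_group, total_slices - first_group
```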
-
Publication number: 20200133849Abstract: Systems and methods for storing and validating namespace metadata are disclosed. An exemplary system includes a memory component and a processing device identifying a namespace identifier associated with a first write instruction from a host process and combining the namespace identifier with a namespace offset included in the first write instruction to form a logical address. The logical address is translated into a physical address and included in a second write instruction along with data to be written and the physical address. The second write instruction is sent to a memory component causing the data to be written at the physical address, and the logical address to be stored as metadata associated with the data. The logical address may be translated using a namespace table and one or more translation tables, where the namespace table has entries including a starting location and size of a namespace in a translation table.Type: ApplicationFiled: October 30, 2018Publication date: April 30, 2020Inventors: Byron D. Harris, Karl D. Schuh
-
Publication number: 20200133850Abstract: A method for accessing two memory locations in two different memory arrays based on a single address string includes determining three sets of address bits. A first set of address bits are common to the addresses of wordlines that correspond to the memory locations in the two memory arrays. A second set of address bits concatenated with the first set of address bits provides the address of the wordline that corresponds to a first memory location in a first memory array. A third set of address bits concatenated with the first set of address bits provides the address of the wordline that corresponds to a second memory location in a second memory array. The method includes populating the single address string with the three sets of address bits and may be performed by an address data processing unit.Type: ApplicationFiled: October 30, 2018Publication date: April 30, 2020Inventors: Yew Keong Chong, Sriram Thyagarajan, Andy Wangkun Chen
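The single-address-string layout this abstract describes (common bits plus one selector set per array, each wordline address formed by concatenating the common bits with one set) can be sketched with bit operations. The field order and bit widths below are assumptions for illustration.

```python
def pack(common: int, sel_a: int, sel_b: int, w_a: int, w_b: int) -> int:
    # Populate one address string as [ common | sel_a | sel_b ].
    return (common << (w_a + w_b)) | (sel_a << w_b) | sel_b

def wordline_addresses(addr: int, w_a: int, w_b: int):
    # Recover the three bit sets from the single address string.
    common = addr >> (w_a + w_b)
    sel_a = (addr >> w_b) & ((1 << w_a) - 1)
    sel_b = addr & ((1 << w_b) - 1)
    # Concatenate the common bits with each set to get both wordline
    # addresses, one per memory array.
    return (common << w_a) | sel_a, (common << w_b) | sel_b
```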
-
Publication number: 20200133851Abstract: Circuitry comprises memory circuitry providing a plurality of memory locations; location selection circuitry to select a set of one or more of the memory locations by which to access a data item according to a mapping relationship between an attribute of the data item and the set of one or more memory locations; the location selection circuitry being configured to initiate an allocation operation for a data item when that data item is to be newly stored by the memory circuitry and the selected set of one or more of the memory locations are already occupied by one or more other data items, the allocation operation comprising an operation to replace at least a subset of the one or more other data items from the set of one or more memory locations by the newly stored data item; and detector circuitry to detect a data access conflict in which a group of two or more data items having different respective attributes are mapped by the mapping relationship to the same set of one or more memory locations; the locatType: ApplicationFiled: October 16, 2019Publication date: April 30, 2020Inventors: Houdhaifa BOUZGUARROU, Guillaume BOLBENES, Eddy LAPEYRE, Luc ORION
-
Publication number: 20200133852
Abstract: Techniques involve managing a storage system. A target storage device is selected from multiple storage devices associated with the storage system in response to respective wear degrees of the multiple storage devices being higher than a first predetermined threshold. Regarding multiple extents in the multiple storage devices, respective access loads of the multiple extents are determined. A source extent is selected from multiple extents residing on storage devices other than the target storage device, on the basis of the respective access loads of the multiple extents. Data in the source extent are moved to the target storage device. Various storage devices in a resource pool are thereby prevented from reaching end of life at around the same time, so that data loss may be avoided.
Type: Application
Filed: October 16, 2019
Publication date: April 30, 2020
Inventors: Shuo Lv, Ming Zhang, Huan Chen
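A minimal sketch of the selection step follows. The abstract does not say which device becomes the target or whether hot or cold extents move; this sketch assumes the most-worn device absorbs the hottest extent from elsewhere, staggering end-of-life across the pool. All names and the 0.8 threshold are illustrative.

```python
def plan_migration(wear, loads, threshold=0.8):
    """Pick a target device and a source extent to migrate.

    wear:  {device: wear degree in [0, 1]}
    loads: {(device, extent): access load of that extent}
    Returns (target_device, source_extent_key) or None.
    """
    # Trigger only when every device has worn past the threshold
    if not all(w > threshold for w in wear.values()):
        return None
    # Assumption: the most-worn device is the target
    target = max(wear, key=wear.get)
    # Source extent must reside on a device other than the target
    candidates = {k: v for k, v in loads.items() if k[0] != target}
    if not candidates:
        return None
    # Assumption: the highest-load extent is moved
    source = max(candidates, key=candidates.get)
    return target, source
```

With the opposite policy (coldest extent to least-worn device) only the `max`/`min` choices change; the triggering and source-filtering logic is the same.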
-
Publication number: 20200133853
Abstract: A method of dynamic memory management in a user equipment (UE) is provided. The method includes receiving, by the UE, transport block size (TBS) information, from a base station (BS), associated with a data packet to be transmitted by the BS to the UE; identifying, by the UE, a plurality of empty bins and a size of each of the plurality of empty bins in a memory of the UE; detecting, by the UE, a presence of one or more empty bins, among the plurality of empty bins in the memory, with a size of each of the one or more empty bins greater than the TBS of the data packet; and allocating, by the UE, a smallest size empty bin, with a size greater than the TBS of the data packet, among the one or more empty bins to the data packet.
Type: Application
Filed: October 28, 2019
Publication date: April 30, 2020
Inventors: Satya Kumar VANKAYALA, Dheeraj KUMAR, Satya Venkata Uma Kishore GODAVARTI, Ashok Kumar Reddy CHAWA, Vishal Devinder KARIRA, Nithin SRINIVASAN
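The allocation step is a best-fit search: among bins strictly larger than the TBS, take the smallest. A minimal sketch (dictionary-based bookkeeping is an assumption for illustration):

```python
def allocate_bin(empty_bins, tbs):
    """Best-fit allocation: the smallest empty bin strictly larger than the
    transport block size (TBS) receives the data packet.

    empty_bins: {bin_id: size in bytes}; returns the chosen bin_id or None.
    """
    fitting = [(size, bin_id) for bin_id, size in empty_bins.items()
               if size > tbs]
    # min over (size, bin_id) tuples picks the smallest fitting bin
    return min(fitting)[1] if fitting else None
```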
-
Publication number: 20200133854
Abstract: Provided is a neural network system for processing data transferred from an external memory. The neural network system includes an internal memory storing input data transferred from the external memory, an operator performing a multidimensional matrix operation by using the input data of the internal memory and transferring a result of the multidimensional matrix operation as output data to the internal memory, and a data moving controller controlling an exchange of the input data or the output data between the external memory and the internal memory. The data moving controller reorders a dimension order with respect to an access address of the external memory to generate an access address of the internal memory, for the multidimensional matrix operation.
Type: Application
Filed: September 11, 2019
Publication date: April 30, 2020
Applicant: Electronics and Telecommunications Research Institute
Inventors: Jeongmin YANG, Young-Su KWON
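The address reordering can be sketched as unraveling a flat external-memory address into a multidimensional index, then re-raveling it in a permuted dimension order for the internal memory. Row-major layouts on both sides are an assumption for illustration.

```python
def reorder_address(addr, dims, perm):
    """Map a flat external-memory address to an internal-memory address
    whose dimension order has been permuted (e.g. a NCHW-to-NHWC style
    reordering for a matrix operation).

    dims: dimension sizes of the external (row-major) layout.
    perm: new dimension order, as indices into dims.
    """
    # Unravel: recover the multidimensional index from the flat address
    idx = []
    for size in reversed(dims):
        idx.append(addr % size)
        addr //= size
    idx.reverse()
    # Ravel again, walking the dimensions in the permuted order
    out, stride = 0, 1
    for p in reversed(perm):
        out += idx[p] * stride
        stride *= dims[p]
    return out
```

For a (2, 3, 4) tensor, external address 6 is element (0, 1, 2); ordering the internal layout as (dim 2, dim 0, dim 1) places it at 2·6 + 0·3 + 1 = 13.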
-
Publication number: 20200133855
Abstract: A method and apparatus of accessing queue data is provided. According to the method, a double-layer circular queue is constructed, where the double-layer circular queue includes one or more inner-layer circular queues established based on an array, and the one or more inner-layer circular queues constitute an outer-layer circular queue of the double-layer circular queue based on a linked list. A management pointer of the outer-layer circular queue is set. Data accessing is performed on the inner-layer circular queues by using the management pointer.
Type: Application
Filed: October 31, 2019
Publication date: April 30, 2020
Applicant: Hangzhou DPtech Technologies Co., Ltd.
Inventor: Tian TAN
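A minimal sketch of the structure: array-backed inner rings, and an outer circle over them walked by management pointers. Using a Python list of rings (indexed circularly) in place of a linked list, and separate read/write pointers, are simplifying assumptions; items are assumed non-None.

```python
class InnerRing:
    """Array-backed circular queue (one inner layer)."""
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = self.tail = self.count = 0

    def push(self, item):
        if self.count == len(self.buf):
            return False                    # this ring is full
        self.buf[self.tail] = item
        self.tail = (self.tail + 1) % len(self.buf)
        self.count += 1
        return True

    def pop(self):
        if self.count == 0:
            return None                     # this ring is empty
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        self.count -= 1
        return item

class DoubleLayerQueue:
    """Outer circular queue of inner rings, accessed via management pointers."""
    def __init__(self, rings=2, capacity=4):
        self.rings = [InnerRing(capacity) for _ in range(rings)]
        self.w = self.r = 0                 # outer-layer management pointers

    def enqueue(self, item):
        for _ in range(len(self.rings)):    # walk the outer circle once
            if self.rings[self.w].push(item):
                return True
            self.w = (self.w + 1) % len(self.rings)
        return False                        # every inner ring is full

    def dequeue(self):
        for _ in range(len(self.rings)):
            item = self.rings[self.r].pop()
            if item is not None:
                return item
            self.r = (self.r + 1) % len(self.rings)
        return None                         # every inner ring is empty
```

Because each pointer only advances to the next ring when the current one is exhausted, FIFO order is preserved across the layers.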
-
Publication number: 20200133856
Abstract: In response to an end of track access for a track in a cache, a determination is made as to whether the track has modified data and whether the track has one or more holes. In response to determining that the track has modified data and the track has one or more holes, an input on a plurality of attributes of a computing environment in which the track is processed is provided to a machine learning module to produce an output value. A determination is made as to whether the output value indicates whether one or more holes are to be filled in the track. In response to determining that the output value indicates that one or more holes are to be filled in the track, the track is staged to the cache from a storage drive.
Type: Application
Filed: October 25, 2018
Publication date: April 30, 2020
Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew G. Borlick
-
Publication number: 20200133857
Abstract: Techniques for processing data may include: determining a first amount denoting an amount of write pending data stored in cache to be redirected through storage class memory (SCM) when destaging cached write pending data from the cache; performing first processing that destages write pending data from the cache, the first processing including: selecting, in accordance with the first amount, a first portion of write pending data that is destaged from the cache and stored in the SCM and a second portion of write pending data that is destaged directly from the cache and stored on one or more physical storage devices providing back-end non-volatile physical storage; and subsequent to storing the first portion of write pending data to the SCM, transferring the first portion of write pending data from the SCM to the one or more physical storage devices providing back-end non-volatile physical storage.
Type: Application
Filed: October 30, 2018
Publication date: April 30, 2020
Applicant: EMC IP Holding Company LLC
Inventors: Benjamin A. Randolph, Owen Martin
-
Publication number: 20200133858
Abstract: Techniques are provided for storing data. Such techniques involve: in response to receiving a request for writing data to a file, determining a type of the file; determining a compression property of the data based on the determined type; determining a storage area corresponding to the data to be written in a storage device for storing the file; and storing the data into the storage area, based on the compression property. Such techniques can determine, based on a type of a file involved in an input/output (I/O) request, a compression property of the data for the I/O request, and further determine whether a data compression operation is to be performed prior to storing the data. Accordingly, the techniques can avoid an unnecessary compression operation while reducing the storage space required for storing data as much as possible, thereby improving performance of the system.
Type: Application
Filed: September 12, 2019
Publication date: April 30, 2020
Inventors: Leihu Zhang, Chen Gong, Ming Zhang, Hao Fang
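The type-driven decision can be sketched in a few lines. The set of compressible types and the use of `zlib` as the compressor are stand-in assumptions; the point is that already-compressed media types skip the wasted CPU work.

```python
import zlib

# Assumed mapping from file type to compression property; file types whose
# content is typically already compressed (jpg, mp4, ...) are omitted.
COMPRESSIBLE_TYPES = {"txt", "log", "csv", "xml"}

def store(data: bytes, file_type: str) -> bytes:
    """Return the bytes to write: compressed only when the file type
    suggests compression will actually shrink the data."""
    if file_type in COMPRESSIBLE_TYPES:
        return zlib.compress(data)
    return data  # e.g. jpg/mp4: skip the unnecessary compression operation
```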
-
Publication number: 20200133859
Abstract: A dataflow execution environment is provided with dynamic placement of cache operations and action execution ordering.
Type: Application
Filed: October 30, 2018
Publication date: April 30, 2020
Inventors: Vinícius Michel Gottin, Fábio André Machado Porto, Yania Molina Souto
-
Publication number: 20200133860
Abstract: Embodiments of the present disclosure provide a method, device and computer program product for validating a cache file. In an embodiment, a reference cache file associated with the backed-up data is divided into a plurality of reference segments. Reference check information is generated for the respective reference segments of the plurality of reference segments, and the generated reference check information is stored. In response to initiation of a backup job, the stored reference check information is used to validate the cache file.
Type: Application
Filed: February 25, 2019
Publication date: April 30, 2020
Inventors: Wei Chen, Yi Wang, Qingxiao Zheng
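The segment-then-checksum flow can be sketched directly. SHA-256 as the check function and a tiny segment size are assumptions for illustration; the abstract only requires per-segment check information.

```python
import hashlib

SEGMENT_SIZE = 4  # bytes; deliberately tiny for illustration

def reference_checks(cache_file: bytes):
    """Divide the reference cache file into segments and generate
    check information (here: a SHA-256 digest) for each segment."""
    return [hashlib.sha256(cache_file[i:i + SEGMENT_SIZE]).hexdigest()
            for i in range(0, len(cache_file), SEGMENT_SIZE)]

def validate(cache_file: bytes, stored_checks) -> bool:
    """At backup time, recompute per-segment checks and compare them
    with the stored reference check information."""
    return reference_checks(cache_file) == stored_checks
```

Per-segment checks also localize corruption: a mismatch identifies which segment of the cache file is stale, not merely that something changed.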
-
Publication number: 20200133861
Abstract: Methods and systems for managing synonyms in VIPT caches are disclosed. A method includes tracking lines of a copied cache using a directory, examining a specified bit of a virtual address that is associated with a load request and determining its status, and making an entry in one of a plurality of parts of the directory based on the status of the specified bit of the virtual address that is examined. The method further includes updating one of, and invalidating the other of, a cache line that is associated with the virtual address that is stored in a first index of the copied cache, and a cache line that is associated with a synonym of the virtual address that is stored at a second index of the copied cache, upon receiving a request to update a physical address associated with the virtual address.Type: Application
Filed: December 23, 2019
Publication date: April 30, 2020
Inventor: Karthikeyan Avudaiyappan
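A toy model of the directory logic: one specified virtual-address bit steers each load's entry into one of two directory parts, and a later store to the physical address keeps one synonym and invalidates the other. The bit position (12, the first bit above a 4 KiB page offset), the dictionary-based directory, and the `keep_part` argument are all illustrative assumptions.

```python
class SynonymDirectory:
    """Track copied-cache lines in two directory parts, selected by the
    status of one specified bit of the virtual address."""
    def __init__(self, bit=12):  # assumption: first bit above the page offset
        self.bit = bit
        self.parts = ({}, {})    # two directory parts, keyed by physical addr

    def record_load(self, vaddr, paddr, index):
        """On a load: examine the specified VA bit and file an entry in the
        corresponding directory part."""
        part = (vaddr >> self.bit) & 1
        self.parts[part][paddr] = (vaddr, index)

    def update_physical(self, paddr, keep_part):
        """On a store to paddr: keep (update) the line in one part and
        invalidate its synonym in the other part."""
        other = 1 - keep_part
        self.parts[other].pop(paddr, None)       # invalidate the synonym
        return self.parts[keep_part].get(paddr)  # the line that stays valid
```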
-
Publication number: 20200133862
Abstract: Various aspects are described herein. In some aspects, the disclosure provides techniques for accessing tag information in a memory line. The techniques include determining an operation to perform on at least one memory line of a memory. The techniques further include performing the operation by accessing only a portion of the at least one memory line, wherein the portion of the at least one memory line comprises one or more flag bits that are independently accessible from the remaining bits of the at least one memory line.
Type: Application
Filed: October 29, 2018
Publication date: April 30, 2020
Inventors: Bharat Kumar RANGARAJAN, Chulmin JUNG, Rakesh MISRA
-
Publication number: 20200133863
Abstract: An apparatus is provided that includes cache circuitry that comprises a plurality of cache lines. The cache circuitry treats one or more of the cache lines as trace lines, each having correlated addresses and each being tagged by a trigger address. Prefetch circuitry causes data at the correlated addresses stored in the trace lines to be prefetched.
Type: Application
Filed: October 31, 2018
Publication date: April 30, 2020
Inventors: Joseph Michael PUSDESRIS, Miles Robert DOOLEY, Michael FILIPPO
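A toy model of the idea: a dictionary plays the role of the trace lines, mapping a trigger address (the tag) to the correlated addresses that followed it, which are prefetched on the next hit. The trace length of four addresses and the training policy are assumptions; real trace lines live inside the cache array itself.

```python
class TraceLinePrefetcher:
    """Cache lines repurposed as trace lines: tag = trigger address,
    body = the correlated addresses observed after it."""
    TRACE_LEN = 4  # assumed: one trigger plus three correlated addresses

    def __init__(self):
        self.trace_lines = {}  # trigger address -> correlated addresses
        self.history = []      # recent access stream, for training

    def access(self, addr):
        """Record an access; return the addresses to prefetch (empty if
        addr does not tag any trace line)."""
        prefetched = list(self.trace_lines.get(addr, ()))
        self.history.append(addr)
        if len(self.history) >= self.TRACE_LEN:
            # Train: the address TRACE_LEN-1 steps back becomes a trigger
            trigger = self.history[-self.TRACE_LEN]
            self.trace_lines[trigger] = self.history[-self.TRACE_LEN + 1:]
        return prefetched
```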
-
Publication number: 20200133864
Abstract: Techniques manage an input/output (I/O) operation. Such techniques involve estimating a first storage area in a storage device to be accessed by an upcoming random I/O operation, first data being stored in the estimated first storage area. Such techniques further involve, before the random I/O operation is executed, pre-fetching the first data from the first storage area into a cache associated with the storage device. Such techniques enable implementation of cache pre-fetch for random I/O operations, thereby effectively improving the performance of data access.
Type: Application
Filed: September 12, 2019
Publication date: April 30, 2020
Inventors: Lifeng Yang, Ruiyong Jia, Xinlei Xu, Yousheng Liu, Jian Gao
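One way to make this concrete: estimate the storage area the next random I/O will hit from recent history, then read that area into the cache ahead of time. The frequency-based heuristic, the 1 KiB region size, and the `read_block` callback are all illustrative assumptions, not the patent's estimation method.

```python
from collections import Counter

REGION = 1024  # assumed size of a candidate storage area, in bytes

def estimate_next_region(history):
    """Estimate the storage area of the upcoming random I/O: here, the
    region most frequently touched in recent history (one simple heuristic)."""
    if not history:
        return None
    regions = Counter(addr // REGION for addr in history)
    return regions.most_common(1)[0][0] * REGION

def prefetch_into(cache, read_block, history):
    """Before the random I/O executes, pre-fetch the estimated area into
    the cache. read_block(start, length) stands in for the device read."""
    start = estimate_next_region(history)
    if start is not None and start not in cache:
        cache[start] = read_block(start, REGION)
```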
-
Publication number: 20200133865
Abstract: An interconnect system and method of operating the system are disclosed. A master device has access to a cache, and a slave device has an associated data storage device for long-term storage of data items. The master device can initiate a cache maintenance operation in the interconnect system with respect to a data item temporarily stored in the cache, causing action to be taken by the slave device with respect to storage of the data item in the data storage device. For long latency operations the master device can issue a separated cache maintenance request specifying the data item and the slave device. In response, an intermediate device signals an acknowledgment response indicating that it has taken on responsibility for completion of the cache maintenance operation and issues the separated cache maintenance request to the slave device.
Type: Application
Filed: October 29, 2018
Publication date: April 30, 2020
Inventors: Phanindra Kumar MANNAVA, Bruce James MATHEWSON, Jamshed JALAL, Paul Gilbert MEYER
-
Publication number: 20200133866
Abstract: The disclosure herein provides techniques for designing cache compression algorithms that control how data in caches are compressed. The techniques generate a custom "byte select algorithm" by applying repeated transforms to an initial compression algorithm until a set of suitability criteria is met. The suitability criteria include that the "cost" is below a threshold and that a metadata constraint is met. The "cost" is the number of blocks that can be compressed by an algorithm as compared with the "ideal" algorithm. The metadata constraint is the number of bits required for metadata.
Type: Application
Filed: October 31, 2018
Publication date: April 30, 2020
Applicant: Advanced Micro Devices, Inc.
Inventors: Shomit N. Das, Matthew Tomei, David A. Wood
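The search loop itself can be sketched generically: keep transforming the candidate algorithm until both suitability criteria (cost threshold and metadata constraint) hold. The greedy choice of transform, the step bound, and the toy cost/metadata functions below are assumptions; the patent does not specify the search strategy.

```python
def search_algorithm(initial, transforms, cost, metadata_bits,
                     cost_limit, meta_limit, max_steps=100):
    """Repeatedly apply transforms to an initial compression algorithm
    until the suitability criteria are met; return the result, or None
    if the step budget is exhausted."""
    algo = initial
    for _ in range(max_steps):
        if cost(algo) <= cost_limit and metadata_bits(algo) <= meta_limit:
            return algo  # both suitability criteria are satisfied
        # Greedy assumption: take the transform that most reduces cost
        algo = min((t(algo) for t in transforms), key=cost)
    return None
```

In the toy test below the "algorithm" is just an integer whose absolute value stands in for cost, which is enough to exercise the loop's termination logic.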
-
Publication number: 20200133867
Abstract: Techniques provide cache service in a storage system. Such techniques involve a storage cell pool, a cache and an underlying storage system. The storage cell pool includes multiple storage cells, a storage cell among the multiple storage cells being mapped to a physical address in the underlying storage system via an address mapping of the storage system. Specifically, an access request for target data at a virtual address in the storage cell pool is received, and the type of the access request is determined. The access request is served with the cache on the basis of the determined type, where the cache is used to cache data according to a format of a storage cell in the storage cell pool. The cache directly stores data in various storage cells in the pool that is visible to users, so that response speed for the access request may be increased.
Type: Application
Filed: September 17, 2019
Publication date: April 30, 2020
Inventors: Ruiyong Jia, Xinlei Xu, Lifeng Yang, Yousheng Liu, Changyu Feng