Patents Issued on December 28, 2017
-
Publication number: 20170371748
Abstract: A system is provided for creating selective snapshots of a database that is stored as one or more segments, wherein a segment comprises one or more memory pages. The system includes a memory storage comprising instructions and one or more processors in communication with the memory. The one or more processors execute the instructions to determine whether a snapshot process is configured to access a selected segment of the one or more segments, assign a positive mapping status to a segment determined to be accessed by the snapshot process and a negative mapping status to a non-accessed segment, and create a snapshot by forking the snapshot process with an address space that comprises a subset of the one or more segments.
Type: Application
Filed: September 11, 2017
Publication date: December 28, 2017
Applicant: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Eliezer LEVY, Antonios ILIOPOULOS, Shay GOIKHMAN, Israel GOLD
-
Publication number: 20170371749
Abstract: An example apparatus includes a virtual drive controller module to receive a read request from a guest virtual machine (VM) during a restore operation. The apparatus also includes a virtual drive manager module to determine whether data associated with the read request is stored in a storage volume of the guest VM using a sector mapping lookup table during the restore operation. In response to a determination that the data is absent in the storage volume, the virtual drive manager module is to copy the data from a backup image associated with the guest VM to the storage volume, update the sector mapping lookup table to indicate that the data is stored in the storage volume, and transmit the data to the guest VM.
Type: Application
Filed: January 30, 2015
Publication date: December 28, 2017
Inventors: TARAM SIERRA DEVITT-CAROLAN, RICK BRAMLEY, RENY PAUL, ANIL KUMAR S R
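A minimal sketch of the copy-on-read flow this abstract describes, assuming dictionary-backed stand-ins for the backup image and storage volume; the class and attribute names (VirtualDriveManager, sector_map) are illustrative, not the patent's.

```python
# Illustrative restore-on-read sketch; names and data structures are hypothetical.
class VirtualDriveManager:
    def __init__(self, backup_image, storage_volume):
        self.backup_image = backup_image      # dict: sector -> bytes
        self.storage_volume = storage_volume  # dict: sector -> bytes
        self.sector_map = set()               # sectors already restored to the volume

    def read(self, sector):
        """Serve a guest VM read request during a restore operation."""
        if sector not in self.sector_map:
            # Data is absent from the storage volume: copy it from the backup
            # image, then record the sector in the mapping lookup table.
            self.storage_volume[sector] = self.backup_image[sector]
            self.sector_map.add(sector)
        # Transmit the (now locally stored) data to the guest VM.
        return self.storage_volume[sector]

# Example: the guest reads sector 7 before it has been restored.
mgr = VirtualDriveManager(backup_image={7: b"guest-data"}, storage_volume={})
assert mgr.read(7) == b"guest-data"   # first read triggers the copy
assert 7 in mgr.sector_map            # lookup table now marks the sector as present
```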
-
Publication number: 20170371750
Abstract: According to at least one aspect, a database system is provided. The database system includes at least one processor configured to receive a restore request to restore a portion of a dataset to a previous state and, responsive to receipt of the restore request, identify at least one snapshot from a plurality of snapshots of at least some data in the dataset to read based on the restore request and write a portion of the data in the identified at least one snapshot to the dataset to restore the portion of the dataset to the previous state.
Type: Application
Filed: June 20, 2017
Publication date: December 28, 2017
Inventors: Eliot Horowitz, Rostislav Briskin, Daniel William Gottlieb
-
Publication number: 20170371751
Abstract: A database recovery and index rebuilding method involves reading data pages for a database to be recovered as recovery bases; retrieving all log records from stored post-backup updates and sorting the retrieved log records; as the data pages to be recovered are read, applying the sorted log records to their respective data pages; as the applying completes for individual data pages, extracting and sorting index keys from the individual data pages for which the applying is complete, until all index keys have been extracted from all individual data pages and sorted; on an individual recovered page basis, writing the recovered individual data pages into the database; and when all index keys have been extracted and sorted from all of the recovered individual data pages, rebuilding indexes of the database using the sorted index keys and writing the rebuilt indexes to the non-transitory storage.
Type: Application
Filed: September 15, 2017
Publication date: December 28, 2017
Inventors: Jeffrey Berger, William J. Franklin, Laura M. Kunioka-Weis, Thomas Majithia, Haakon P. Roberts
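A hedged sketch of the recovery flow just described, with invented data structures (pages as dicts of row-id to row, log records as dicts with page/row/value/seq fields); it is not the patent's implementation, only the apply-logs-then-rebuild-index ordering.

```python
# Sketch: apply sorted post-backup logs per page, extract keys as pages complete,
# then rebuild the index from the sorted keys. Structures are illustrative.
def recover_and_rebuild(pages, log_records, key_of):
    # Sort the retrieved log records so they can be applied per page, in order.
    sorted_logs = sorted(log_records, key=lambda r: (r["page"], r["seq"]))

    index_keys = []
    for page_id, page in pages.items():
        # As each data page is read, apply its log records.
        for rec in (r for r in sorted_logs if r["page"] == page_id):
            page[rec["row"]] = rec["value"]
        # As the applying completes for this page, extract its index keys.
        index_keys.extend(key_of(row) for row in page.values())
        # (A real system would write the recovered page back to the database here.)

    # When all keys are extracted, sort them and rebuild the index from them.
    return sorted(index_keys)

pages = {1: {"a": {"k": 5}}, 2: {"b": {"k": 2}}}
logs = [{"page": 1, "row": "c", "value": {"k": 9}, "seq": 0}]
print(recover_and_rebuild(pages, logs, key_of=lambda row: row["k"]))  # [2, 5, 9]
```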
-
Publication number: 20170371752
Abstract: A method, computer program product, and computer system for monitoring health of at least one storage device of a cache in a clustered system. A recovery journal may be maintained, wherein the recovery journal may identify whether one or more chunks of data stored in the cache have been dumped from the at least one storage device to persistent storage in the clustered system. A state of the at least one storage device may be determined based upon, at least in part, the health of the at least one storage device. A recovery action may be performed on the one or more chunks of data stored in the at least one storage device based upon, at least in part, the state of the at least one storage device.
Type: Application
Filed: March 2, 2017
Publication date: December 28, 2017
Inventors: MIKHAIL DANILOV, Andrey Fomin, Mikhail Malygin, Vladimir Prikhodko, Alexander Rakulenko, Maxim Trusov
-
Publication number: 20170371753
Abstract: Provided are a memory apparatus for applying fault repair based on a physical region and a virtual region and a control method thereof. That is, the fault repair is applied based on the physical region and the virtual region which use an information storage table of a virtual basic region using a hash function, thereby improving efficiency of the fault repair.
Type: Application
Filed: January 30, 2017
Publication date: December 28, 2017
Applicant: Korea University Research and Business Foundation
Inventors: Seon Wook Kim, Ho Kwon Kim, Jae Yung Jun, Kyu Hyun Choi
-
Publication number: 20170371754
Abstract: Described is a differential data bus system which maintains error free communication despite faults in one of the data bus lines.
Type: Application
Filed: December 20, 2015
Publication date: December 28, 2017
Inventor: Ofer Hofman
-
Publication number: 20170371755
Abstract: A non-volatile memory system includes a plurality of non-volatile data memory cells arranged into groups of data memory cells, a plurality of select devices connected to the groups of data memory cells, a selection line connected to the select devices, a plurality of data word lines connected to the data memory cells, and one or more control circuits connected to the selection line and the data word lines. The one or more control circuits are configured to determine whether the select devices are corrupted. If the select devices are corrupted, then the one or more control circuits repurpose one of the word lines (e.g., the first data word line closest to the select devices) to be another selection line, thus operating the memory cells connected to the repurposed word line as select devices.
Type: Application
Filed: June 23, 2016
Publication date: December 28, 2017
Applicant: SANDISK TECHNOLOGIES LLC
Inventors: Nian Niles Yang, Jiahui Yuan, Grishma Shah, Xinde Hu, Lanlan Gu, Bin Wu
-
Publication number: 20170371756
Abstract: Various examples described herein provide for monitoring a peripheral device by data imported from the peripheral device. The peripheral data may comprise a script associated with monitoring or managing the peripheral device, or descriptive data describing a set of monitor values on the peripheral device.
Type: Application
Filed: June 23, 2016
Publication date: December 28, 2017
Inventors: Thomas Hanson, Justin E. York, Kenneth C. Duisenberg
-
Publication number: 20170371757
Abstract: A system monitoring method and apparatus comprise: periodically collecting status indicator data of a monitored system to generate a status indicator data sequence; selecting predetermined pieces of status indicator data according to data collecting time in a reverse chronological order; determining a category from predetermined categories, the predetermined pieces of status indicator data belonging to the determined category; selecting, from the historical status indicator data, status indicator data belonging to the determined category and obtained in a collection period as characteristic data of the determined category; calculating a predicted value of a status indicator of the system at a predicting moment using the characteristic data; and determining whether the system is abnormal, based on a difference between the calculated predicted value and a true value of the status indicator of the system collected at the predicting moment.
Type: Application
Filed: September 29, 2016
Publication date: December 28, 2017
Applicant: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY, LTD.
Inventors: Dong Wang, Bo Wang, Beibei Miao, Yun Chen, Xuanyou Guo, Xianping Qu
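A rough sketch of the predict-and-compare step, under stated assumptions: the categorization (nearest historical mean) and the prediction (category mean) used here are simple stand-ins, not the method the patent actually claims.

```python
# Illustrative only: categorize recent indicator data, predict from that category's
# historical "characteristic data", and flag an anomaly on a large deviation.
from statistics import mean

def is_abnormal(recent, history_by_category, true_value, threshold):
    # Determine which predetermined category the recent data belong to: here,
    # the category whose historical mean is closest to the recent mean.
    category = min(history_by_category,
                   key=lambda c: abs(mean(history_by_category[c]) - mean(recent)))
    # Predict the indicator value at the predicting moment from that category.
    predicted = mean(history_by_category[category])
    # Abnormal if the prediction and the collected true value differ too much.
    return abs(predicted - true_value) > threshold

history = {"weekday": [100, 110, 105], "weekend": [40, 45, 42]}
print(is_abnormal(recent=[104, 108], history_by_category=history,
                  true_value=240, threshold=50))  # True: far from the weekday profile
```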
-
Publication number: 20170371758
Abstract: A method, performed by a computing device, includes (a) building a data structure that describes dependence relationships between components of a virtual appliance, the components comprising respective computational processes which may be invoked during booting, a dependence relationship indicating that one component must complete before a second component may be invoked, (b) identifying, with reference to the data structure and an essential set of components which were pre-defined to be essential to the virtual appliance, a set of components that must complete for booting to be considered finished, and, after identifying the required set of components, repeatedly (c) querying each required component for its respective completion status, (d) calculating an estimated completion percentage for booting the virtual appliance with reference to the respective completion statuses of each required component versus all required components, and (e) displaying an indication of the completion percentage to a user via a user interface.
Type: Application
Filed: July 25, 2017
Publication date: December 28, 2017
Inventors: Victoria Vladimirovna Cherkalova, Dmitry Vladimirovich Krivenok
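A small sketch of steps (a), (b), and (d) above, assuming the dependence data structure is a dict mapping each component to its prerequisites and per-component completion is a 0.0 to 1.0 fraction; all names are illustrative.

```python
# Sketch: derive the required component set from the essential set plus its
# transitive dependencies, then estimate the boot completion percentage.
def required_components(dependencies, essential):
    """Everything an essential component transitively depends on, plus itself."""
    required, stack = set(), list(essential)
    while stack:
        comp = stack.pop()
        if comp not in required:
            required.add(comp)
            stack.extend(dependencies.get(comp, ()))
    return required

def boot_progress(dependencies, essential, completion):
    required = required_components(dependencies, essential)
    # Estimated percentage: completed fraction summed over all required components.
    done = sum(completion.get(c, 0.0) for c in required)
    return 100.0 * done / len(required)

deps = {"web_ui": ["app_server"], "app_server": ["database"], "database": []}
status = {"database": 1.0, "app_server": 0.5, "web_ui": 0.0}
print(boot_progress(deps, essential=["web_ui"], completion=status))  # 50.0
```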
-
Publication number: 20170371759
Abstract: Aspects extend to methods, systems, and computer program products for intent-based interactions with cluster resources. One or more computer systems are joined in a computer system cluster to provide defined computing functionality (e.g., storage, compute, network, etc.) to an external system. In one aspect, a data collection intent facilitates collection and aggregation of data to form a health report for one or more components of the computer system cluster. In another aspect, a command intent facilitates implementing a command at one or more components of the computer system cluster. Services span machines of the computer system cluster to abstract lower level aspects of data collection and aggregation and command implementation for higher level aspects of data collection and aggregation and command implementation. Services can be integrated into an operating system to relieve users from having to have operating system knowledge.
Type: Application
Filed: June 24, 2016
Publication date: December 28, 2017
Inventors: Alexandre Joseph Francois Allaire, Noah Aaron Cedar Davidson, Alexander Say Go
-
Publication number: 20170371760
Abstract: Described are advanced communication computers that include a processor, at least one network adaptor connected to the processor, wherein the at least one network adaptor comprises a separate processor, at least one remote network connected to the at least one network adaptor, and at least one remote server connected to the at least one remote network. The processor is configured to identify an expected performance level of the at least one network adaptor, collect actual performance data from the at least one processor, and compare the actual performance data to the expected performance level to identify issues with signal condition, network traffic, interference, and other similar metrics.
Type: Application
Filed: June 21, 2017
Publication date: December 28, 2017
Inventor: Martin Kuster
-
Publication number: 20170371761
Abstract: Systems, apparatuses, and methods for performing real-time tracking of performance targets using dynamic compilation. A performance target is specified in a service level agreement. A dynamic compiler analyzes a software application executing in real-time and determines which high-level application metrics to track. The dynamic compiler then inserts instructions into the code to increment counters associated with the metrics. A power optimization unit then utilizes the counters to determine if the system is currently meeting the performance target. If the system is exceeding the performance target, then the power optimization unit reduces the power consumption of the system while still meeting the performance target.
Type: Application
Filed: June 24, 2016
Publication date: December 28, 2017
Inventors: Leonardo Piga, Brian J. Kocoloski, Wei Huang, Abhinandan Majumdar, Indrani Paul
-
Publication number: 20170371762
Abstract: Techniques provide a framework for dynamic globalization enablement for an application during software development. A globalization development operation information system (GDOIS) retrieves source code for the application, which is assigned to support specified globalization features. The GDOIS evaluates the source code for each of the plurality of specified globalization features. Upon determining that the source code does not include at least a first specified globalization feature, the GDOIS identifies an application programming interface (API) associated with the feature. The GDOIS inserts source code associated with the API into the source code for the application.
Type: Application
Filed: June 28, 2016
Publication date: December 28, 2017
Inventors: Syed HAIDERZAIDI, Su LIU, Boyi TZEN, Cheng XU
-
Publication number: 20170371763
Abstract: Techniques are disclosed for providing dynamic globalization enablement for developing an application during software development. A globalization development operation information system (GDOIS) retrieves source code for the application, which is assigned to support specified globalization features. The GDOIS evaluates the source code for each of the plurality of specified globalization features. Upon determining that the source code does not include at least a first specified globalization feature, the GDOIS identifies an application programming interface (API) associated with the feature. The GDOIS inserts source code associated with the API into the source code for the application.
Type: Application
Filed: June 28, 2016
Publication date: December 28, 2017
Inventors: Syed HAIDERZAIDI, Su LIU, Boyi TZEN, Cheng XU
-
Publication number: 20170371764
Abstract: Techniques provide a framework for dynamic globalization enablement for an application during software development. A globalization development operation information system (GDOIS) retrieves source code for the application, which is assigned to support specified globalization features. The GDOIS evaluates the source code for each of the plurality of specified globalization features. Upon determining that the source code does not include at least a first specified globalization feature, the GDOIS identifies an application programming interface (API) associated with the feature. The GDOIS inserts source code associated with the API into the source code for the application.
Type: Application
Filed: January 24, 2017
Publication date: December 28, 2017
Inventors: Syed HAIDERZAIDI, Su LIU, Boyi TZEN, Cheng XU
-
Publication number: 20170371765
Abstract: An automated end-to-end analysis of customer service requests is disclosed. A core dump is received, wherein the core dump corresponds to a customer service request regarding a crash of a computer system. The core dump is automatically analyzed with a processor to generate analysis results. A graphical representation for display on a graphic user interface of a computer is generated, wherein the graphical representation corresponds to the analysis results for the core dump.
Type: Application
Filed: January 25, 2017
Publication date: December 28, 2017
Applicant: VMware, Inc.
Inventors: Sowgandh Sunil GADI, Naveen Prakash RAO, Travis FINCH, Ayoob KHAN
-
Publication number: 20170371766
Abstract: An automated end-to-end analysis of customer service requests is disclosed. A core dump is received, wherein the core dump corresponds to a customer service request regarding a crash of a computer system. A processor automatically analyzes the core dump to determine if a pcpu lockup of the computer system is due to a software issue. Provided the pcpu lockup of the computer system is due to the software issue, the processor determines which thread is a culprit thread responsible for the pcpu lockup of the computer system.
Type: Application
Filed: January 25, 2017
Publication date: December 28, 2017
Applicant: VMware, Inc.
Inventors: Sowgandh Sunil GADI, Hariprakash GOVINDARAJALU
-
Publication number: 20170371767
Abstract: Embodiments of the present invention provide a method, computer program product, and system for debugging optimized code. The system includes a FAT binary, wherein the FAT binary comprises a non-optimized native code and an internal representation of a program's source code. An optimus program is configured to transform the internal representation of the program's source code into a fully optimized native code. The system also includes an enhanced loader, wherein the enhanced loader is configured to communicate with a debugger to determine a type of code to load.
Type: Application
Filed: September 15, 2017
Publication date: December 28, 2017
Inventors: Michael J. Moniz, Ali I. Sheikh, Diana P. Sutandie, Srivatsan Vijayakumar, Ying Di Zhang
-
Publication number: 20170371768
Abstract: A method for finding the root cause of errors and/or unexpected behavior of a monitored software application, the method comprising: providing a decision tree corresponding to an error and/or unexpected behavior of a software application, wherein the decision tree comprises multiple nodes, wherein the decision tree further comprises one or more leaf nodes, wherein the leaf nodes indicate at least one reason and one or more possible solutions for the error and/or unexpected behavior; scanning one or more log-files of a software application; determining, based on the decision tree and the scanned log files, which step has not been performed by the software application, wherein the non-performed step is indicative of an error and/or unexpected behavior of the software application; determining a leaf node based on the determined non-performed step; extracting information from the leaf node; and providing a reason and/or a solution of the error and/or unexpected behavior.
Type: Application
Filed: June 23, 2016
Publication date: December 28, 2017
Inventors: Chitra A. Iyer, Angelika Kozuch, Krzysztof Rudek, Vinod A. Valecha
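A minimal sketch of the flow the abstract describes, assuming a decision tree where each internal node names an expected log "step" and each leaf carries a reason plus possible solutions; the node layout and example log lines are invented for illustration.

```python
# Illustrative: walk a decision tree over scanned log text; a missing step routes
# to a leaf node whose reason and solutions are reported.
def diagnose(node, log_text):
    # Leaf nodes carry the reason and possible solutions to report.
    if "reason" in node:
        return node["reason"], node["solutions"]
    # Scan the log files: did the application perform this node's step?
    performed = node["step"] in log_text
    return diagnose(node["yes"] if performed else node["no"], log_text)

tree = {
    "step": "Connected to database",
    "yes": {"step": "Schema migrated",
            "yes": {"reason": "No known fault", "solutions": []},
            "no": {"reason": "Migration step skipped",
                   "solutions": ["Run the migration job manually"]}},
    "no": {"reason": "Database connection never established",
           "solutions": ["Check credentials", "Check the network path to the database"]},
}

log = "startup...\nConnected to database\n(no further output)"
print(diagnose(tree, log))
# ('Migration step skipped', ['Run the migration job manually'])
```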
-
Publication number: 20170371769
Abstract: A core includes a memory buffer and executes an instruction within a virtual machine. A processor tracer captures trace data and formats the trace data as trace data packets. An event-based sampler generates field data for a sampling record in response to occurrence of an event of a certain type as a result of execution of the instruction. The processor tracer, upon receipt of the field data: formats the field data into elements of the sampling record as a group of record packets; inserts the group of record packets between the trace data packets as a combined packet stream; and stores the combined packet stream in the memory buffer as a series of output pages. The core, when in guest profiling mode, executes a virtual machine monitor to map output pages of the memory buffer to host physical pages of main memory using multilevel page tables.
Type: Application
Filed: June 28, 2016
Publication date: December 28, 2017
Inventors: Matthew C. Merten, Beeman C. Strong, Michael W. Chynoweth, Grant G. Zhou, Andreas Kleen, Kimberly C. Weier, Angela D. Schmid, Stanislav Bratanov, Seth Abraham, Jason W. Brandt, Ahmad Yasin
-
Publication number: 20170371770
Abstract: A static analysis tool configured to determine a significance of static analysis results. The static analysis tool includes computer program code to perform a static analysis of a computer program and generate the static analysis results in response to the performance of the static analysis of the computer program. The program code can further analyze a description of a result item from the static analysis results and, based on the analysis of the description of the result item, assign to the result item information from an ontology scheme. The program code can further include code to determine a significance value for the result item in response to the assignment of the information from the ontology scheme and automatically perform an action associated with the result item based on one or more of the information assigned from the ontology scheme or the significance value.
Type: Application
Filed: June 27, 2016
Publication date: December 28, 2017
Inventors: Fionnuala G. Gunter, Christy L. Norman Perez, Michael T. Strosaker, George C. Wilson
-
Publication number: 20170371771
Abstract: Embodiments include methods, adaptive testing systems, and computer program products for performing adaptive testing using one or more system resources of a computer system dynamically determined from a platform on which a test program is executing. Aspects include: the test program sending a resource query based on certain criteria to a resource query module to inquire about one or more available system resources of the computer system, the resource query module using certain operating system commands of the computer system to determine appropriate system resources available for use on the computer system, the computer system returning the appropriate system resources determined on the computer system to the resource query module of the test program, the test program deciding on one or more system resources that best meet a need of the test program, and the test program performing the adaptive testing on the computer system based on the one or more system resources decided.
Type: Application
Filed: June 28, 2016
Publication date: December 28, 2017
Inventors: Heather M. Bosko, Deborah A. Furman, Bradley M. Messer, Anthony T. Sofia
-
Publication number: 20170371772
Abstract: Systems and methods are provided that determine which tests may be executed in parallel during regression testing of an analytics application. Multiple tests that test functions of the analytics application are accessed from a test automation suite. For each test, data sources that provide data to the analytics application during the test are identified. The tests are aggregated into temporary groups according to the identified data sources. The test groups are generated from the temporary groups such that each test group comprises tests that are associated with non-overlapping data sources. The regression testing is performed on the application by executing the test groups in parallel.
Type: Application
Filed: June 27, 2016
Publication date: December 28, 2017
Inventors: Eyal Koplovich, Lucia Lifschitz, Amir Emanueli
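One possible grouping strategy consistent with the abstract, shown as a sketch: greedily place each test into the first group whose existing tests touch none of the same data sources, so every group contains only tests with non-overlapping sources. The greedy choice and the data shapes are assumptions, not the patent's algorithm.

```python
# Illustrative grouping of tests by non-overlapping data sources.
def group_tests(test_sources):
    groups = []  # each group: {"tests": [...], "sources": set()}
    for test, sources in test_sources.items():
        for group in groups:
            if not (group["sources"] & sources):   # no shared data source
                group["tests"].append(test)
                group["sources"] |= sources
                break
        else:
            groups.append({"tests": [test], "sources": set(sources)})
    return [g["tests"] for g in groups]

tests = {
    "t_report": {"sales_db"},
    "t_dashboard": {"clicks_db"},
    "t_export": {"sales_db", "warehouse"},
}
print(group_tests(tests))  # [['t_report', 't_dashboard'], ['t_export']]
```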
-
Publication number: 20170371773
Abstract: Systems, methods, and computer-readable media for optimizing the execution order of a set of test programs that includes at least one system interval dependent test program are disclosed. The optimized execution order may be determined by identifying each non-system interval dependent test program that can be executed during each instance of a system interval without impacting execution of system interval dependent test programs. The optimized execution order minimizes a total execution time of the set of test programs.
Type: Application
Filed: June 22, 2016
Publication date: December 28, 2017
Inventors: Joseph W. Gentile, Brian D. Keuling, Anthony T. Sofia
-
Publication number: 20170371774
Abstract: The technique herein substantially improves productivity of Annotator developers by providing methods and systems to develop and test Annotators without having to run a full pipeline every time changes are made to a particular Annotator. To this end, preferably a running pipeline is instrumented to enable automated recording of static configuration and dynamically-generated event data as the pipeline is executed. Based on these data, a reusable data model is generated that captures code and other dependencies in the pipeline (e.g., configuration parameters, intermediary CASes, program flow, annotations, and the like). The data model is then used to facilitate testing of Annotators without using the full pipeline (or even major sub-pipelines therein).
Type: Application
Filed: August 15, 2017
Publication date: December 28, 2017
Inventors: Christopher James Karle, William Graham O'Keeffe, David Deidou Taieb
-
Publication number: 20170371775
Abstract: Technologies for bridging trace gaps include a computing device that traces execution of a program to generate an execution trace and identifies a trace gap in the execution trace. The computing device generates a first call stack that corresponds to a location immediately before the trace gap and a second call stack that corresponds to a location immediately after the trace gap. Each call stack identifies a list of functions, and each function corresponds to a source function of the program. The computing device evaluates connection pairs between the first call stack and the second call stack to determine whether each connection pair is valid and, for each valid connection pair, a number of matching functions. The computing device selects a connection pair that is valid and has a largest number of matching functions and bridges the trace gap with the selected connection pair. Other embodiments are described and claimed.
Type: Application
Filed: June 28, 2016
Publication date: December 28, 2017
Inventor: Markus T. Metzger
-
Publication number: 20170371776
Abstract: According to an example, a fabric manager server may migrate data stored in a dual-interface non-volatile dual in-line memory module (NVDIMM) of a memory application server. The fabric manager server may receive data routing preferences for a memory fabric and retrieve the data stored in universal memory of the dual-port NVDIMM according to the data routing preferences through a second port of the dual-port NVDIMM. The retrieved data may then be routed from the dual-port NVDIMM for replication to remote storage according to the data routing preferences. Once the retrieved data is replicated to remote storage, the fabric manager may alert the dual-port NVDIMM.
Type: Application
Filed: April 30, 2015
Publication date: December 28, 2017
Inventors: Dwight D. Riley, Thierry Fevrier, Joseph E. Foster
-
Publication number: 20170371777
Abstract: In a computer system having multiple memory proximity domains including a first memory proximity domain with a first processor and a first memory and a second memory proximity domain with a second processor and a second memory, latencies of memory access from each memory proximity domain to its local memory as well as to memory at other memory proximity domains are probed. When there is no contention, the local latency will be lower than remote latency. If the contention at the local memory proximity domain increases and the local latency becomes large enough, memory pages associated with a process running on the first processor are placed in the second memory proximity domain, so that after the placement, the process is accessing the memory pages from the memory of the second memory proximity domain during execution.
Type: Application
Filed: June 23, 2016
Publication date: December 28, 2017
Inventors: Seongbeom KIM, Jagadish KOTRA, Fei GUO
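A very small sketch of the placement decision described above; the latency numbers and the simple "migrate when local exceeds remote" rule are illustrative assumptions, not the patent's exact policy.

```python
# Illustrative: decide where a process's pages should live from probed latencies.
def should_migrate(local_latency_ns, remote_latency_ns):
    # With no contention, local access is faster and pages stay local; only when
    # contention drives local latency above remote latency does migration pay off.
    return local_latency_ns > remote_latency_ns

def place_pages(process_pages, local_latency_ns, remote_latency_ns):
    target = ("remote_domain" if should_migrate(local_latency_ns, remote_latency_ns)
              else "local_domain")
    return {page: target for page in process_pages}

# Uncontended local memory: pages stay. Heavily contended local memory: pages move.
print(place_pages(["p0", "p1"], local_latency_ns=90, remote_latency_ns=140))
print(place_pages(["p0", "p1"], local_latency_ns=210, remote_latency_ns=140))
```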
-
Publication number: 20170371778
Abstract: Methods and apparatus for reliable distributed messaging are described. A computer system includes a system memory coupled to one or more processors. The system memory comprises at least a non-volatile portion. A particular location within the non-volatile portion is designated as a target location to which a sender module participating in a communication protocol is granted write permission. A receiver module participating in the communication protocol, subsequent to a failure event that results in a loss of data stored in a volatile portion of the system memory, reads a data item written by the sender program at the target location prior to the failure event. The receiver module performs an operation based on contents of the data item.
Type: Application
Filed: August 20, 2017
Publication date: December 28, 2017
Applicant: Amazon Technologies, Inc.
Inventors: Samuel James McKelvie, Anurag Windlass Gupta
-
Publication number: 20170371779
Abstract: In one embodiment, an apparatus comprises a storage device comprising a NAND flash memory. The storage device is to receive a write request from a computing host, the write request to specify data to be written to the NAND flash memory; perform a number of program loops to program the data into a plurality of cells of the NAND flash memory, wherein a program loop comprises application of a program voltage to a wordline of the memory to change the threshold voltage of at least one cell of the plurality of cells; and wherein the number of program loops is to be determined prior to receipt of the write request and based on a distribution of threshold voltages of the cells or determined based on tracking a number of program errors for only a portion of the plurality of cells.
Type: Application
Filed: June 28, 2016
Publication date: December 28, 2017
Applicant: Intel Corporation
Inventors: Shantanu R. Rajwade, Andrea D'alessandro, Pranav Kalavade, Violante Moschiano
-
Publication number: 20170371780
Abstract: A solid-state drive (SSD) is configured for dynamic resizing. When the SSD approaches the end of its useful life because the over-provisioning amount is nearing the minimum threshold as a result of an increasing number of bad blocks, the SSD is reformatted with a reduced logical capacity so that the over-provisioning amount may be maintained above the minimum threshold.
Type: Application
Filed: September 8, 2017
Publication date: December 28, 2017
Inventor: Daisuke HASHIMOTO
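The arithmetic behind this resizing decision is simple enough to show; the block-count units and the rule of shrinking just enough to restore the minimum margin are assumptions for the example, not the patent's exact procedure.

```python
# Illustrative: shrink the advertised logical capacity when bad blocks push the
# over-provisioning amount below its minimum threshold.
def resized_logical_capacity(raw_blocks, bad_blocks, logical_blocks, min_op_blocks):
    usable = raw_blocks - bad_blocks
    over_provisioning = usable - logical_blocks
    if over_provisioning >= min_op_blocks:
        return logical_blocks             # still healthy: keep the current format
    # Reformat with a reduced logical capacity that restores the minimum margin.
    return max(usable - min_op_blocks, 0)

# 1000 raw blocks, 80 gone bad, 900 advertised, at least 50 spare blocks required.
print(resized_logical_capacity(1000, 80, 900, 50))  # 870
```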
-
Publication number: 20170371781
Abstract: A memory controller is for controlling operations of a nonvolatile memory including a first memory block group for storing a first type of data and a second memory block group for storing a second type of data.
Type: Application
Filed: June 28, 2016
Publication date: December 28, 2017
Inventor: IN-HWAN CHOI
-
Publication number: 20170371782
Abstract: In some examples, techniques for virtual storage include configuring a single physical storage disk to a virtual storage device that includes a plurality of virtual storage disks from a command specifying storage characteristics of the virtual storage disks including number of virtual storage disks, virtual strip size, and fault tolerance of the virtual storage disks. In response to accessing the virtual storage disks and specifying a virtual storage disk Logical Block Address (LBA), converting the virtual storage disk LBA into a physical storage disk LBA based on a sum of factors that includes a first factor comprising a modulo of the virtual storage disk LBA and virtual strip size, a second factor comprising number of virtual storage disks multiplied by a result of the virtual storage disk LBA minus the first factor, and a third factor comprising a virtual storage disk number multiplied by the virtual strip size.
Type: Application
Filed: January 21, 2015
Publication date: December 28, 2017
Inventors: Nathaniel S DeNeui, Joseph David Black
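The abstract states the address arithmetic explicitly, so it can be transcribed directly; only the variable names below are mine. The physical LBA is the sum of (1) the virtual LBA modulo the strip size, (2) the number of virtual disks times the virtual LBA minus that remainder, and (3) the virtual disk number times the strip size.

```python
# Direct transcription of the stated formula: physical = f1 + f2 + f3.
def virtual_to_physical_lba(vlba, vdisk_number, num_vdisks, strip_size):
    f1 = vlba % strip_size            # offset inside the current virtual strip
    f2 = num_vdisks * (vlba - f1)     # start of the interleaved strip row
    f3 = vdisk_number * strip_size    # which disk's strip within that row
    return f1 + f2 + f3

# Example: 4 virtual disks carved from one physical disk, 8-block strips.
# Block 10 of virtual disk 2 maps to 2 + 4*8 + 2*8 = 50.
print(virtual_to_physical_lba(vlba=10, vdisk_number=2, num_vdisks=4, strip_size=8))  # 50
```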
-
Publication number: 20170371783
Abstract: Self-aware, peer-to-peer cache transfers between local, shared cache memories in a multi-processor system is disclosed. A shared cache memory system is provided comprising local shared cache memories accessible by an associated central processing unit (CPU) and other CPUs in a peer-to-peer manner. When a CPU desires to request a cache transfer (e.g., in response to a cache eviction), the CPU acting as a master CPU issues a cache transfer request. In response, target CPUs issue snoop responses indicating their willingness to accept the cache transfer. The target CPUs also use the snoop responses to be self-aware of the willingness of other target CPUs to accept the cache transfer. The target CPUs willing to accept the cache transfer use a predefined target CPU selection scheme to determine its acceptance of the cache transfer. This can avoid a CPU making multiple requests to find a target CPU for a cache transfer.
Type: Application
Filed: June 24, 2016
Publication date: December 28, 2017
Inventors: Hien Minh Le, Thuong Quang Truong, Eric Francis Robinson, Brad Herold, Robert Bell, JR.
-
Publication number: 20170371784
Abstract: A processing system includes one or more first caches and one or more first lock tables associated with the one or more first caches. The processing system also includes one or more processing units that each include a plurality of compute units for concurrently executing work-groups of work items, a plurality of second caches associated with the plurality of compute units and configured in a hierarchy with the one or more first caches, and a plurality of second lock tables associated with the plurality of second caches. The first and second lock tables indicate locking states of addresses of cache lines in the corresponding first and second caches on a per-line basis.
Type: Application
Filed: June 24, 2016
Publication date: December 28, 2017
Inventors: Johnathan R. Alsop, Bradford Beckmann
-
Publication number: 20170371785
Abstract: Examples include techniques for write commands to one or more storage devices coupled with a host computing platform. In some examples, the write commands may be responsive to write requests from applications hosted or supported by the host computing platform. A tracking table is utilized by elements of the host computing platform and the one or more storage devices such that the write commands are completed by the one or more storage devices without a need for an interrupt response to elements of the host computing platform.
Type: Application
Filed: June 28, 2016
Publication date: December 28, 2017
Applicant: Intel Corporation
Inventors: James A. Boyd, John W. Carroll, Sanjeev N. Trika, Mark A. Schmisseur
-
Publication number: 20170371786
Abstract: A processing system includes a plurality of processor cores and a plurality of private caches. Each private cache is associated with a corresponding processor core of the plurality of processor cores and includes a corresponding first set of cachelines. The processing system further includes a shared cache shared by the plurality of processor cores. The shared cache includes a second set of cachelines, and a shadow tag memory including a plurality of entries, each entry storing state information for a corresponding cacheline of the first set of cachelines of one of the private caches.
Type: Application
Filed: June 23, 2016
Publication date: December 28, 2017
Inventors: Sriram Srinivasan, William L. Walker
-
Publication number: 20170371787
Abstract: A system and method for network traffic management between multiple nodes are described. A computing system includes multiple nodes connected to one another. When a home node determines a number of nodes requesting read access for a given data block assigned to the home node exceeds a threshold and a copy of the given data block is already stored at a first node of the multiple nodes in the system, the home node sends a command to the first node. The command directs the first node to forward a copy of the given data block to the home node. The home node then maintains a copy of the given data block and forwards copies of the given data block to other requesting nodes until the home node detects a write request or a lock release request for the given data block.
Type: Application
Filed: June 24, 2016
Publication date: December 28, 2017
Inventors: Vydhyanathan Kalyanasundharam, Eric Christopher Morton, Amit P. Apte, Elizabeth M. Cooper
-
Publication number: 20170371788
Abstract: A method for processing commands in a directory-based computer memory management system includes receiving a command to perform an operation on data stored in a set of one or more computer memory locations associated with an entry in a directory of a computer memory, wherein the entry is associated with an indicator for indicating whether the set of one or more computer memory locations is busy, a head tag, and a tail tag, and the command is associated with a command tag and a predecessor tag. The method further includes checking the indicator to determine whether the set of one or more computer memory locations is busy.
Type: Application
Filed: June 27, 2016
Publication date: December 28, 2017
Inventors: Michael Bar-Joshua, Yiftach Benjamini, Yaakov Gendel, Eyal Gonen, Alexander Mesh
-
Publication number: 20170371789
Abstract: A technique for operating a memory management unit (MMU) of a processor includes the MMU detecting that one or more address translation invalidation requests are indicated for an accelerator unit (AU). In response to detecting that the invalidation requests are indicated, the MMU issues a raise barrier request for the AU. In response to detecting a raise barrier response from the AU to the raise barrier request the MMU issues the invalidation requests to the AU. In response to detecting an address translation invalidation response from the AU to each of the invalidation requests, the MMU issues a lower barrier request to the AU. In response to detecting a lower barrier response from the AU to the lower barrier request, the MMU resumes handling address translation check-in and check-out requests received from the AU.
Type: Application
Filed: June 23, 2016
Publication date: December 28, 2017
Inventors: BARTHOLOMEW BLANER, JAY G. HEASLIP, ROBERT D. HERZL, JODY B. JOYNER
-
Publication number: 20170371790
Abstract: Next line prefetchers employing initial high prefetch prediction confidence states for throttling next line prefetches in processor-based systems are disclosed. A next line prefetcher prefetches a next memory line into cache memory in response to a read operation. To mitigate prefetch mispredictions, the next line prefetcher is throttled to cease prefetching after the prefetch prediction confidence state becomes a no-next-line-prefetch state indicating a number of incorrect predictions. Instead of the initial prefetch prediction confidence state being set to the no-next-line-prefetch state, which must be built up in response to correct predictions before a next line prefetch is performed, the initial prefetch prediction confidence state is set to a next-line-prefetch state to allow next line prefetching. Thus, the next line prefetcher starts prefetching next lines before requiring correct predictions to be "built up" in the prefetch prediction confidence state.
Type: Application
Filed: June 24, 2016
Publication date: December 28, 2017
Inventors: Brandon Dwiel, Rami Mohammad Al Sheikh
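A behavioral sketch of the confidence mechanism described above: a saturating counter that starts in a high-confidence prefetch state, decrements on mispredictions, and throttles once it reaches the no-prefetch state. The counter width and update rule are illustrative assumptions, not the hardware design.

```python
# Illustrative next-line prefetch throttling with an initially high confidence state.
class NextLinePrefetcher:
    NO_PREFETCH, MAX_CONFIDENCE = 0, 3

    def __init__(self):
        # Initial state is a next-line-prefetch state (not NO_PREFETCH), so
        # prefetching is allowed before any correct predictions accumulate.
        self.confidence = self.MAX_CONFIDENCE

    def should_prefetch_next_line(self):
        return self.confidence > self.NO_PREFETCH

    def record_outcome(self, prediction_was_correct):
        if prediction_was_correct:
            self.confidence = min(self.confidence + 1, self.MAX_CONFIDENCE)
        else:
            self.confidence = max(self.confidence - 1, self.NO_PREFETCH)

pf = NextLinePrefetcher()
print(pf.should_prefetch_next_line())   # True immediately, no warm-up needed
for _ in range(3):
    pf.record_outcome(prediction_was_correct=False)
print(pf.should_prefetch_next_line())   # False: throttled after repeated misses
```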
-
Publication number: 20170371791
Abstract: An apparatus having a memory controller is described. The memory controller includes prefetch circuitry to prefetch, from a memory, data having a same row address in response to the memory controller's servicing of its request stream being stalled because of a timing constraint that prevents a change in row address. The memory controller also includes a cache to cache the prefetched data. The memory controller also includes circuitry to compare addresses of read requests in the memory controller's request stream against respective addresses of the prefetched data in the cache and to service those of the requests in the memory controller's request stream having a matching address with corresponding ones of the prefetched data in the cache.
Type: Application
Filed: June 28, 2016
Publication date: December 28, 2017
Inventors: Ashish RANJAN, Vivek KOZHIKKOTTU
-
Publication number: 20170371792
Abstract: In an aspect, high priority lines are stored starting at an address aligned to a cache line size, for instance 64 bytes, and low priority lines are stored in memory space left by the compression of high priority lines. The space left by the high priority lines, and hence the low priority lines themselves, are managed through pointers also stored in memory. In this manner, low priority line contents can be moved to different memory locations as needed. The efficiency of higher priority compressed memory accesses is improved by removing the need for indirection otherwise required to find and access compressed memory lines; this is especially advantageous for immutable compressed contents.
Type: Application
Filed: June 24, 2016
Publication date: December 28, 2017
Inventors: Andres Alejandro OPORTUS VALENZUELA, Nieyan GENG, Christopher Edward KOOB, Gurvinder Singh CHHABRA, Richard SENIOR, Anand JANAKIRAMAN
-
Publication number: 20170371793
Abstract: Cache line data and metadata are compressed and stored in first and, optionally, second memory regions, the metadata including an address tag. When the compressed data fit entirely within a primary block in the first memory region, both data and metadata are retrieved in a single memory access. Otherwise, overflow data is stored in an overflow block in the second memory region. The first and second memory regions may be located in the same row of a DRAM, for example, or in different regions of a DRAM and may be configured to enable standard DRAM components to be used. Compression and decompression logic circuits may be included in a memory controller.
Type: Application
Filed: June 28, 2016
Publication date: December 28, 2017
Applicant: ARM Limited
Inventors: Ali SAIDI, Kshitij SUDAN, Andrew Joseph RUSHING, Andreas HANSSON, Michael FILIPPO
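A toy illustration of the layout decision in this abstract: compressed data plus metadata (including the address tag) go in a fixed-size primary block when they fit, otherwise the remainder spills to an overflow block in a second region. The block sizes and the split rule are assumptions for the example only.

```python
# Illustrative primary-block vs overflow-block placement for a compressed line.
def store_compressed_line(tag, compressed, primary_size=64, tag_size=8):
    room = primary_size - tag_size
    if len(compressed) <= room:
        # A single memory access can later retrieve both data and metadata.
        return {"primary": (tag, compressed), "overflow": None}
    # Otherwise keep the tag and the first part in the primary block, and put
    # the rest in an overflow block in the second memory region.
    return {"primary": (tag, compressed[:room]),
            "overflow": compressed[room:]}

print(store_compressed_line(0x1A, b"x" * 40)["overflow"])        # None: it fit
print(len(store_compressed_line(0x1A, b"x" * 90)["overflow"]))   # 34 bytes spill over
```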
-
SYSTEM AND METHOD FOR DYNAMIC OPTIMIZATION FOR BURST AND SUSTAINED PERFORMANCE IN SOLID STATE DRIVES
Publication number: 20170371794
Abstract: A method and information handling system configured to execute instructions of an SSD dynamic optimization buffer switching system and to detect SSD storage capacity utilization via an SSD controller. The method and information handling system are further configured to reallocate buffer capacity from write acceleration buffer capacity to garbage collection buffer capacity to increase buffer availability for garbage collection when SSD storage capacity utilization exceeds a threshold level.
Type: Application
Filed: June 28, 2016
Publication date: December 28, 2017
Applicant: Dell Products, LP
Inventor: Lip Vui Kan
-
Publication number: 20170371795
Abstract: An apparatus is described that includes a memory controller to interface to a multi-level system memory. The memory controller includes least recently used (LRU) circuitry to keep track of least recently used cache lines kept in a higher level of the multi-level system memory. The memory controller also includes idle time predictor circuitry to predict idle times of a lower level of the multi-level system memory. The memory controller is to write one or more lesser used cache lines from the higher level of the multi-level system memory to the lower level of the multi-level system memory in response to the idle time predictor circuitry indicating that an observed idle time of the lower level of the multi-level system memory is expected to be long enough to accommodate the write of the one or more lesser used cache lines from the higher level of the multi-level system memory to the lower level of the multi-level system memory.
Type: Application
Filed: June 27, 2016
Publication date: December 28, 2017
Inventors: Zhe WANG, Christopher B. WILKERSON, Zeshan A. CHISHTI
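A sketch of the write-back policy described above, with invented interfaces: least-recently-used lines move from the higher level to the lower level only when the predicted idle window of the lower level is long enough to absorb the writes. The per-line write cost and budget calculation are assumptions.

```python
# Illustrative: pick which LRU lines to write back during a predicted idle window.
def flush_lru_lines(lru_lines, predicted_idle_us, write_cost_us_per_line):
    """Return the lines to write back to the lower memory level this window."""
    budget = int(predicted_idle_us // write_cost_us_per_line)
    # lru_lines is ordered least-recently-used first; stay within the idle budget.
    return lru_lines[:budget]

lru = ["lineA", "lineB", "lineC", "lineD"]
print(flush_lru_lines(lru, predicted_idle_us=25, write_cost_us_per_line=10))
# ['lineA', 'lineB']: only two writes fit in the predicted idle time
```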
-
Publication number: 20170371796
Abstract: A facility and cache machine instruction of a computer architecture for specifying a target cache level and a target cache attribute of interest for obtaining a cache attribute of one or more target caches. The requested cache attribute of the target cache(s) is saved in a register.
Type: Application
Filed: March 7, 2016
Publication date: December 28, 2017
Inventors: Dan F. Greiner, Timothy J. Slegel
-
Publication number: 20170371797
Abstract: Some aspects of the disclosure relate to a pre-fetch mechanism for a cache line compression system that increases RAM capacity and optimizes overflow area reads. For example, a pre-fetch mechanism may allow the memory controller to pipeline the reads from an area with fixed size slots (main compressed area) and the reads from an overflow area. The overflow area is arranged so that a cache line most likely containing the overflow data for a particular line may be calculated by a decompression engine. In this manner, the cache line decompression engine may fetch, in advance, the overflow area before finding the actual location of the overflow data.
Type: Application
Filed: June 24, 2016
Publication date: December 28, 2017
Inventors: Andres Alejandro OPORTUS VALENZUELA, Nieyan GENG, Gurvinder Singh CHHABRA, Richard SENIOR, Anand JANAKIRAMAN