Patents Issued on August 6, 2015
  • Publication number: 20150220400
    Abstract: A method begins by a processing module of a dispersed storage network (DSN) identifying a data segment to be retrieved from storage units of the DSN, where the data segment is encoded into a set of encoded data slices that is divided into block sets of encoded data slices, and where each storage unit stores a block set of encoded data slices. The method continues with the processing module generating a set of read requests in accordance with retrieval information which assures that at least a decode threshold number of encoded data slices of the set are retrievable, where each request includes identity of a block set and a number of encoded data slices that are to be read from a storage unit. The method continues with the processing module sending the set of read requests to the storage units and decoding received encoded data slices to recover the data segment.
    Type: Application
    Filed: November 20, 2014
    Publication date: August 6, 2015
    Applicant: CLEVERSAFE, INC.
    Inventors: Jason K. Resch, Wesley Leggette
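    A minimal Python sketch of the kind of read planning the abstract above describes, assuming a simplified layout in which each storage unit's block set is identified by the unit itself; the names (plan_reads, decode_threshold) are illustrative, not taken from the application:
      def plan_reads(block_sets, decode_threshold):
          # block_sets: storage-unit id -> number of encoded data slices held in that
          # unit's block set. Build per-unit read requests that together cover at
          # least decode_threshold slices.
          requests, remaining = [], decode_threshold
          for unit, slices_held in block_sets.items():
              if remaining <= 0:
                  break
              count = min(slices_held, remaining)
              requests.append({"unit": unit, "block_set": unit, "slices_to_read": count})
              remaining -= count
          if remaining > 0:
              raise ValueError("decode threshold cannot be met from the available block sets")
          return requests

      # e.g. five storage units, each holding a four-slice block set, decode threshold 10
      print(plan_reads({u: 4 for u in range(5)}, decode_threshold=10))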
  • Publication number: 20150220401
    Abstract: A system and method for determining when to reset a controller in response to a bus off state. The method includes determining that the controller has entered a first bus off state and immediately resetting the controller. The method further includes setting a reset timer in response to the controller being reset, determining whether the controller has entered a subsequent bus off state, and determining whether a reset time is greater than a first predetermined time interval. The method immediately resets the controller in response to the subsequent bus off state if the reset time is greater than the first predetermined time interval, and resets the controller in response to the subsequent bus off state after a second predetermined time interval has elapsed if the reset time is less than the first predetermined time interval.
    Type: Application
    Filed: September 5, 2012
    Publication date: August 6, 2015
    Inventors: Shengbing Jiang, Mutasim A. Salman, Michael A. Sowa, Katrina M. Schultz
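    A minimal Python sketch of the reset-timing decision described above; the interval values, names, and the use of a blocking delay are assumptions for illustration:
      import time

      class BusOffResetPolicy:
          def __init__(self, first_interval_s, second_interval_s, reset_fn, clock=time.monotonic):
              self.first_interval_s = first_interval_s    # threshold on time since the last reset
              self.second_interval_s = second_interval_s  # delay applied to rapid repeats
              self.reset_fn = reset_fn
              self.clock = clock
              self.last_reset = None

          def on_bus_off(self):
              now = self.clock()
              if self.last_reset is None or (now - self.last_reset) > self.first_interval_s:
                  self.reset_fn()                          # first or well-spaced bus off: reset immediately
              else:
                  time.sleep(self.second_interval_s)       # rapid repeat: wait before resetting
                  self.reset_fn()
              self.last_reset = self.clock()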
  • Publication number: 20150220402
    Abstract: Disclosed are systems, computer-readable mediums, and methods for incremental block level backup. An initial backup of a volume is created at a backup server, where creating the initial backup includes retrieving an original metadata file from a metadata server, and retrieving a copy of all data of the volume based on the original metadata file. A first incremental backup of the volume is then created at the backup server, where creating the first incremental backup includes retrieving a first metadata file, where the first metadata file was created separately from the original metadata file. A block identifier of the first metadata file is compared to a corresponding block identifier of the original metadata file to determine a difference between the first and original block identifiers, and a copy of a changed data block of the volume is retrieved based on the comparison of the first and original block identifiers.
    Type: Application
    Filed: April 13, 2015
    Publication date: August 6, 2015
    Inventors: Jared CANTWELL, Matt HOLIDAY
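    A minimal Python sketch of the comparison step described above, assuming each metadata file is a mapping from block index to a block identifier such as a content hash (the names are illustrative):
      def incremental_backup(original_metadata, first_metadata, read_block, backup_store):
          # Copy only blocks whose identifier differs between the original and first
          # metadata files; unchanged blocks are already in the initial backup.
          for index, block_id in first_metadata.items():
              if original_metadata.get(index) != block_id:
                  backup_store[index] = read_block(index)
          return backup_store

      # e.g. block 1 changed between the two metadata files
      changed = incremental_backup({0: "a", 1: "b"}, {0: "a", 1: "c"},
                                   read_block=lambda i: f"data-{i}", backup_store={})
      print(changed)   # {1: 'data-1'}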
  • Publication number: 20150220403
    Abstract: A method of backing up a computing device comprises storing in the computing device, prior to any first backup of the computing device, a selected pre-populated Reference File that comprises one or more references to at least some of the data blocks stored in the computing device. A first backup may then be initiated. The first backup may cause references to data blocks in the computing device that are unrepresented in the pre-populated Reference File to be added to the Reference File. The data blocks corresponding to the added references may then be sent to a backup server over a computer network.
    Type: Application
    Filed: April 7, 2014
    Publication date: August 6, 2015
    Applicant: Western Digital Technologies, Inc.
    Inventors: TAMIR RAM, WILLIAM H. EVANS
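    A minimal Python sketch of a first backup against a pre-populated Reference File, assuming block references are content hashes (an assumption; the application does not say how references are formed):
      import hashlib

      def first_backup(blocks, reference_file, send_block):
          # blocks: iterable of (offset, bytes); reference_file: set of references
          # stored on the device before any first backup.
          for offset, data in blocks:
              ref = hashlib.sha256(data).hexdigest()
              if ref not in reference_file:
                  send_block(offset, data)   # transfer unrepresented block to the backup server
                  reference_file.add(ref)    # add its reference to the Reference File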
  • Publication number: 20150220404
    Abstract: A method for system management, comprising initiating a workflow operating on a processor. Initiating a sub-workflow operating on the processor from the workflow. Electronically reading state data for one or more resources designated by the sub-workflow prior to performing a first logical process of the sub-workflow. Storing the state data in a non-transient data memory. Performing logical processes associated with the sub-workflow using the processor. Restoring the state data for the one or more resources if it is determined that an error has occurred.
    Type: Application
    Filed: January 31, 2014
    Publication date: August 6, 2015
    Applicant: DELL PRODUCTS L.P.
    Inventors: Kevin S. Borden, Andrew T. Miller, Michael D. Condon, Aaron Merkin, Gavin D. Scott
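    A minimal Python sketch of the sub-workflow pattern described above; the dict-based resource model and names are assumptions:
      def run_sub_workflow(resources, steps):
          # Read and store the designated resources' state before the first logical process.
          snapshot = {name: dict(state) for name, state in resources.items()}
          try:
              for step in steps:
                  step(resources)            # logical processes of the sub-workflow
          except Exception:
              resources.clear()
              resources.update(snapshot)     # an error occurred: restore the saved state
              raise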
  • Publication number: 20150220405
    Abstract: An in-memory application has a state that is associated with data (CA0, CB0, CC0) stored in a memory and accessed by the application. A first restore point of the application is determined to represent a first time point (T0) in the execution time associated with a first state at which the application accesses the data being stored in memory locations (CA0) using first addresses (S1) and first pointers (A0) which are stored in a first data structure. A first restore point identifier is assigned to the first restore point, whose value is indicative of (T0). The first restore point identifier is stored in association with (A0) and (S1) in a first entry of a second data structure. In the first data structure, the first addresses (S1) are associated with second pointers (A1) to contents of memory locations (CA1) in the memory, and writing operations are redirected.
    Type: Application
    Filed: March 25, 2015
    Publication date: August 6, 2015
    Applicant: International Business Machines Corporation
    Inventors: ALEXANDER NEEF, Martin Oberhofer, Andreas Trinks, Andreas Uhl
  • Publication number: 20150220406
    Abstract: An example method of saving and restoring a state of one or more registers for a guest includes detecting exit of a virtual machine mode of a guest running on a virtual machine. A set of registers is accessible by the guest and includes a first subset of registers and a second subset of registers. The method also includes identifying the first subset of registers. The first subset of registers includes one or more registers to be overwritten by the guest upon re-entry of the virtual machine mode. The second subset of registers is mutually exclusive from the first subset of registers. The method further includes after detecting exit of the virtual machine mode of the guest, detecting re-entry of the virtual machine mode of the guest. The method also includes restoring a saved state of the second subset of registers for the guest.
    Type: Application
    Filed: February 3, 2014
    Publication date: August 6, 2015
    Applicant: Red Hat Israel, Ltd.
    Inventors: Michael Tsirkin, Radim Krcmár
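    A minimal Python sketch of the selective save/restore described above, modelling the register file as a dict; the function names are illustrative:
      def on_vm_exit(registers, overwritten_on_reentry):
          # Save only the second subset: registers the guest will NOT overwrite on re-entry.
          return {name: value for name, value in registers.items()
                  if name not in overwritten_on_reentry}

      def on_vm_reentry(registers, saved_subset):
          # Restore the saved second subset; the first subset is rewritten by the guest itself.
          registers.update(saved_subset)
          return registers

      saved = on_vm_exit({"rax": 1, "rbx": 2, "rcx": 3}, overwritten_on_reentry={"rax"})
      print(on_vm_reentry({"rax": 9, "rbx": 0, "rcx": 0}, saved))   # {'rax': 9, 'rbx': 2, 'rcx': 3}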
  • Publication number: 20150220407
    Abstract: A cross-host multi-hypervisor system, including a plurality of host sites, each site including at least one hypervisor, each of which includes at least one virtual server, at least one virtual disk read from and written to by the at least one virtual server, a tapping driver in communication with the at least one virtual server, which intercepts write requests made by any one of the at least one virtual server to any one of the at least one virtual disk, and a virtual data services appliance, in communication with the tapping driver, which receives the intercepted write requests from the tapping driver, and which provides data services based thereon, and a data services manager for coordinating the virtual data services appliances at the site, and a network for communicatively coupling the plurality of sites, wherein the data services managers coordinate data transfer across the plurality of sites via the network.
    Type: Application
    Filed: April 15, 2015
    Publication date: August 6, 2015
    Inventors: Ziv Kedem, Chen Yehezkel Burshan, Yair Kuszpet, Gil Levonai
  • Publication number: 20150220408
    Abstract: Systems and methods for automated failure recovery of subsystems of a management system are described. The subsystems are built and modeled as services, and their management, specifically their failure recovery, is done in a manner similar to that of services and resources managed by the management system. The management system consists of a microkernel, service managers, and management services. Each service, whether a managed service or a management service, is managed by a service manager. The service manager itself is a service and so is in turn managed by the microkernel. Both managed services and management services are monitored via in-band and out-of-band mechanisms, and the performance metrics and alerts are transported through an event system to the appropriate service manager. If a service fails, the service manager takes policy-based remedial steps including, for example, restarting the failed service.
    Type: Application
    Filed: April 13, 2015
    Publication date: August 6, 2015
    Inventor: Devendra Rajkumar Jaisinghani
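    A minimal Python sketch of the policy-based remediation step described above; the policy keys and actions are assumptions for illustration:
      def handle_alert(service, alert, policy):
          # A service manager receives a failure alert for one of its services
          # and applies the configured remedial step (restart by default).
          if alert.get("type") != "failure":
              return "no_action"
          action = policy.get(service, "restart")
          if action == "restart":
              return f"restart {service}"
          if action == "failover":
              return f"failover {service} to standby"
          return f"escalate {service} failure"

      print(handle_alert("inventory-service", {"type": "failure"}, {"inventory-service": "restart"}))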
  • Publication number: 20150220409
    Abstract: Per-Function Downstream Port Containment (pF-DPC) is an extension to Downstream Port Containment (DPC) in the Peripheral Component Interconnect Express (PCIe) standard. pF-DPC confines non-fatal errors to specific functions of an end-point device without disabling the link between a PCIe port and the end-point device. PCIe ports configured for pF-DPC may filter (e.g., drop) packets carrying routing identifiers (RIDs) and/or addresses assigned to a function affected by a non-fatal error, while continuing to forward packets carrying RIDs/addresses associated with remaining operable functions over the corresponding link.
    Type: Application
    Filed: February 5, 2014
    Publication date: August 6, 2015
    Applicant: FutureWei Technologies, Inc.
    Inventors: Wesley Shao, Muhui Lin
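    A minimal Python sketch of the per-function filtering described above; the packet fields and range representation are assumptions:
      def forward_or_drop(packet, contained_rids, contained_address_ranges):
          # Drop packets that target a function confined by pF-DPC (matched by routing
          # ID or by an assigned address range); forward everything else, leaving the
          # link itself enabled.
          if packet["rid"] in contained_rids:
              return "drop"
          addr = packet.get("addr")
          if addr is not None and any(lo <= addr < hi for lo, hi in contained_address_ranges):
              return "drop"
          return "forward"

      print(forward_or_drop({"rid": 0x0101, "addr": 0xF000}, {0x0102}, [(0x0, 0x1000)]))   # forward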
  • Publication number: 20150220410
    Abstract: A mechanism is described for achieving high memory reliability, availability, and serviceability (RAS) according to one embodiment of the invention. A method of embodiments of the invention includes detecting a permanent failure of a first memory device of a plurality of memory devices of a first channel of a memory system at a computing system, and eliminating the first failure by merging a first error-correction code (ECC) locator device of the first channel with a second ECC locator device of a second channel, wherein merging is performed at the second channel.
    Type: Application
    Filed: December 8, 2014
    Publication date: August 6, 2015
    Inventors: DABLEENA DAS, Kai Cheng, Jonathan C. Jasper
  • Publication number: 20150220411
    Abstract: A system and method for performing operating system (OS) agnostic hardware validation in a computing system are disclosed. In one example, a hardware validation test is invoked by a management processor. Further, input parameters are obtained based on the hardware validation test by the management processor. Furthermore, hardware devices are determined based on the hardware validation test and the input parameters by the management processor. In addition, a request is sent to perform the hardware validation test on the hardware devices to a system processor by the management processor. Moreover, the hardware validation test is run on the hardware devices by invoking associated hardware specific run-time drivers in a system firmware (SFW) by the system processor. Also, results of the hardware validation test are sent to the management processor by the system processor.
    Type: Application
    Filed: July 17, 2012
    Publication date: August 6, 2015
    Inventor: Suhas Shivanna
  • Publication number: 20150220412
    Abstract: In order to provide a communication apparatus that can reduce the delay in transmission of communication messages even when the bus load is heavy, a data transmission module is connected to a communication bus, over which communication messages are transferred, in such a manner that the data transmission module can transmit/receive the communication messages to/from the communication bus. A bus monitoring unit included in the data transmission module detects, for a given time period, the timings at which communication messages are flowing over the communication bus, and acquires the detection results as usage status data of said time period.
    Type: Application
    Filed: September 19, 2012
    Publication date: August 6, 2015
    Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventor: Mitsuhiro Mabuchi
  • Publication number: 20150220413
    Abstract: A storage system to communicate with a plurality of storage devices. The storage system includes a processor to execute system software that includes machine readable instructions configured to add system-level information regarding the storage system to log files stored in a reserved area of each storage device, extract the log file from each of the storage devices automatically at a predetermined interval, and transmit the log files from the storage system for analysis.
    Type: Application
    Filed: January 31, 2014
    Publication date: August 6, 2015
    Applicant: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
    Inventors: Faris Hindi, Leo Volpe, Chochun Norman Chou
  • Publication number: 20150220414
    Abstract: Method for monitoring an apparatus connected to a communication channel, wherein the communication channel is in a vehicle. When an interval of time between two messages that are output by the apparatus is shorter than or the same as a determined period, the method determines that the apparatus is in a correct state. The apparatus can output a monitoring sign-out message to the communication channel. When a message recently output by the apparatus includes the monitoring sign-out message, it is determined that the apparatus is in a correct state.
    Type: Application
    Filed: August 28, 2013
    Publication date: August 6, 2015
    Inventor: Maurice Mücke
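    A minimal Python sketch of the two checks described above; the timestamp representation and names are assumptions:
      def apparatus_state(message_times, messages, period_s, sign_out="SIGN_OUT"):
          # Correct if the most recent message is the monitoring sign-out message, or
          # if every interval between consecutive messages is within the period.
          if messages and messages[-1] == sign_out:
              return "correct"
          gaps = [later - earlier for earlier, later in zip(message_times, message_times[1:])]
          if gaps and all(gap <= period_s for gap in gaps):
              return "correct"
          return "suspect"

      print(apparatus_state([0.0, 0.4, 0.9], ["STATUS", "STATUS", "STATUS"], period_s=0.5))   # correct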
  • Publication number: 20150220415
    Abstract: Provided is a semiconductor device including: a first memory that stores multiple instructions; a second memory that stores multiple data items; first and second buses; a microprocessor that fetches, through the first bus, an instruction at a specified address among the multiple instructions stored in the first memory, executes the instruction, and accesses the second memory through the second bus based on a result of the execution; and a trace information output unit that acquires, when a branch instruction is generated in the microprocessor, address information of the first memory specified before branching, and outputs the information as trace information. The trace information output from the trace information output unit is written into the second memory through the second bus in a period in which the microprocessor does not access the second memory during execution of the branch instruction.
    Type: Application
    Filed: April 13, 2015
    Publication date: August 6, 2015
    Applicant: Renesas Electronics Corporation
    Inventor: Kentaro TANAKA
  • Publication number: 20150220416
    Abstract: A remote monitoring system for handheld electronic devices includes a multi-port hub, and a port visualizer connected to one of the ports of the multi-port hub. The port visualizer is configured to provide a host with mapping information correlating USB hub ports with physical storage bays within the storage system. A USB controller monitors the charge status of the ports and reports the charge status information both locally and to a HED status application. The port visualizer obtains device status information from the host and reports device status information to the HED status application. A client application obtains charge status information and device status information from the HED status application to enable remote monitoring of devices connected to the storage system.
    Type: Application
    Filed: February 3, 2015
    Publication date: August 6, 2015
    Inventors: David Johnson, Hubert Benjamin Ki Durn Ng
  • Publication number: 20150220417
    Abstract: The presently disclosed subject matter includes a device, system and method for monitoring activity at a computerized device, the device running one or more processes, at least one of the processes executing one or more system events being part of an activity. An activity of interest can be identified if system events related to the activity of interest are identified. An activity is monitored at the device using API queries to obtain descriptive information of the at least one respective system event executed as part of the activity. Using non-intrusive monitoring methods based on API queries reduces the potential for interference between the monitoring and applications running on the monitored device.
    Type: Application
    Filed: February 3, 2014
    Publication date: August 6, 2015
    Applicant: ATERNITY INFORMATION SYSTEMS LTD.
    Inventors: Andrey DIMENT, Amir LESHMAN, Konstantin IVANOV, Yigal KARMAZIN
  • Publication number: 20150220418
    Abstract: A shredder network includes a server system, a network management system in communication with the server system for processing information, and a plurality of shredders. Each of the plurality of shredders includes a cutting mechanism operable to shred material fed thereto, a controller which receives information from a plurality of sensors, and a communication module establishing communication protocol between the shredder and one of the server system and the network management system.
    Type: Application
    Filed: February 4, 2015
    Publication date: August 6, 2015
    Inventors: Paul A. Aries, Michael Stranders
  • Publication number: 20150220419
    Abstract: This disclosure relates to the analysis of a program based on source code where the source code comprises a call to a function associated with a function implementation. A processor determines, based on a summary that over-approximates the function, an assignment of an input variable and an output variable of the function call to reach a predefined state. The processor then determines, based on the implementation of the function whether the assignment of the input variable results in the assignment of the output variable. If it does not, the processor determines a narrowed summary for the function such that the narrowed summary over-approximates the function and excludes the assignment of the input variable and the output variable. Finally, the processor stores the narrowed summary on a datastore. Inlining of function code and unfolding of loops is avoided and parallel processing of multiple functions is possible.
    Type: Application
    Filed: November 13, 2014
    Publication date: August 6, 2015
    Inventors: FRANCK CASSEZ, Christian Müller
  • Publication number: 20150220420
    Abstract: Methods for performance evaluation and tuning are provided. In an embodiment, the method includes defining a performance goal for a variable in a scenario, and executing the application using the scenario, after defining the performance goal. The method also includes recording a value of the variable, e.g., during execution of the application, and determining that the value of the variable does not meet the performance goal for the variable. The method includes profiling an execution of the application in the scenario, and determining a non-critical path of the application and a critical path, based on the profiling. The method further includes identifying a bottleneck in the critical path based on the profiling, and tuning the application to address the bottleneck and generate a tuned application, with the non-critical path not being tuned. The method also includes executing the tuned application, and determining whether the tuned application meets the performance goal.
    Type: Application
    Filed: May 22, 2014
    Publication date: August 6, 2015
    Applicant: Schlumberger Technology Corporation
    Inventors: Carlos Santieri de Figueiredo Boneti, Eliana Mendes Pinto
  • Publication number: 20150220421
    Abstract: A system and method that provides runtime diagnostics information of server applications executing on application servers of a server system. At class load time, the system injects executable software code that creates and displays the diagnostics information without necessarily having to stop and restart the executing server application. In response to user applications on user devices sending request messages for content from the server application, the system injects executable code into the application server that collects the diagnostics information, produces display components, and includes the diagnostics information within the display components. The server application then includes the display components and the requested content in response messages sent to the user devices. Preferably, the diagnostics information is presented in the same display context on the user device as the requested content, such as pages within a web browser.
    Type: Application
    Filed: February 3, 2015
    Publication date: August 6, 2015
    Inventors: Toomas Römer, Jevgeni Kabanov, Anton Arhipov
  • Publication number: 20150220422
    Abstract: A method, system and program product for recording a program execution comprising recording processor context for each thread of the program, results of system calls by the program, and memory pages accessed by the program during an execution interval in a checkpoint file. Processor context includes register contents and descriptor entries in a segment descriptor table of the operating system. System calls are recorded for each program thread, tracked by an extension to the operating system kernel and include returned call parameter data. Accessed memory pages are recorded for each program process and include data, libraries and code pages. The program address space, processor context, and program threads are reconstructed from checkpoint data for replaying the program execution in a different operating system environment.
    Type: Application
    Filed: April 13, 2015
    Publication date: August 6, 2015
    Inventor: Dinesh K. Subhraveti
  • Publication number: 20150220423
    Abstract: Data is identified that represents a path of a transaction that includes a plurality of transaction fragments associated with a plurality of software components. The plurality of software components includes a first software component to communicate in the transaction with a second software component over an interface. A transaction boundary is determined between the first and second software components based at least in part on the data. A virtual model is generated to simulate at least a particular one of the plurality of software components based on the identified transaction boundary.
    Type: Application
    Filed: March 15, 2013
    Publication date: August 6, 2015
    Applicant: CA, INC.
    Inventor: CA, INC.
  • Publication number: 20150220424
    Abstract: A method to generate test double proxies for callee functions of a function under test may include generating an initial set of test double proxies with abstract test stubs for all callee functions called by the function under test. Each of the test double proxies in the initial set of test double proxies may correspond to a different one of the callee functions. The method may also include generating a first refined set of test double proxies that includes a first refined test stub instead of a first one of the abstract test stubs for a first test double proxy in the initial set of test double proxies in response to determining that refining the first one of the abstract test stubs improves a test coverage of the function under test.
    Type: Application
    Filed: January 31, 2014
    Publication date: August 6, 2015
    Applicant: FUJITSU LIMITED
    Inventor: Hiroaki YOSHIDA
  • Publication number: 20150220425
    Abstract: A method to generate a human-friendly test context in a test proxy for a function under test may include generating an initial test context of the function under test. The method may also include enhancing a current test context with a new context enhancement. The method may also include adding a hint to the current test context. The current test context may include or be derived from the initial test context.
    Type: Application
    Filed: January 31, 2014
    Publication date: August 6, 2015
    Applicant: FUJITSU LIMITED
    Inventor: Hiroaki YOSHIDA
  • Publication number: 20150220426
    Abstract: A method to perform performance tests on an application in a continuous deployment pipeline is provided herein. The method identifies code changes between two distinct builds in a performance test environment. The method obtains a baseline test result by executing a set of customized test scripts on a baseline build with a first code base. The method similarly tests the new build by executing the set of customized test scripts on the new build with a second code base to obtain a new test result. Performance values are determined by comparing the baseline test result and the new test result.
    Type: Application
    Filed: August 13, 2012
    Publication date: August 6, 2015
    Inventors: Adam Spektor, Inbar Shani, Amichai Nitsan
  • Publication number: 20150220427
    Abstract: A terminal device according to the present application includes an acceptance unit, a detection unit, and a storage unit. The acceptance unit accepts a specifying operation for specifying a piece of content related to a first application. The detection unit detects a predetermined executing operation after the acceptance unit accepts the specifying operation. Upon detection of the executing operation by the detection unit, the storage unit stores the piece of content specified by the specifying operation in a storage region used by a second application.
    Type: Application
    Filed: September 16, 2014
    Publication date: August 6, 2015
    Inventors: Koichi SAWADA, Ayaka HIRANO, Shinya KATO
  • Publication number: 20150220428
    Abstract: A lighting system according to various embodiments includes a lighting array having a plurality of luminaires and a plurality of sensors. The lighting system also includes a controller configured to operate in at least one from the group including (i) a pre-commissioning mode and (ii) a commissioning mode. The pre-commissioning mode matches one of the luminaires with a corresponding one of the sensors to create luminaire-sensor pairs and the commissioning mode determines a location of each of the luminaire-sensor pairs.
    Type: Application
    Filed: January 31, 2014
    Publication date: August 6, 2015
    Applicant: General Electric Company
    Inventors: Bulcsu SIMONYI, Gabor SCHMIDT, Levente KOVACS
  • Publication number: 20150220429
    Abstract: A method of distributing data in a distributed storage system includes receiving a file into non-transitory memory and dividing the received file into chunks. The chunks are data chunks and non-data chunks. The method also includes grouping one or more of the data chunks and one or more of the non-data chunks in a group. One or more chunks of the group are capable of being reconstructed from other chunks of the group. The method also includes distributing the chunks of the group to storage devices of the distributed storage system based on a hierarchy of the distributed storage system. The hierarchy includes maintenance domains having active and inactive states; each storage device is associated with a maintenance domain, and the chunks of a group are distributed across multiple maintenance domains to maintain the ability to reconstruct chunks of the group when a maintenance domain is in an inactive state.
    Type: Application
    Filed: January 31, 2014
    Publication date: August 6, 2015
    Applicant: Google Inc.
    Inventors: Robert Cypher, Sean Quinlan, Steven Robert Schirripa, Lidor Carmi, Christian Eric Schrock
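    A minimal Python sketch of the distribution step described above, assuming the group tolerates the loss of any single chunk, so no maintenance domain holds more than one chunk of the group; names are illustrative:
      def distribute_group(group, domains):
          # group: chunk ids (data and non-data chunks); domains: maintenance domains,
          # each a list of storage devices. One chunk per domain keeps the group
          # reconstructible while any one domain is inactive.
          if len(group) > len(domains):
              raise ValueError("need at least one maintenance domain per chunk")
          placement = {}
          for chunk, devices in zip(group, domains):
              placement[chunk] = devices[hash(chunk) % len(devices)]   # any device in the domain
          return placement

      print(distribute_group(["d0", "d1", "code0"],
                             [["rack1-a", "rack1-b"], ["rack2-a"], ["rack3-a"]]))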
  • Publication number: 20150220430
    Abstract: The present invention relates to a system and method of registering and allocating a granted memory based on a topology of a granted node in a distributed integrated memory system. There is provided a granted memory providing system capable of minimizing a memory allocation time, a service access time, and a response time. In the system, when the granted memory of the granted node is registered, a topology map is generated based on a connection structure of a host channel adapter, a processor, and a memory of the granted node. The granted memory is registered and allocated based on the generated topology map so that the granted memory is efficiently configured and a memory area is registered.
    Type: Application
    Filed: June 20, 2014
    Publication date: August 6, 2015
    Applicant: Electronics and Telecommunications Research Institute
    Inventor: Young Ho KIM
  • Publication number: 20150220431
    Abstract: Example embodiments for configuring a serial non-volatile memory device for an execute-in-place mode may comprise a non-volatile configuration register to store an execute-in-place mode value that may be read at least in part in response to power being applied to the memory device.
    Type: Application
    Filed: February 9, 2015
    Publication date: August 6, 2015
    Inventors: Paolo Rolandi, Sandra Lospalluti, Raffaele Bufano, Stefano Andreoli, Tommaso Zerilli
  • Publication number: 20150220432
    Abstract: A method and apparatus for managing at least one NV memory are provided. Each NV memory comprises a plurality of physical blocks. The method includes: managing the physical blocks of each non-volatile (NV) memory according to a block address mapping algorithm in a control module of a memory management apparatus; managing a plurality of pseudo blocks according to a block address translation rule in a management module of the memory management apparatus; and when a command is detected, determining a specific pseudo block address corresponding to a specific block logical address in the command according to the block address translation rule, then determining at least one group of specific physical block addresses corresponding to the specific pseudo block address according to the block address mapping algorithm, and then processing a specific physical block group corresponding to the group of specific physical block addresses according to the command.
    Type: Application
    Filed: April 13, 2015
    Publication date: August 6, 2015
    Inventor: Ping-Yi Hsu
  • Publication number: 20150220433
    Abstract: A method manages a flash memory for a computer system having flash chips divided into separately erasable physical memory blocks with a limited maximum erasure frequency. The memory blocks are divided into writable pages being subdivided into addressable subpages. The subpages are addressed by a computer via logical sector addresses being converted into physical subpage addresses. The flash memory has a first area containing single-level flash chips with a higher maximum erasure frequency, and a second area containing multi-level flash chips with a lower maximum erasure frequency. If write operations in the first area exceed an upper threshold for a filling level of written memory blocks, a written memory block having a low erasure counter is searched for in the first area, whose valid subpages are transferred into a memory block of the second area. The address allocations for the transferred subpages are updated.
    Type: Application
    Filed: April 29, 2014
    Publication date: August 6, 2015
    Applicant: HYPERSTONE GMBH
    Inventor: FRANZ SCHMIDBERGER
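    A minimal Python sketch of the migration trigger described above; the block representation, the mlc_write callable, and the threshold handling are assumptions:
      def maybe_migrate(slc_blocks, mlc_write, fill_threshold):
          # slc_blocks: list of dicts with 'erase_count' and 'valid_subpages'
          # (logical subpage address -> data) for the single-level first area.
          written = [b for b in slc_blocks if b["valid_subpages"]]
          if not slc_blocks or len(written) / len(slc_blocks) <= fill_threshold:
              return {}                                    # filling level still below the upper threshold
          victim = min(written, key=lambda b: b["erase_count"])
          remapping = {}
          for subpage_addr, data in victim["valid_subpages"].items():
              remapping[subpage_addr] = mlc_write(data)    # new physical address in the second area
          victim["valid_subpages"].clear()                 # first-area block can now be erased and reused
          return remapping                                 # updated address allocations for the moved subpages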
  • Publication number: 20150220434
    Abstract: A method for executing garbage collection in an electronic device is provided. The method includes executing garbage collection on a memory of the electronic device, determining a propriety of the garbage collection, and adjusting a control variable for determining an execution time-point of a next garbage collection based on the propriety of the garbage collection.
    Type: Application
    Filed: February 21, 2014
    Publication date: August 6, 2015
    Applicant: Samsung Electronics Co., Ltd.
    Inventor: Do-Kyou LEE
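    The application does not define how the "propriety" of a collection is judged; the Python sketch below assumes one plausible reading (a collection is proper if it freed a useful amount of memory at reasonable cost) purely to illustrate adjusting a control variable such as the interval before the next collection:
      def adjust_gc_interval(interval_s, freed_bytes, gc_cost_s,
                             min_useful_bytes=1 << 20, min_s=1.0, max_s=600.0):
          was_proper = freed_bytes >= min_useful_bytes and gc_cost_s < interval_s
          factor = 0.8 if was_proper else 1.5      # proper: collect a bit sooner; improper: back off
          return min(max_s, max(min_s, interval_s * factor))

      print(adjust_gc_interval(30.0, freed_bytes=4 << 20, gc_cost_s=0.2))   # 24.0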
  • Publication number: 20150220435
    Abstract: A high-availability storage system includes a first storage system and a second storage system. The first storage system includes a first Central Processing Unit (CPU), a first physically-addressed solid state disk (SSD) and a first non-volatile memory module that is coupled to the first CPU. Similarly, the second storage system includes a second CPU and a second SSD. Upon failure of one of the first or second CPUs, the storage system with the non-failing CPU continues to be operational, the storage system with the failed CPU is deemed inoperational, and the first and second SSDs remain accessible.
    Type: Application
    Filed: April 16, 2015
    Publication date: August 6, 2015
    Inventors: Mehdi Asnaashari, Siamack Nemazie, Anilkumar Mandapuram
  • Publication number: 20150220436
    Abstract: A system and method to implement a tag structure for a cache memory that includes a multi-way, set-associative translation lookaside buffer. The tag structure may store vectors in an L1 tag array to enable an L1 tag lookup that has fewer bits per entry and consumes less power. The vectors may identify entries in a translation lookaside buffer tag array. When a virtual memory address associated with a memory access instruction hits in the translation lookaside buffer, the translation lookaside buffer may generate a vector identifying the set and the way of the translation lookaside buffer entry that matched. This vector may then be compared to a group of vectors stored in a set of the L1 tag arrays to determine whether the virtual memory address hits in the L1 cache.
    Type: Application
    Filed: March 14, 2013
    Publication date: August 6, 2015
    Applicant: Intel Corporation
    Inventors: Niranjan Cooray, Steffen Kosinski, Rami May, Doron Gershon, Jaroslaw Topp, Varun Mohandru
  • Publication number: 20150220437
    Abstract: Technologies are generally described for a cache coherence directory in multi-processor architectures. In an example, a directory in a die may receive a request for a particular block. The directory may determine a block aging threshold relating to a likelihood that data blocks, including the particular data block, are stored in one or more caches in the die. The directory may further analyze a memory to identify a particular cache indicated as storing the particular data block and identify a number of cache misses for the particular cache. The directory may identify a time when an event occurred for the particular data block and determine whether to send the request for the particular data block to the particular cache based on the aging threshold, the time of the event, and the number of cache misses.
    Type: Application
    Filed: April 15, 2015
    Publication date: August 6, 2015
    Inventor: YAN SOLIHIN
  • Publication number: 20150220438
    Abstract: Examples described herein include a computer system, implemented on a node cluster including at least a first node and a second node. The computer system monitors data access requests received by the first node. Specifically, the computer system monitors data access requests that correspond with operations to be performed on a data volume stored on the second node. The system determines that a number of the data access requests received by the first node satisfies a first threshold amount and, upon making the determination, selectively provisions a cache to store a copy of the data volume on the first node based, at least in part, on a system load of the first node.
    Type: Application
    Filed: February 4, 2014
    Publication date: August 6, 2015
    Applicant: NetApp, Inc.
    Inventors: Mardiros Chakalian, Darrell Suggs, Robert Hyer, JR.
  • Publication number: 20150220439
    Abstract: This document relates to data storage techniques. One example can buffer write commands and cause the write commands to be committed to storage in flush epoch order. Another example can maintain a persistent log of write commands that are arranged in the persistent log in flush epoch order. Both examples may provide a prefix consistent state in the event of a crash.
    Type: Application
    Filed: March 28, 2014
    Publication date: August 6, 2015
    Applicant: MICROSOFT CORPORATION
    Inventors: James W. MICKENS, Amar PHANISHAYEE, Vijaychidambaram VELAYUDHAN PILLAI
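    A minimal Python sketch of buffering writes and committing them in flush-epoch order, as the abstract above describes; the class and method names are illustrative:
      class EpochBuffer:
          def __init__(self, commit_fn):
              self.commit_fn = commit_fn     # writes a (key, value) pair to stable storage
              self.epoch = 0
              self.pending = {0: []}

          def write(self, key, value):
              self.pending[self.epoch].append((key, value))   # buffered, not yet durable

          def flush(self):
              self.epoch += 1                                 # close the current flush epoch
              self.pending[self.epoch] = []

          def commit_ready_epochs(self):
              # Commit only closed epochs, whole and in order, so a crash leaves a
              # prefix-consistent state.
              for closed in sorted(e for e in self.pending if e < self.epoch):
                  for key, value in self.pending.pop(closed):
                      self.commit_fn(key, value)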
  • Publication number: 20150220440
    Abstract: A data processing system includes multiple processors 6, 8, 10, 12, each with a local cache memory 14, 16, 18, 20, that share a main memory 24 under control of a coherency controller 22. Store requests from a store requester which are to be serviced by data received from the main memory 24 trigger the coherency controller 22 to return exclusive rights to access the data to the store requester before the corresponding data is returned from the main memory 24. The store requester uses possession of the exclusive rights to access the data to permit further processing with an ordering constraint relative to the store request to proceed even though the store request has yet to be finished. The ordering constraint may be, for example, a fence instruction. The store requester in possession of the exclusive rights to access the data values ensures that the store request is finished and its results observed by any instruction as required by the ordering constraint it has released early.
    Type: Application
    Filed: January 31, 2014
    Publication date: August 6, 2015
    Applicant: THE REGENTS OF THE UNIVERSITY OF MICHIGAN
    Inventors: Shaizeen AGA, Abhayendra SINGH, Satish NARAYANASAMY
  • Publication number: 20150220441
    Abstract: Apparatus and methods provide associative mapping of the blocks of two or more memory arrays such that data, such as pages of data, from the good blocks of the two or more memory arrays can be read in an alternating manner for speed or can be read in parallel for providing data to relatively wide data channels. This obviates the need for processor intervention to access data and can increase the throughput of data by providing, where configured, the ability to alternate reading of data from two or more arrays. For example, while one array is loading data to a cache, the memory device can be providing data that has already been loaded to the cache.
    Type: Application
    Filed: February 6, 2015
    Publication date: August 6, 2015
    Inventor: David Eggleston
  • Publication number: 20150220442
    Abstract: Systems, methods, and software described herein facilitate a cache service that allocates shared memory in a data processing cluster based on quality of service. In one example, a method for operating a cache service includes identifying one or more jobs to be processed in a cluster environment. The method further provides determining a quality of service for the one or more jobs and allocating shared memory for the one or more jobs based on the quality of service.
    Type: Application
    Filed: September 11, 2014
    Publication date: August 6, 2015
    Inventors: Thomas A. Phelan, Michael J. Moretti, Gunaseelan Lakshminarayanan, Ramaswami Kishore
  • Publication number: 20150220443
    Abstract: Embodiments of the current invention permit a user to allocate cache memory to main memory more efficiently. The processor or a user allocates the cache memory and associates the cache memory to the main memory location, but suppresses or bypasses reading the main memory data into the cache memory. Some embodiments of the present invention permit the user to specify how many cache lines are allocated at a given time. Further, embodiments of the present invention may initialize the cache memory to a specified pattern. The cache memory may be zeroed or set to some desired pattern, such as all ones. Alternatively, a user may determine the initialization pattern through the processor.
    Type: Application
    Filed: April 15, 2015
    Publication date: August 6, 2015
    Inventors: Steven Gerard LeMire, Vuong Cao Nguyen
  • Publication number: 20150220444
    Abstract: A data storage apparatus includes a memory for data storage. The data storage apparatus further includes a data storing section, an access detecting section, and a data deleting section. The data storing section attaches storage-purpose information to data when storing the data in the memory. The storage-purpose information is setting information indicating a purpose for which the data is stored. The access detecting section attaches access information to the data stored in the memory upon the data being accessed when the data is used. The access information is setting information indicating a purpose for which the data is used. The data deleting section deletes the data from the memory at a specific timing when the storage-purpose information and the access information attached to the data match.
    Type: Application
    Filed: January 28, 2015
    Publication date: August 6, 2015
    Applicant: KYOCERA Document Solutions Inc.
    Inventor: Masakazu YAMAMOTO
  • Publication number: 20150220445
    Abstract: A transactional memory receives a command, where the command includes an address and a novel DAT (Do Address Translation) bit. If the DAT bit is set and if the transactional memory is enabled to do address translations and if the command is for an access (read or write) of a memory of the transactional memory, then the transactional memory performs an address translation operation on the address of the command. Parameters of the address translation are programmable and are set up before the command is received. In one configuration, certain bits of the incoming address are deleted, and other bits are shifted in bit position, and a base address is ORed in, and a padding bit is added, thereby generating the translated address. The resulting translated address is then used to access the memory of the transactional memory to carry out the command.
    Type: Application
    Filed: February 4, 2014
    Publication date: August 6, 2015
    Applicant: Netronome Systems, Inc.
    Inventors: Gavin J. Stark, Rolf Neugebauer
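    A minimal Python sketch of the configurable translation described in the abstract above (delete selected bits, shift the rest, OR in a base address, add a padding bit); the parameter names and example values are assumptions:
      def translate(addr, drop_mask, shift, base, pad_bit_pos):
          kept = addr & ~drop_mask                 # delete the selected incoming bits
          shifted = kept >> shift                  # shift the surviving bits into position
          translated = shifted | base              # OR in the programmed base address
          return translated | (1 << pad_bit_pos)   # add the padding bit

      # e.g. drop the low two bits, shift right by two, relocate into a 0x4000 window
      print(hex(translate(0x1237, drop_mask=0x3, shift=2, base=0x4000, pad_bit_pos=12)))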
  • Publication number: 20150220446
    Abstract: A transactional memory receives a command, where the command includes an address and a novel GAA (Generate Alert On Action) bit. If the GAA bit is set and if the transactional memory is enabled to generate alerts and if the command is a write into a memory of the transactional memory, then the transactional memory outputs an alert in accordance with preconfigured parameters. For example, the alert may be preconfigured to carry a value or key usable by the recipient of the alert to determine the reason for the alert. The alert may be set up to include the address of the memory location in the transactional memory that was written. The transactional memory may be set up to send the alert to a predetermined destination. The outputting of the alert may be a writing of information into a predetermined destination, or may be an outputting of an interrupt signal.
    Type: Application
    Filed: February 4, 2014
    Publication date: August 6, 2015
    Applicant: Netronome Systems, Inc.
    Inventors: Gavin J. Stark, Rolf Neugebauer
  • Publication number: 20150220447
    Abstract: An information processing apparatus includes a free page storage unit and a page allocating unit. The free page storage unit divides a memory region in a memory into pages of a plurality of different page sizes and manages the divided pages, and stores management information about an initialization state corresponding to an unused memory region in the memory. The page allocating unit selects a free page of a page size according to a requested region size or a requested page size from the free page storage unit when an allocation of the unused memory region is requested, and performs an initializing process on a memory region on which the initializing process has not been performed in a memory region corresponding to the free page using management information about the selected free page.
    Type: Application
    Filed: January 6, 2015
    Publication date: August 6, 2015
    Inventor: Takayuki OKAMOTO
  • Publication number: 20150220448
    Abstract: An address compression method, an address decompression method, a compressor, and a decompressor, which can improve an address compression ratio. The address compression method includes after a compressor receives multiple operation request messages that are sent by a first processor, determining, according to an address feature formed by address information carried in all operation request messages that have a same stream number, a compression algorithm corresponding to the operation request messages that have a same stream number; and then compressing, according to the determined compression algorithm, addresses carried in the operation request messages that have a same stream number. The present invention is applicable to the computer field.
    Type: Application
    Filed: April 15, 2015
    Publication date: August 6, 2015
    Inventors: Mingyang Chen, Mingyu Chen, Zehan Cui, Yuan Ruan
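    A minimal Python sketch of choosing a compression algorithm from the address feature of one stream; delta encoding for constant-stride streams is an assumed example algorithm, not necessarily one the application uses:
      def choose_algorithm(addresses):
          # Constant-stride address streams compress well with delta encoding;
          # otherwise fall back to sending the raw addresses.
          strides = {b - a for a, b in zip(addresses, addresses[1:])}
          return "delta" if len(strides) == 1 else "raw"

      def compress_stream(addresses):
          if choose_algorithm(addresses) == "delta":
              return {"mode": "delta", "base": addresses[0],
                      "stride": addresses[1] - addresses[0], "count": len(addresses)}
          return {"mode": "raw", "addresses": list(addresses)}

      print(compress_stream([0x1000, 0x1040, 0x1080, 0x10C0]))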
  • Publication number: 20150220449
    Abstract: A Network Interface Device (NID) of a web hosting server implements multiple virtual NIDs. A virtual NID is configured by configuration information in an appropriate one of a set of smaller blocks in a high-speed memory on the NID. There is a smaller block for each virtual NID. A virtual machine on the host can configure its virtual NID by writing configuration information into a larger block in PCIe address space. Circuitry on the NID detects that the PCIe write is into address space occupied by the larger blocks. If the write is into this space, then address translation circuitry converts the PCIe address into a smaller address that maps to the appropriate one of the smaller blocks associated with the virtual NID to be configured. If the PCIe write is detected not to be an access of a larger block, then the NID does not perform the address translation.
    Type: Application
    Filed: February 4, 2014
    Publication date: August 6, 2015
    Applicant: Netronome Systems, Inc.
    Inventors: Gavin J. Stark, Rolf Neugebauer
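    A minimal Python sketch of mapping a PCIe write into a per-virtual-NID larger block onto the matching smaller configuration block; sizes, base addresses, and names are assumptions:
      def translate_config_write(pcie_addr, window_base, num_vnids,
                                 large_block_size, small_block_size, small_base):
          offset = pcie_addr - window_base
          if offset < 0 or offset >= num_vnids * large_block_size:
              return pcie_addr                     # not an access to a larger block: no translation
          vnid = offset // large_block_size        # which virtual NID is being configured
          within = offset % large_block_size
          if within >= small_block_size:
              raise ValueError("offset not backed by the smaller block")
          return small_base + vnid * small_block_size + within

      print(hex(translate_config_write(0x10004, window_base=0x0, num_vnids=64,
                                       large_block_size=0x10000,
                                       small_block_size=0x100, small_base=0x8000)))   # 0x8104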