Patent Applications Published on February 11, 2016
-
Publication number: 20160041903
Abstract: A storage device made up of multiple storage media is configured such that one such medium serves as a cache for data stored on another of the media. The device includes a controller configured to manage the cache by consolidating information concerning obsolete data stored in the cache with information concerning data no longer desired to be stored in the cache, and to erase segments of the cache containing one or more of the blocks of obsolete data and the blocks of data that are no longer desired to be stored in the cache to produce reclaimed segments of the cache.
Type: Application
Filed: October 27, 2015
Publication date: February 11, 2016
Inventor: Umesh Maheshwari
-
Publication number: 20160041904
Abstract: A technique uses file system indirection to manage solid state devices (SSDs). Based on relocation of data on the SSDs from a first SSD storage block to a second SSD storage block, a flash translation layer (FTL) driver may update a per-volume indirection file to reference the second SSD storage block and no longer reference the first SSD storage block. Based on a mismatch between the per-volume indirection file and a buffer tree, the buffer tree is updated to reference the second SSD storage block. Alternatively, the FTL driver may create and insert an entry into a mapping table, wherein the entry may reference the first SSD storage block and also reference the second SSD storage block. The buffer tree may then be updated to reference the second SSD storage block based on the new entry, and the new entry may then be deleted after the buffer tree is updated.
Type: Application
Filed: July 29, 2015
Publication date: February 11, 2016
Inventor: Indranil Bhattacharya
-
Publication number: 20160041905
Abstract: Methods, devices, and non-transitory process-readable storage media for compacting data within cache lines of a cache. An aspect method may include identifying, by a processor of the computing device, a base address (e.g., a physical or virtual cache address) for a first data segment, identifying a data size (e.g., based on a compression ratio) for the first data segment, obtaining a base offset based on the identified data size and the base address of the first data segment, and calculating an offset address by offsetting the base address with the obtained base offset, wherein the calculated offset address is associated with a second data segment. In some aspects, the method may include identifying a parity value for the first data segment based on the base address and obtaining the base offset by performing a lookup on a stored table using the identified data size and identified parity value.
Type: Application
Filed: August 5, 2014
Publication date: February 11, 2016
Inventors: Andrew Edmund Turner, George Patsilaras, Bohuslav Rychlik
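The offset-address calculation described in this abstract can be illustrated with a minimal Python sketch. The table contents, the parity derivation, and the segment sizes below are all hypothetical; the abstract does not specify them, only that the base offset comes from a table lookup keyed by data size and a parity value derived from the base address.

```python
# Hypothetical lookup table: (data_size_in_bytes, parity_bit) -> base offset.
OFFSET_TABLE = {
    (32, 0): 32,    # half-compressed segment, even parity
    (32, 1): -32,   # half-compressed segment, odd parity
    (16, 0): 16,    # quarter-compressed segment, even parity
    (16, 1): -16,
}

def parity_of(base_address: int) -> int:
    """Derive a one-bit parity value from the base address
    (hypothetically: the low bit of the cache-line index)."""
    return (base_address >> 5) & 1

def offset_address(base_address: int, data_size: int) -> int:
    """Locate the second data segment by offsetting the first segment's
    base address with the base offset obtained from the table lookup."""
    base_offset = OFFSET_TABLE[(data_size, parity_of(base_address))]
    return base_address + base_offset
```

Under this scheme two compressed segments share one physical cache line: the parity bit decides whether the companion segment sits above or below the base address.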
-
Publication number: 20160041906
Abstract: Techniques are provided for sharding objects across different compute nodes. In one embodiment, a database server instance generates, for an object, a plurality of in-memory chunks including a first in-memory chunk and a second in-memory chunk, where each in-memory chunk includes a different portion of the object. The database server instance assigns each in-memory chunk to one of a plurality of compute nodes, assigning the first in-memory chunk to a first local memory of a first compute node and the second in-memory chunk to a second local memory of a second compute node. The database server instance stores an in-memory map that indicates a memory location for each in-memory chunk. The in-memory map indicates that the first in-memory chunk is located in the first local memory of the first compute node and that the second in-memory chunk is located in the second local memory of the second compute node.
Type: Application
Filed: October 23, 2015
Publication date: February 11, 2016
Inventors: Niloy Mukherjee, Amit Ganesh, Vineet Marwah
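The chunk-assignment and in-memory-map idea can be sketched in a few lines of Python. The round-robin placement policy and the node/map representations are assumptions for illustration; the abstract only requires that each chunk land in some node's local memory and that the map record where.

```python
def shard_object(obj: bytes, chunk_size: int, nodes: list) -> dict:
    """Split an object into chunks, place each chunk in a node's local
    memory (round-robin, an assumed policy), and return an in-memory map
    recording which node holds each chunk."""
    chunks = [obj[i:i + chunk_size] for i in range(0, len(obj), chunk_size)]
    in_memory_map = {}
    for idx, chunk in enumerate(chunks):
        node = nodes[idx % len(nodes)]        # round-robin assignment
        node["local_memory"][idx] = chunk     # store the chunk locally
        in_memory_map[idx] = node["name"]     # record its location
    return in_memory_map
```

A lookup then consults the map first, so a query can be routed straight to the node that holds the needed portion of the object.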
-
Publication number: 20160041907
Abstract: Systems and methods for managing records stored in a storage cache are provided. A cache index is created and maintained to track where records are stored in buckets in the storage cache. The cache index maps the memory locations of the cached records to the buckets in the cache storage and can be quickly traversed by a metadata manager to determine whether a requested record can be retrieved from the cache storage. Bucket addresses stored in the cache index include a generation number of the bucket that is used to determine whether the cached record is stale. The generation number allows a bucket manager to evict buckets in the cache without having to update the bucket addresses stored in the cache index. Further, the bucket manager is tiered, thus allowing efficient use of differing filter functions and even different types of memories as may be desired in a given implementation.
Type: Application
Filed: July 27, 2015
Publication date: February 11, 2016
Inventors: Woon Ho Jung, Nakul Dhotre, Deepak Jain, Anthony Pang
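The generation-number trick in this abstract is compact enough to sketch. The point is that eviction never walks the index: bumping a bucket's generation number implicitly invalidates every index entry that still carries the old number. The class names and the hash-based bucket choice below are illustrative assumptions.

```python
class Bucket:
    def __init__(self):
        self.generation = 0
        self.records = {}

class GenerationCache:
    """Cache whose index entries embed the bucket generation at insert
    time; a mismatch on lookup means the bucket was evicted since."""
    def __init__(self, num_buckets: int):
        self.buckets = [Bucket() for _ in range(num_buckets)]
        self.index = {}  # key -> (bucket_number, generation)

    def put(self, key, value):
        b = hash(key) % len(self.buckets)
        self.buckets[b].records[key] = value
        self.index[key] = (b, self.buckets[b].generation)

    def get(self, key):
        entry = self.index.get(key)
        if entry is None:
            return None
        b, gen = entry
        if self.buckets[b].generation != gen:
            return None  # stale: the bucket was evicted after this insert
        return self.buckets[b].records.get(key)

    def evict_bucket(self, b: int):
        """Evict a whole bucket without touching the index."""
        self.buckets[b].records.clear()
        self.buckets[b].generation += 1
```

Eviction is O(1) in the size of the index, at the cost of leaving dead index entries behind to be detected lazily on lookup.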
-
Publication number: 20160041908
Abstract: A method for maintaining the coherency of a store coalescing cache and a load cache is disclosed. As a part of the method, responsive to a write-back of an entry from a level one store coalescing cache to a level two cache, the entry is written into the level two cache and into the level one load cache. The writing of the entry into the level two cache and into the level one load cache is executed at the speed of access of the level two cache.
Type: Application
Filed: October 23, 2015
Publication date: February 11, 2016
Inventors: Karthikeyan AVUDAIYAPPAN, Mohammad ABDALLAH
-
Publication number: 20160041909
Abstract: Apparatus, computer readable medium, integrated circuit, and method of moving a plurality of data items to a first cache or a second cache are presented. The method includes receiving an indication that the first cache requested the plurality of data items. The method includes storing information indicating that the first cache requested the plurality of data items. The information may include an address for each of the plurality of data items. The method includes determining based at least on the stored information to move the plurality of data items to the second cache. The method includes moving the plurality of data items to the second cache. The method may include determining a time interval between receiving the indication that the first cache requested the plurality of data items and moving the plurality of data items to the second cache. A scratch pad memory is disclosed.
Type: Application
Filed: August 5, 2014
Publication date: February 11, 2016
Applicant: ADVANCED MICRO DEVICES, INC.
Inventors: JunLi Gu, Bradford M. Beckmann, Yuan Xie
-
Publication number: 20160041910
Abstract: A system and method for management and processing of resource requests at cache server computing devices is provided. Cache server computing devices segment content into an initialization fragment for storage in memory and one or more remaining fragments for storage in a media having higher latency than the memory. Upon receipt of a request for the content, a cache server computing device transmits the initialization fragment from the memory, retrieves the one or more remaining fragments, and transmits the one or more remaining fragments without retaining the one or more remaining fragments in the memory for subsequent processing.
Type: Application
Filed: October 19, 2015
Publication date: February 11, 2016
Inventors: David R. Richardson, Christopher L. Scofield
-
Publication number: 20160041911
Abstract: In one embodiment, one or more first computing devices receive updated values for user data associated with a plurality of users; and for each of the user data for which an updated value has been received, determine one or more second systems that each have subscribed to be notified when the value of the user datum is updated and each have a pre-established relationship with the user associated with the user datum; and push notifications to the second systems indicating that the value of the user datum has been updated without providing the updated value for the user datum to the second systems.
Type: Application
Filed: October 20, 2015
Publication date: February 11, 2016
Inventors: Wei Zhu, Ray C. He, Luke Jonathan Shepard
-
Publication number: 20160041912
Abstract: The disclosed invention enables the operation of an MIMD type, an SIMD type, or coexistence thereof in a multiprocessor system including a plurality of CPUs and reduces power consumption for instruction fetch by CPUs operating in the SIMD type. A plurality of CPUs and a plurality of memories corresponding thereto are provided. When the CPUs fetch instruction codes of different addresses from the corresponding memories, the CPUs operate independently (operation of the MIMD type). On the other hand, when the CPUs issue requests for fetching an instruction code of a same address from the corresponding memories, that is, operate in the SIMD type, the instruction code read from one of the memories by one access is parallelly supplied to the CPUs.
Type: Application
Filed: July 20, 2015
Publication date: February 11, 2016
Inventor: Masami Nakajima
-
Publication number: 20160041913
Abstract: Systems and methods for supporting a plurality of load and store accesses of a cache are disclosed. Responsive to a request of a plurality of requests to access a block of a plurality of blocks of a load cache, the block of the load cache and a logically and physically paired block of a store coalescing cache are accessed in parallel. The data that is accessed from the block of the load cache is overwritten by the data that is accessed from the block of the store coalescing cache by merging on a per byte basis. Access is provided to the merged data.
Type: Application
Filed: October 23, 2015
Publication date: February 11, 2016
Inventors: Karthikeyan AVUDAIYAPPAN, Mohammad ABDALLAH
-
Publication number: 20160041914
Abstract: Embodiments include methods, systems, and computer readable media directed to cache bypassing based on prefetch streams. A first cache receives a memory access request. The request references data in the memory. The data comprises non-reuse data. After a determination of a miss in the first cache, the first cache forwards the memory access request to a cache control logic. The detection of the non-reuse data instructs the cache control logic to allocate a block only in a second cache and bypass allocating a block in the first cache. The first cache is closer to the memory than the second cache.
Type: Application
Filed: August 5, 2014
Publication date: February 11, 2016
Applicant: Advanced Micro Devices, Inc.
Inventors: Yasuko Eckert, Gabriel Loh
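The bypass decision in this abstract reduces to a small policy function, sketched below in Python with dicts standing in for cache arrays and main memory. The function name and the always-allocate-in-second-cache behavior for reusable data are assumptions; the abstract only fixes what happens to non-reuse data.

```python
def handle_miss(addr, is_non_reuse, first_cache, second_cache, memory):
    """On a first-cache miss, fetch the block from memory. Non-reuse data
    (e.g., a streaming prefetch) is allocated only in the second cache,
    bypassing the first cache, which is closer to memory."""
    data = memory[addr]
    second_cache[addr] = data          # always serve the requester's cache
    if not is_non_reuse:
        first_cache[addr] = data       # reusable data also fills the first cache
    return data
```

Keeping single-use streaming data out of the first cache preserves its capacity for blocks that will actually be re-referenced.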
-
Publication number: 20160041915
Abstract: Systems and methods that substantially or fully remove a commanding server from a data path (e.g., as part of a data migration, disaster recovery, and/or the like) to improve data movement performance and make additional bandwidth available for other system processes and the like. Broadly, a network interface card (e.g., host bus adapter (HBA)) of a tape drive may be configured in both a target mode to allow the tape drive to be a recipient of control commands from a server to request and/or otherwise obtain data from one or more source tape drives, and in an initiator mode to allow the tape drive to send commands to the one or more tape drives specified in the commands received from the server to request/read data from and/or write data to such one or more tape drives.
Type: Application
Filed: October 23, 2015
Publication date: February 11, 2016
Inventors: David G. Hostetter, Steven Sanders
-
Publication number: 20160041916
Abstract: Systems and methods for managing records stored in a storage cache are provided. A cache index is created and maintained to track where records are stored in buckets in the storage cache. The cache index maps the memory locations of the cached records to the buckets in the cache storage and can be quickly traversed by a metadata manager to determine whether a requested record can be retrieved from the cache storage. Bucket addresses stored in the cache index include a generation number of the bucket that is used to determine whether the cached record is stale. The generation number allows a bucket manager to evict buckets in the cache without having to update the bucket addresses stored in the cache index. Further, the cache index can be expanded to accommodate very small records, such as those generated by legacy systems.
Type: Application
Filed: August 8, 2014
Publication date: February 11, 2016
Inventors: Murali Natarajan Vilayannur, Woon Ho Jung, Kaustubh Sambhaji Patil, Satyam B. Vaghani, Michal Ostrowski, Poojan Kumar
-
Publication number: 20160041917
Abstract: A system and method for mirroring a volatile memory to a CPIO device of a computer system is disclosed. According to one embodiment, a command buffer and a data buffer are provided to store data and a command for mirroring the data. The command specifies metadata associated with the data. The data is mirrored to a non-volatile memory of the CPIO device based on the command.
Type: Application
Filed: August 5, 2014
Publication date: February 11, 2016
Inventors: Bart Trojanowski, Maher Amer, Riccardo Badalone, Michael L. Takefman
-
Publication number: 20160041918
Abstract: The present invention relates to a data storage system. The present invention provides a key value-based data storage system and an operation method thereof, the data storage system comprising: computing nodes, each of which includes a substrate module, a central processing unit, a memory arranged in the substrate module, and a NAND flash storage for cache storage; and a communication interface for interconnecting the computing nodes, wherein the computing nodes support key value-based data processing.
Type: Application
Filed: March 7, 2014
Publication date: February 11, 2016
Inventors: Bokdeuk Jeong, Sungmin Lee
-
Publication number: 20160041919
Abstract: Various embodiments of methods and systems for Selective Sub-Page Decompression (“SSPD”) seek to reduce unwanted latency in making requested data available to a processing component. To do so, SSPD embodiments may decompress a memory page in sub-page segments. Certain SSPD embodiments may decompress the sub-pages in parallel, using a plurality of available decompression engines. Certain other SSPD embodiments may decompress the sub-pages in a serial manner, using one or more available decompression engines and starting with a target sub-page that contains a requested chunk of data. In these ways, SSPD embodiments may make a requested chunk of data available to a processing component more quickly than other systems and methods known in the art.
Type: Application
Filed: August 8, 2014
Publication date: February 11, 2016
Inventors: PHILIP MICHAEL HAWKES, ANAND PALANIGOUNDER
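The serial SSPD variant can be modeled in Python using `zlib`, with each sub-page compressed independently so any one of them can be decompressed first. The four-sub-pages-per-page layout and the wrap-around decompression order are illustrative assumptions.

```python
import zlib

SUBPAGES_PER_PAGE = 4  # assumed layout; the abstract does not fix a count

def compress_page(page: bytes) -> list:
    """Compress a page as independently decompressible sub-page segments."""
    n = len(page) // SUBPAGES_PER_PAGE
    return [zlib.compress(page[i * n:(i + 1) * n])
            for i in range(SUBPAGES_PER_PAGE)]

def decompress_for_request(subpages: list, target: int) -> list:
    """Serial SSPD: decompress starting with the target sub-page that
    contains the requested chunk, then wrap around through the rest, so
    the requested data is available before the whole page is done."""
    order = list(range(target, SUBPAGES_PER_PAGE)) + list(range(target))
    out = [None] * SUBPAGES_PER_PAGE
    for i in order:
        out[i] = zlib.decompress(subpages[i])
        # After the first iteration, out[target] already holds the
        # requested chunk and could be returned to the processing component.
    return out
```

Independent sub-page compression trades a slightly worse compression ratio for the ability to start decompression anywhere in the page.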
-
Publication number: 20160041920
Abstract: Embodiments for managing read-only memory. A system includes a memory device including a real memory and a tracking mechanism configured to track relationships between multiple virtual memory addresses and real memory. The system further includes a processor configured to perform the below method and/or execute the below computer program product. One method includes mapping a first virtual memory address to a real memory in a memory device and mapping a second virtual memory address to the real memory.
Type: Application
Filed: October 19, 2015
Publication date: February 11, 2016
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Brian D. HATFIELD, Wenjeng KO, Lei LIU
-
Publication number: 20160041921
Abstract: Embodiments of the invention are generally directed to systems, methods, and apparatuses for linear to physical address translation with support for page attributes. In some embodiments, a system receives an instruction to translate a memory pointer to a physical memory address for a memory location. The system may return the physical memory address and one or more page attributes. Other embodiments are described and claimed.
Type: Application
Filed: October 19, 2015
Publication date: February 11, 2016
Applicant: INTEL CORPORATION
Inventors: Ben-Zion Friedman, Jacob Doweck, Eliezer Weissmann, James B Crossland, Ohad Falik
-
Publication number: 20160041922
Abstract: A processor includes a translation-lookaside buffer (TLB) and a mapping module. The TLB includes a plurality of entries, wherein each entry of the plurality of entries is configured to hold an address translation and a valid bit vector, wherein each bit of the valid bit vector indicates, for a respective address translation context, that the address translation is valid if set and invalid if clear. The TLB also includes an invalidation bit vector having bits corresponding to the bits of the valid bit vector of the plurality of entries, wherein a set bit of the invalidation bit vector indicates to simultaneously clear the corresponding bit of the valid bit vector of each entry of the plurality of entries. The mapping module generates the invalidation bit vector.
Type: Application
Filed: November 26, 2014
Publication date: February 11, 2016
Inventors: TERRY PARKS, COLIN EDDY, VISWANATH MOHAN, JOHN D. BUNDA
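The bit-vector invalidation described here is a one-line mask operation per entry; in hardware it happens across all entries simultaneously, which the Python sketch below models with a loop. Entry fields and the lookup interface are illustrative assumptions.

```python
class TlbEntry:
    def __init__(self, vpn: int, ppn: int, valid_bits: int):
        self.vpn = vpn                # virtual page number
        self.ppn = ppn                # physical page number
        self.valid_bits = valid_bits  # one valid bit per translation context

class Tlb:
    def __init__(self):
        self.entries = []

    def invalidate(self, invalidation_bits: int):
        """Clear, in every entry, each valid bit whose corresponding
        invalidation bit is set; hardware does this in one cycle."""
        mask = ~invalidation_bits
        for e in self.entries:
            e.valid_bits &= mask

    def lookup(self, vpn: int, context: int):
        """Hit only if the entry's valid bit for this context is set."""
        for e in self.entries:
            if e.vpn == vpn and (e.valid_bits >> context) & 1:
                return e.ppn
        return None
```

One invalidation thus retires a whole address translation context (e.g., on a context switch) without walking or flushing the entire TLB.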
-
Publication number: 20160041923
Abstract: An inter-manycore communications method includes applying, by a service manager process, to a microkernel operating system for shared memory, and mapping shared memory, which is allocated by the microkernel operating system, to virtual address space of the service manager process; receiving and recording a service identifier of a system service process and a second shared memory address that corresponds to the service identifier; searching, according to a service identifier carried by a system service request, for the second shared memory address that corresponds to the service identifier carried by the system service request; and sending the service identifier carried by the system service request, a first shared memory address, and the second shared memory address that corresponds to the service identifier to a user process. According to the method, a problem that communication between a user process and a system service process needs multiple context switches can be solved.
Type: Application
Filed: October 22, 2015
Publication date: February 11, 2016
Inventor: Xiaoke Wu
-
Publication number: 20160041924
Abstract: A mechanism is provided for direct memory access in a storage device. Responsive to the buffered flash memory module receiving from a memory bus of a processor a memory command specifying a write operation, the mechanism initializes a first memory buffer in the buffered flash memory module. The mechanism writes to the first memory buffer based on the memory command. Responsive to the buffer being full, the mechanism deterministically maps addresses from the first memory buffer to a plurality of solid state drives in the buffered flash memory module using a modular mask based on a stripe size. The mechanism builds a plurality of input/output commands to persist contents of the first memory buffer to the plurality of solid state drives according to the deterministic mapping and writes the contents of the first memory buffer to the plurality of solid state drives in the buffered flash memory module according to the plurality of input/output commands.
Type: Application
Filed: August 6, 2014
Publication date: February 11, 2016
Inventors: James S. Fields, JR., Andrew D. Walls
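The deterministic stripe mapping mentioned in this abstract can be sketched as arithmetic over the buffer offset. The power-of-two stripe-size assumption (which lets the modulus become a bit mask) and the exact drive-offset layout are illustrative choices, not details from the abstract.

```python
def map_to_drive(buffer_offset: int, stripe_size: int, num_drives: int):
    """Deterministically map a buffer offset to (drive, offset on drive)
    by striping: consecutive stripes rotate across the drives, and the
    modular mask over the stripe number selects the drive."""
    assert stripe_size & (stripe_size - 1) == 0, "assume power-of-two stripe"
    stripe_no = buffer_offset // stripe_size
    drive = stripe_no % num_drives                   # modular mask
    offset_in_stripe = buffer_offset & (stripe_size - 1)
    # Each drive stores every num_drives-th stripe back to back.
    drive_offset = (stripe_no // num_drives) * stripe_size + offset_in_stripe
    return drive, drive_offset
```

Because the mapping is pure arithmetic, the I/O commands that persist the buffer can be generated without any lookup table.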
-
Publication number: 20160041925
Abstract: The disclosure of the present invention presents a method and system for efficiently maintaining an object cache to a maximum size by number of entries, whilst providing a means of automatically removing cache entries when the cache attempts to grow beyond its maximum size. The method for choosing which entries should be removed provides for a balance between least recently used and least frequently used policies. A flush operation is invoked only when the cache size grows beyond the maximum size and removes a fixed percentage of entries in one pass.
Type: Application
Filed: August 5, 2014
Publication date: February 11, 2016
Inventor: Andrew J. Coleman
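The bounded cache with a single-pass percentage flush can be sketched directly. The scoring formula that balances recency against frequency is a hypothetical equal-weight choice; the abstract states only that the removal policy balances LRU and LFU.

```python
class BoundedCache:
    """Object cache capped by entry count. When an insert would exceed the
    cap, one flush pass removes a fixed fraction of entries, picked by a
    score combining recency and frequency (assumed equal weights)."""
    def __init__(self, max_entries: int = 100, flush_fraction: float = 0.25):
        self.max_entries = max_entries
        self.flush_fraction = flush_fraction
        self.data = {}   # key -> (value, last_used_tick, use_count)
        self.clock = 0   # logical clock for recency

    def _score(self, key) -> int:
        _, last_used, use_count = self.data[key]
        return last_used + use_count  # higher = more worth keeping

    def get(self, key):
        if key not in self.data:
            return None
        value, _, count = self.data[key]
        self.clock += 1
        self.data[key] = (value, self.clock, count + 1)
        return value

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.max_entries:
            self._flush()
        self.clock += 1
        self.data[key] = (value, self.clock, 1)

    def _flush(self):
        """Remove the lowest-scoring fixed percentage of entries in one pass."""
        n = max(1, int(len(self.data) * self.flush_fraction))
        for key in sorted(self.data, key=self._score)[:n]:
            del self.data[key]
```

Flushing a batch rather than evicting one entry per insert amortizes the eviction cost over many subsequent inserts.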
-
Publication number: 20160041926
Abstract: The disclosure of the present invention presents a method and system for efficiently maintaining an object cache to a maximum size by number of entries, whilst providing a means of automatically removing cache entries when the cache attempts to grow beyond its maximum size. The method for choosing which entries should be removed provides for a balance between least recently used and least frequently used policies. A flush operation is invoked only when the cache size grows beyond the maximum size and removes a fixed percentage of entries in one pass.
Type: Application
Filed: April 2, 2015
Publication date: February 11, 2016
Inventor: Andrew J. Coleman
-
Publication number: 20160041927
Abstract: Systems and methods for managing records stored in a storage cache are provided. A cache index is created and maintained to track where records are stored in buckets in the storage cache. The cache index maps the memory locations of the cached records to the buckets in the cache storage and can be quickly traversed by a metadata manager to determine whether a requested record can be retrieved from the cache storage. Bucket addresses stored in the cache index include a generation number of the bucket that is used to determine whether the cached record is stale. The generation number allows a bucket manager to evict buckets in the cache without having to update the bucket addresses stored in the cache index. In an alternative embodiment, non-contiguous portions of computing system working memory are used to cache data instead of a dedicated storage cache.
Type: Application
Filed: January 29, 2015
Publication date: February 11, 2016
Inventors: Woon Ho Jung, Nakul Dhotre
-
Publication number: 20160041928
Abstract: A system and method for addressing split modes of persistent memory are described herein. The system includes a non-volatile memory comprising regions of memory, each region comprising a range of memory address spaces. The system also includes a memory controller (MC) to control access to the non-volatile memory. The system further includes a device to track a mode of each region of memory and to define the mode of each region of memory. The mode is a functional use model.
Type: Application
Filed: March 28, 2013
Publication date: February 11, 2016
Applicant: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
Inventors: Gregg B. Lesartre, Blaine D. Gaither, Dale C. Morris, Carey Huscroft, Russ W. Herrell
-
Publication number: 20160041929
Abstract: Key management for and automount of encrypted files, including recovering a master vault key file from an encoded vault key file, storing the vault key file within a previously mounted crypto key management virtual drive so as to provide a secure scratch pad area for temporary storage of the master vault key file. An open and mount module may then invoke a file mounting procedure by providing the vault key file name and a path corresponding to the crypto key management virtual drive to a virtual drive mounting module. The method of passing the vault key file to the file mounting utility module may comprise passing command line arguments equal to a pathname and filename to the file mounting utility.
Type: Application
Filed: October 20, 2015
Publication date: February 11, 2016
Inventor: Fred Federspiel
-
Publication number: 20160041930
Abstract: A method for supporting a plurality of load accesses is disclosed. A plurality of requests to access a data cache is accessed, and in response, a tag memory is accessed that maintains a plurality of copies of tags for each entry in the data cache. Tags are identified that correspond to individual requests. The data cache is accessed based on the tags that correspond to the individual requests. A plurality of requests to access the same block of the plurality of blocks causes an access arbitration that is executed in the same clock cycle as is the access of the tag memory.
Type: Application
Filed: October 23, 2015
Publication date: February 11, 2016
Inventors: Karthikeyan Avudaiyappan, Mohammad Abdallah
-
Publication number: 20160041931
Abstract: An information handling system includes a tablet with a first processor, and a dock with a second processor that determines that the tablet is coupled to the dock, boots the dock, receives a request for the tablet to be uncoupled from the dock, creates a migration image with state information of the dock that identifies a first process running on the dock, and sends the migration image to the tablet. The first processor receives the request, boots the tablet, receives the migration image from the second processor, loads the state information to the tablet, and launches the first process.
Type: Application
Filed: August 11, 2014
Publication date: February 11, 2016
Inventors: Chitrak Gupta, Sushma Basavarajaiah
-
Publication number: 20160041932
Abstract: A portable storage device can include a memory and one or more connectors for connecting to other devices. During use, a user can connect a remote device such as a smartphone, tablet, or the like to the portable storage device in order to transfer data from the remote device to the memory of the portable storage device. The portable storage device can include a rechargeable power source configured to provide the necessary electrical current for establishing communication between the remote device and the memory of the portable storage device. The portable storage device can further include one or more photovoltaic devices for generating electrical energy and recharging the rechargeable power source of the portable storage device.
Type: Application
Filed: August 11, 2014
Publication date: February 11, 2016
Inventor: Toshihide Hokari
-
Publication number: 20160041933
Abstract: A method of implementing a multi-threaded device driver for a computer system is disclosed. According to one embodiment, a polling device driver is partitioned into a plurality of driver threads for controlling a device of a computer system. The device has a first device state of an unscouted state and a scouted state, and a second device state of an inactive state and an active state. A driver thread of the plurality of driver threads determines that the first device state of the device is in the unscouted state, and changes the first device state of the device to the scouted state. The driver thread further determines that the second device state of the device is in the inactive state and changes the second device state of the device to the active state. The driver thread executes an operation on the device during a pre-determined time slot configured for the driver thread.
Type: Application
Filed: August 5, 2014
Publication date: February 11, 2016
Inventors: Bart Trojanowski, Michael L. Takefman, Maher Amer
-
Publication number: 20160041934
Abstract: A bonding, communication and control system that, via multiple digital and analog inputs and outputs provided by an on-the-go ready microcontroller, is capable of integrating the function of components required for a device to perform its tasks. Each invention unit has the minimum amount of built-in hardware to support its features. First, the units can bond, using multiple modes of identification recognition technology. Second, invention units can interconnect and exchange data via encrypted communication. Third, plug and play hardware can be added. Hardware can be customized, but also enables the fourth core feature: the invention can pair to a smart device, making possible full utilization of all of its hardware, software and existing infrastructure, including its ability to send data to and from a remote location. Thereby, both real time monitoring and anticipation of environment, and remote control of parameters of invention-associated devices, are possible.
Type: Application
Filed: August 5, 2015
Publication date: February 11, 2016
Inventor: STAN C. PETROV
-
Publication number: 20160041935
Abstract: An electronic device includes a communication unit configured to be connected to another communication unit via a first number of transmission paths, where the first number is greater than or equal to two, and a control unit configured to determine communication quality in each of the first number of the transmission paths at a time of initiating communication with the other communication unit and to select a second number of transmission paths, where the second number is less than the first number, in descending order of the communication quality from among the first number of the transmission paths, thereby causing the communication unit to perform communication by using the second number of the transmission paths, which have better communication quality.
Type: Application
Filed: July 21, 2015
Publication date: February 11, 2016
Applicant: FUJITSU LIMITED
Inventor: Tsukasa KINJO
-
Publication number: 20160041936
Abstract: A packet transmission method includes packaging a plurality of data in the form of a payload; storing information on whether the plurality of data are packaged in a header, the payload or a CRC area including a transmission error check code of the plurality of data; combining the header, the payload, and the CRC area with each other to generate a transaction layer packet; and outputting a packet including the transaction layer packet.
Type: Application
Filed: April 28, 2015
Publication date: February 11, 2016
Inventors: Eunji LEE, Junghyo WOO
-
Publication number: 20160041937
Abstract: An example drive carrier in accordance with an aspect of the present disclosure may be mounted within a drive bay of a computing device. A network interface card may be coupled to the drive carrier, and the network interface card may be communicatively coupled to the computing device.
Type: Application
Filed: March 20, 2013
Publication date: February 11, 2016
Inventors: Randol Dale Aldridge, Isaac Lagnado
-
Publication number: 20160041938
Abstract: An information processing apparatus includes: an interface unit that communicates with another device through a plurality of physical links; a setting unit that determines a value of a setting parameter for setting a signal transmission characteristic for each of the plurality of physical links by performing a negotiation with the other device and that outputs a plurality of determined values of the setting parameter, each of the plurality of determined values corresponding to one of the plurality of physical links; and a judgment unit that judges whether each of the plurality of determined values is correct or not by judging whether or not a difference between a maximum value and a minimum value among the plurality of determined values falls within a predetermined range.
Type: Application
Filed: August 5, 2015
Publication date: February 11, 2016
Applicant: FUJITSU LIMITED
Inventors: TAKANORI ISHII, Kazuko Sakurai, Yohei Nuno, Tatsuya Shinozaki
-
Publication number: 20160041939
Abstract: A storage device with SATA express interface is provided. The storage device comprises a connector, a first controller unit and a second controller unit. The connector comprises a first data connecting terminal, a second data connecting terminal and a control terminal. Two detection pins of the connector are defined from pins of the control terminal. The first data connecting terminal defined by the first controller unit is used for transmitting data of a first storage, and the second data connecting terminal defined by the second controller unit is used for transmitting data of a second storage. Thereby, it is possible to determine whether the first data connecting terminal or the second data connecting terminal transmits data conforming to a first transfer protocol specification or a second transfer protocol specification by detecting the two detection pins of the connector.
Type: Application
Filed: July 23, 2015
Publication date: February 11, 2016
Inventors: Chin-Chung KUO, Chia-Wei LI
-
Publication number: 20160041940. Abstract: A device can be configured to provide isolation between conductive circuit paths and to selectively connect one of the conductive circuit paths to a shared interface. Each conductive circuit path can include driver circuitry designed to transmit signals according to a particular protocol and a corresponding signal speed. The shared interface can be, in one instance, a connector designed for connection to other devices. The other devices can be configured to communicate over the shared interface using one or more of the particular protocols provided using the different circuit paths. Type: Application. Filed: October 22, 2015. Publication date: February 11, 2016. Inventors: James Spehar, Jingsong Zhou, Madan Vemula
-
Publication number: 20160041941. Abstract: Disclosed herein are two-wire communication systems and applications thereof. In some embodiments, a slave node transceiver for low latency communication may include upstream transceiver circuitry to receive a first signal transmitted over a two-wire bus from an upstream device and to provide a second signal over the two-wire bus to the upstream device; downstream transceiver circuitry to provide a third signal downstream over the two-wire bus toward a downstream device and to receive a fourth signal over the two-wire bus from the downstream device; and clock circuitry to generate a clock signal at the slave node transceiver based on a preamble of a synchronization control frame in the first signal, wherein timing of the receipt and provision of signals over the two-wire bus by the node transceiver is based on the clock signal. Type: Application. Filed: October 16, 2015. Publication date: February 11, 2016. Applicant: ANALOG DEVICES, INC. Inventors: MARTIN KESSLER, MIGUEL CHAVEZ, LEWIS F. LAHR, WILLIAM HOOPER, ROBERT ADAMS, PETER SEALEY
-
Publication number: 20160041942. Abstract: Methods and systems are provided for reducing write I/O latency using an asynchronous Fibre Channel exchange. A FCP target device may send one or more FC write control information units (IUs) to a FCP initiator device within a first FC exchange to request a transfer of data associated with a FCP write command IU previously sent to the FCP target device by the FCP initiator device within a second FC exchange. The FC write control IUs may be sent without the FCP target device first receiving from the FCP initiator device a sequence initiative of the second FC exchange, and/or may be sent within the first FC exchange concurrently with the FCP initiator device sending one or more FCP data IU sequences within the second FC exchange to the FCP target device. Thus, a full-duplex communication environment may be set up between the FCP initiator device and FCP target device. Type: Application. Filed: October 21, 2015. Publication date: February 11, 2016. Inventors: Parav Kanaiyalal Pandit, James W. Smart
-
Publication number: 20160041943. Abstract: Memory circuit configuration schemes on multi-drop buses are disclosed. In aspects disclosed herein, an on-die mapping logic is provided in a memory circuit. A memory controller communicates with the on-die mapping logic over a multi-drop bus. The on-die mapping logic is configured to receive a predetermined on-die termination (ODT) value from the memory controller prior to being accessed. In response to receiving the predetermined ODT value, the memory circuit sets on-die termination to the predetermined ODT value and instructs an on-die reference signal generator to generate a predetermined reference signal associated with the predetermined ODT value. The predetermined reference signal provides an optimal reference voltage for implementing a desired equalization setting at the memory circuit, thus aiding in preserving signal integrity. Such improved signal integrity reduces errors in accessing the memory circuit, thus leading to improved efficiency and data throughput on the multi-drop bus. Type: Application. Filed: August 11, 2014. Publication date: February 11, 2016. Inventor: Timothy Mowry Hollis
-
Publication number: 20160041944. Abstract: A graph display apparatus includes a display unit and a processor. The display unit includes a display screen. The processor performs the following processes: determining an expression as a graph display object according to positions on the display screen, the positions being designated by a user; generating an operation receiver for changing a numerical value of a coefficient included in the determined expression, according to an operation of the user; displaying the graph of the determined expression and the generated operation receiver on the display screen; and changing the graph displayed on the display screen, according to the operation of the user on the displayed operation receiver. Type: Application. Filed: July 23, 2015. Publication date: February 11, 2016. Applicant: CASIO COMPUTER CO., LTD. Inventor: Kosuke KAROJI
-
Publication number: 20160041945. Abstract: A processor includes a core with locally-gated circuitry, a decode unit, a local power gate (LPG) coupled to the locally-gated circuitry, and an execution unit. The decode unit includes logic to decode a store broadcast instruction of a specified width. The LPG includes logic to selectively provide power to the locally-gated circuitry, activate power to a first portion of the locally-gated circuitry for execution of full cache-line memory operations, and deactivate power to a second portion of the locally-gated circuitry. The execution unit includes logic to execute, by the first portion of the locally-gated circuitry, the store broadcast instruction to store data of the specified width to storage of the processor. Type: Application. Filed: August 6, 2014. Publication date: February 11, 2016. Inventors: Michael Mishaeli, Stanislav Shwartsman, Gal Ofir, Yulia Kurolap
-
Publication number: 20160041946. Abstract: A method and computer system are provided for performing a comparison computation, e.g. for use in a check procedure for a reciprocal square root operation. The comparison computation compares a multiplication of three values with a predetermined value. The computer system performs the multiplication using multiplier logic which is configured to perform multiply operations in which two values are multiplied together. A first and second of the three values are multiplied to determine a first intermediate result, w1. The digits of w1 are separated into two portions, w1,1 and w1,2. The third of the three values is multiplied with w1,2 and the result is added into a multiplication of the third of the three values with w1,1 to thereby determine the result of multiplying the three values together. In this way the comparison is performed with high accuracy, whilst keeping the area and power consumption of the multiplier logic low. Type: Application. Filed: August 5, 2014. Publication date: February 11, 2016. Inventor: Leonard Rarick
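The three-value multiplication described above can be sketched with integer arithmetic; the binary split position (`split_bits`) and function name are illustrative choices, not from the filing, and the hardware version would operate on fixed-width partial products rather than Python integers:

```python
def mul3_split(a, b, c, split_bits=32):
    """Multiply three values using only two-input multiplies.

    w1 = a * b is split into an upper-digit portion (w1,1) and a
    lower-digit portion (w1,2); c is multiplied with each portion
    separately and the partial products are recombined with a
    shift and an add, as in the abstract.
    """
    w1 = a * b                             # first two-input multiply
    w1_hi = w1 >> split_bits               # upper digits of w1 (w1,1)
    w1_lo = w1 & ((1 << split_bits) - 1)   # lower digits of w1 (w1,2)
    # Two more two-input multiplies, recombined exactly.
    return (c * w1_hi << split_bits) + c * w1_lo

# The recombined result equals the direct triple product.
print(mul3_split(123456789, 987654321, 555555555)
      == 123456789 * 987654321 * 555555555)  # True
```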
-
Publication number: 20160041947. Abstract: A method and computer system are provided for implementing a square root operation using an iterative converging approximation technique. The method includes fewer computations than conventional methods, and only includes computations which are simple to implement in hardware on a computer system, such as multiplication, addition, subtraction and shifting. Therefore, the methods described herein are adapted specifically for being performed on a computer system, e.g. in hardware, and allow the computer system to perform a square root operation with low latency and with low power consumption. Type: Application. Filed: August 5, 2014. Publication date: February 11, 2016. Inventor: Leonard Rarick
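The abstract does not disclose the specific iteration, but the general class of techniques it describes can be illustrated with the classic Newton-Raphson reciprocal-square-root update, which needs only multiplies, a subtract, and a halving (a shift in fixed-point hardware); the initial-estimate rule and iteration count here are illustrative assumptions:

```python
def sqrt_converging(a, iterations=30):
    """Approximate sqrt(a) by an iterative converging method.

    Sketch only: uses the Newton-Raphson reciprocal-square-root
    update x <- x * (3 - a*x*x) / 2, then recovers
    sqrt(a) = a * (1/sqrt(a)). The patent's actual iteration may
    differ; this just shows a square root built from multiply,
    subtract, and halve.
    """
    if a == 0.0:
        return 0.0
    x = 1.0 / a if a > 1.0 else 1.0   # crude initial estimate of 1/sqrt(a)
    for _ in range(iterations):
        x = x * (3.0 - a * x * x) * 0.5
    return a * x
```

A fixed iteration count keeps the hardware control path simple; a production design would instead pick the count (or a convergence test) from the operand width.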
-
Publication number: 20160041948. Abstract: An information handling system includes a processing system including a first sensor and a second sensor, and a management system including an anomaly table, a learned model table entry associated with the processing system and including a learned model and a first sensor data history, and a prediction module to implement a prediction algorithm. The management system is configured to: receive first sensor data and second sensor data, determine an estimate of a first value of the first sensor data using a second value of the second sensor data, determine a residual of the first value by a comparison of the estimate to the first value, determine a significance of the residual, where a significant value of the residual indicates a predicted anomaly, determine that an anomaly table entry has a known anomaly class for the predicted anomaly, and perform a remediation plan to resolve the predicted anomaly. Type: Application. Filed: August 11, 2014. Publication date: February 11, 2016. Inventors: Nikhil M. Vichare, Yan Ning
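The estimate/residual/significance chain above can be sketched as follows; the linear model form, the threshold test for significance, and the fan-speed/temperature example are assumptions for illustration, since the abstract leaves the learned model and significance measure open:

```python
def predict_anomaly(first_value, second_value, model, threshold):
    """Flag a predicted anomaly from two correlated sensors.

    model is a hypothetical learned linear relation (slope, intercept)
    fitted from the first sensor's data history. The residual is the
    gap between the observed first-sensor value and its estimate from
    the second sensor; a residual whose magnitude exceeds the
    threshold is treated as significant, i.e. a predicted anomaly.
    """
    slope, intercept = model
    estimate = slope * second_value + intercept  # estimate of sensor 1
    residual = first_value - estimate
    significant = abs(residual) > threshold
    return residual, significant

# Fan speed (sensor 1) normally tracks temperature (sensor 2).
model = (50.0, 200.0)   # rpm per degree C, base rpm (illustrative)
print(predict_anomaly(2700.0, 50.0, model, threshold=150.0))  # (0.0, False)
print(predict_anomaly(1200.0, 50.0, model, threshold=150.0))  # (-1500.0, True)
```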
-
Publication number: 20160041949. Abstract: A method for dynamically highlighting repetitive text in electronic documents includes obtaining one or more user preferences related to a user reading an electronic document. The method further includes determining whether the electronic document contains one or more repetitive text associations, wherein a repetitive text association is data that provides one or more indications of repetitive text segments interspersed within a document. In response to determining that the electronic document contains one or more repetitive text associations, the method further includes identifying one or more repetitive text segments in the electronic document corresponding to the one or more repetitive text associations and determining a time duration expended by the user reading an instance of the identified one or more repetitive text segments within the electronic document. Type: Application. Filed: August 6, 2014. Publication date: February 11, 2016. Inventors: Olympia Gluck, Itzhack Goldberg, Gilad Sharaby, Neil Sondhi
-
Publication number: 20160041950. Abstract: Frame-shaped anchored elements are described. In one or more embodiments, anchored text elements are identified for primary text that is located in a non-rectangular frame (e.g., a circular frame, a rounded rectangle frame, and so on) and that references the anchored text elements. The anchored text elements may be footnotes or endnotes that are identified for primary text located in a non-rectangular text box, for example. Once identified, the anchored text elements may be fit within and at a bottom of the non-rectangular frame. The anchored text elements are considered to fit "within" the non-rectangular frame insofar as the anchored text elements do not extend outside the boundaries of the non-rectangular frame. Type: Application. Filed: August 5, 2014. Publication date: February 11, 2016. Inventors: Ashish Duggal, Douglas A. Waterfall, Mohit Yadav
-
Publication number: 20160041951. Abstract: A corpus generation device according to an embodiment includes a web page acquisition unit, a reference word acquisition unit, an attachment unit and an output unit. The web page acquisition unit acquires a web page including description sentence data regarding a presentation target. The reference word acquisition unit acquires a reference word that is an attribute value regarding the presentation target from the web page. The attachment unit extracts a broader word belonging to a layer above the reference word acquired by the reference word acquisition unit from a storage unit that stores hierarchical relationship information indicating a hierarchical relationship between attribute values, and attaches an attribute tag corresponding to the reference word to the broader word included in the description sentence data. The output unit outputs, as corpus data, the description sentence data to which the attribute tag is attached by the attachment unit. Type: Application. Filed: September 30, 2013. Publication date: February 11, 2016. Applicant: RAKUTEN INC. Inventor: Keiji SHINZATO
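The attachment step above (look up the broader word one layer above the reference word, then tag its occurrences in the description sentence) can be sketched as follows; the dictionary-based hierarchy, the tag syntax, and the wine example are hypothetical stand-ins for the patent's storage unit and tag format:

```python
def tag_broader_words(description, reference_word, attribute_tag, hierarchy):
    """Attach an attribute tag to the broader word in a description.

    hierarchy is a hypothetical mapping from a reference word (an
    attribute value found on the web page) to the broader word one
    layer above it. Every occurrence of that broader word in the
    description sentence is wrapped in the attribute tag.
    """
    broader = hierarchy.get(reference_word)
    if broader is None or broader not in description:
        return description  # nothing to tag
    tagged = "<{0}>{1}</{0}>".format(attribute_tag, broader)
    return description.replace(broader, tagged)

hierarchy = {"cabernet sauvignon": "red wine"}   # illustrative hierarchy
corpus = tag_broader_words(
    "A full-bodied red wine from Chile.",
    "cabernet sauvignon", "grape_variety", hierarchy)
print(corpus)  # A full-bodied <grape_variety>red wine</grape_variety> from Chile.
```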
-
Publication number: 20160041952. Abstract: A computer-implemented system and method for processing messages using native data serialization/deserialization, without any transformation, in a service-oriented pipeline architecture is disclosed. In an example embodiment, the method includes serializing or deserializing the request/response message directly into the format (a specific on-the-wire data format or a Java object) that the recipient expects (a service implementation, a service consumer, or the framework), without first converting it into an intermediate format. This provides an efficient mechanism for the same service implementation to be accessed by exchanging messages using different data formats. Type: Application. Filed: October 21, 2015. Publication date: February 11, 2016. Inventors: Sastry K. Malladi, Ronald Francis Murphy, Weian Deng