Patent Applications Published on April 30, 2020
-
Publication number: 20200133868
Abstract: A processor reduces bus bandwidth consumption by employing a shared load scheme, whereby each shared load retrieves data for multiple compute units (CUs) of a processor. Each CU in a specified group monitors a bus for load accesses directed to a cache shared by the multiple CUs. In response to identifying a load access on the bus, a CU determines if the load access is a shared load access for its share group. In response to identifying a shared load access for its share group, the CU allocates an entry of a private cache associated with the CU for data responsive to the shared load access. The CU then monitors the bus for the data targeted by the shared load. In response to identifying the targeted data on the bus, the CU stores the data at the allocated entry of the private cache.
Type: Application
Filed: October 31, 2018
Publication date: April 30, 2020
Inventor: Maxim V. KAZAKOV
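A minimal sketch of the snooping behavior this abstract describes: each CU watches the bus, pre-allocates a private-cache entry when it sees a shared load for its own share group, and fills that entry from the single bus response. All class, method, and field names here are hypothetical illustrations, not taken from the patent.

```python
class ComputeUnit:
    """Toy model of one CU that snoops a shared bus for shared loads."""

    def __init__(self, cu_id, share_group):
        self.cu_id = cu_id
        self.share_group = share_group
        self.private_cache = {}   # address -> data
        self.pending = set()      # addresses with an allocated entry awaiting data

    def observe_request(self, access):
        # Allocate a private-cache slot only for shared loads aimed at our group.
        if access["kind"] == "shared_load" and access["group"] == self.share_group:
            self.pending.add(access["addr"])

    def observe_data(self, addr, data):
        # Fill the pre-allocated entry when the data appears on the bus.
        if addr in self.pending:
            self.private_cache[addr] = data
            self.pending.remove(addr)


def broadcast(cus, access, data):
    # One shared load on the bus services every CU in the share group.
    for cu in cus:
        cu.observe_request(access)
    for cu in cus:
        cu.observe_data(access["addr"], data)
```

A single bus transaction thus populates the private caches of every CU in the group, while CUs outside the group ignore it.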
-
Publication number: 20200133869
Abstract: Techniques are provided for performing data storage. Such techniques may involve writing metadata to a first cache of a first processor, the metadata indicating allocation of a storage resource to user data. Such techniques may further involve determining an address range of the metadata in the first cache. Such techniques may further involve copying only data stored in the address range in the first cache to a second cache of a second processor. Accordingly, the data transmission volume between the two processors is reduced, which helps to improve the overall performance of a storage system.
Type: Application
Filed: September 19, 2019
Publication date: April 30, 2020
Inventors: Yousheng Liu, Geng Han, Jian Gao, Ruiyong Jia, Jianbin Kang
-
Publication number: 20200133870
Abstract: Techniques are provided for performing cache management. Such techniques involve: obtaining a first cache page of the cache to be flushed, the first cache page being associated with a target storage block in a storage device; determining from the cache a set of target cache pages to be flushed, each of the set of target cache pages being associated with the target storage block; and writing data in the first cache page and data in each of the set of target cache pages into the target storage block simultaneously.
Type: Application
Filed: September 27, 2019
Publication date: April 30, 2020
Inventors: Ming Zhang, Shuo Lv
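The coalescing idea above can be sketched in a few lines: gather every dirty page mapping to the same storage block as the page being flushed, and write them to that block in one combined operation. The function and variable names are illustrative assumptions, not from the patent.

```python
def coalesced_flush(first_page, dirty_pages, page_to_block, storage):
    """Flush first_page plus all dirty pages sharing its target block."""
    block = page_to_block[first_page]
    group = [p for p in dirty_pages if page_to_block[p] == block]
    if first_page not in group:
        group.append(first_page)
    # One combined write to the target block instead of one write per page.
    storage.setdefault(block, []).extend(sorted(group))
    return group
```

This turns N per-page writes to the same block into a single write, which is the stated benefit of flushing the grouped pages "simultaneously".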
-
Publication number: 20200133871
Abstract: Techniques are provided for performing data writing. Such techniques involve: in response to receiving a first write request, searching a cache for a target address associated with the first write request; in response to a miss of the target address in the cache, determining a page usage rate in the cache; and in response to determining that the page usage rate exceeds an upper threshold, performing the first write request with a first available page in the cache. The first available page is reclaimed, independent of a refresh cycle of the cache, in response to completing the performing of the first write request.
Type: Application
Filed: October 18, 2019
Publication date: April 30, 2020
Inventors: Ruiyong Jia, Shaoqin Gong, Chun Ma
-
Publication number: 20200133872
Abstract: A computer-implemented method, according to one embodiment, includes: identifying block addresses which are associated with a given object, and combining the block addresses into a first set in response to determining that at least one token is currently issued on one or more of the identified block addresses. A first portion of the block addresses is transitioned to a second set, where the first portion includes ones of the block addresses determined as having a token currently issued thereon. Moreover, a second portion of the block addresses is divided into equal chunks, where the second portion includes the block addresses remaining in the first set. The chunks in the first set are allocated across two or more parallelization units. Furthermore, the block addresses in the second set are divided into equal chunks, and the chunks in the second set are allocated to at least one dedicated parallelization unit.
Type: Application
Filed: January 2, 2020
Publication date: April 30, 2020
Inventors: Amey Gokhale, Ranjith R. Nair, Sandeep R. Patil, Sasikanth Eda
-
Publication number: 20200133873
Abstract: A data processing system includes multiple processing units all having access to a shared memory. A processing unit includes a processor core that executes memory access instructions including a load-type instruction. Execution of the load-type instruction generates a corresponding request that specifies a target address. The processing unit further includes a read-claim state machine that, responsive to receipt of the request, protects the load target address against access by any conflicting memory access request during a protection interval following servicing of the request.
Type: Application
Filed: October 26, 2018
Publication date: April 30, 2020
Inventors: DEREK E. WILLIAMS, GUY L. GUTHRIE
-
Publication number: 20200133874
Abstract: Disclosed in some examples are memory devices which feature customizable Single Level Cell (SLC) and Multiple Level Cell (MLC) configurations. The configuration (e.g., the size and position) of the SLC cache may have an impact on power consumption, speed, and other performance of the memory device. An operating system of an electronic device in which the memory device is installed may wish to achieve different performance of the device based upon certain conditions detectable by the operating system. In this way, the performance of the memory device can be customized by the operating system through adjustments of the performance characteristics of the SLC cache.
Type: Application
Filed: December 31, 2019
Publication date: April 30, 2020
Inventors: Carla L. Christensen, Jianmin Huang, Sebastien Andre Jean, Kulachet Tanpairoj
-
Publication number: 20200133875
Abstract: In response to receiving a read request for target data, an external address of the target data is obtained from the read request; the external address is an address unmapped to the storage system. Hit information of the target data in a cache of the storage system is determined based on the external address, and based on the hit information, either the external address or an internal address is selected for providing the target data. The internal address is determined based on the external address and a mapping relationship. This shortens the data access path, accelerates responses to data access requests, and allows the cache to prefetch data more efficiently.
Type: Application
Filed: September 24, 2019
Publication date: April 30, 2020
Inventors: Ruiyong Jia, Jibing Dong, Baote Zhuo, Chun Ma, Jianbin Kang
-
Publication number: 20200133876
Abstract: The present disclosure relates to a disaggregated computing architecture comprising: a first compute node (302) comprising an interconnect interface (310); an accelerator node (304) comprising a physical device (402); and an interconnection network (308) linking the first compute node (302) and the accelerator node (304), wherein: the first compute node (302) executes a host operating system (410) and instantiates a first virtual machine (VM) executing a guest device driver (406) for driving the physical device; one or more input registers of the physical device are accessible via a first uniform physical address range (upa_a_devctl) of the interconnection network (308); and the interconnect interface (310) of the first compute node (302) is configured to map a host physical address range (hpa_c_devctl) of the host operating system (410) to the first uniform physical address range (upa_a_devctl).
Type: Application
Filed: October 28, 2018
Publication date: April 30, 2020
Inventors: Maciej BIELSKI, Alvise RIGO, Michele PAOLINO, Salvatore Daniele RAHO
-
Publication number: 20200133877
Abstract: A memory access system may include a first memory address translator, a second memory address translator and a mapping entry invalidator. The first memory address translator translates a first virtual address in a first protocol of a memory access request to a second virtual address in a second protocol and tracks memory access request completions. The second memory address translator is to translate the second virtual address to a physical address of a memory. The mapping entry invalidator requests invalidation of a first mapping entry of the first memory address translator, and requests invalidation of a corresponding second mapping entry of the second memory address translator following invalidation of the first mapping entry and based upon the tracked memory access request completions.
Type: Application
Filed: October 30, 2018
Publication date: April 30, 2020
Inventors: Shawn K. Walker, Christopher Shawn Kroeger
-
Publication number: 20200133878
Abstract: A processor supports secure memory access in a virtualized computing environment by employing requestor identifiers at bus devices (such as a graphics processing unit) to identify the virtual machine associated with each memory access request. The virtualized computing environment uses the requestor identifiers to control access to different regions of system memory, ensuring that each VM accesses only those regions of memory that the VM is allowed to access. The virtualized computing environment thereby supports efficient memory access by the bus devices while ensuring that the different regions of memory are protected from unauthorized access.
Type: Application
Filed: October 31, 2018
Publication date: April 30, 2020
Inventors: Anthony ASARO, Jeffrey G. CHENG, Anirudh R. ACHARYA
-
Publication number: 20200133879
Abstract: According to one embodiment, when a read request received from a host includes a first identifier indicative of a first region, a memory system obtains a logical address from the received read request, obtains a physical address corresponding to the obtained logical address from a logical-to-physical address translation table which manages mapping between logical addresses and physical addresses of the first region, and reads data from the first region, based on the obtained physical address. When the received read request includes a second identifier indicative of a second region, the memory system obtains physical address information from the read request, and reads data from the second region, based on the obtained physical address information.
Type: Application
Filed: December 23, 2019
Publication date: April 30, 2020
Inventors: Hideki Yoshida, Shinichi Kanno
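The two read paths described above can be sketched as a small dispatcher: region-1 reads are translated through a logical-to-physical (L2P) table, while region-2 reads carry their physical address directly. The field names (`region`, `lba`, `phys`) are illustrative assumptions, not from the patent.

```python
def handle_read(request, l2p_table, region1, region2):
    """Dispatch a read based on the region identifier in the request.
    Field names ('region', 'lba', 'phys') are hypothetical."""
    if request["region"] == 1:
        # Region 1: translate the logical address through the L2P table.
        phys = l2p_table[request["lba"]]
        return region1[phys]
    # Region 2: the request itself carries physical address information.
    return region2[request["phys"]]
```

The design choice is that region 2 skips translation entirely, letting a host that already knows physical placement bypass the L2P lookup.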
-
Publication number: 20200133880
Abstract: A processing system selectively enables and disables a result lookaside buffer (RLB) based on a hit rate tracked by a counter, thereby reducing power consumption for lookups at the result lookaside buffer during periods of low hit rates and improving the overall hit rate for the result lookaside buffer. A controller increments the counter in the event of a hit at the RLB and decrements the counter in the event of a miss at the RLB. If the value of the counter falls below a threshold value, the processing system temporarily disables the RLB for a programmable period of time. After the period of time, the processing system re-enables the RLB and resets the counter to an initial value.
Type: Application
Filed: October 31, 2018
Publication date: April 30, 2020
Inventors: Pramod V. ARGADE, Daniel Nikolai PERONI
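The counter-gated hysteresis policy in this abstract is easy to model. The sketch below follows the described steps (increment on hit, decrement on miss, disable below a threshold, re-enable and reset after a period); the parameter values and the "period measured in lookups" simplification are assumptions for illustration.

```python
class RLBController:
    """Toy model of counter-based enable/disable for a result lookaside buffer."""

    def __init__(self, initial=2, threshold=0, disable_period=3):
        self.initial = initial
        self.threshold = threshold
        self.disable_period = disable_period
        self.counter = initial
        self.disabled_for = 0   # remaining lookups to skip while disabled

    def lookup(self, hit):
        # While disabled, skip the RLB entirely (saving lookup power).
        if self.disabled_for > 0:
            self.disabled_for -= 1
            if self.disabled_for == 0:
                self.counter = self.initial   # re-enable with a fresh count
            return "bypassed"
        # Enabled: track the hit rate with a saturating-style counter.
        self.counter += 1 if hit else -1
        if self.counter < self.threshold:
            self.disabled_for = self.disable_period
            return "disabled"
        return "hit" if hit else "miss"
```

Measuring the disable period in lookups rather than wall-clock time keeps the model self-contained; hardware would use a programmable timer instead.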
-
METHODS AND SYSTEMS FOR OPTIMIZED TRANSLATION LOOKASIDE BUFFER (TLB) LOOKUPS FOR VARIABLE PAGE SIZES
Publication number: 20200133881
Abstract: A computer system includes a translation lookaside buffer (TLB) and a processor. The TLB comprises a first TLB array and a second TLB array, and stores entries comprising virtual address information and corresponding real address information. The processor is configured to receive a first virtual address for translation, and to concurrently determine whether the TLB stores a physical address associated with the first virtual address based on a first portion and a second portion of the first virtual address. The first portion is associated with a first page size and the second portion is associated with a second page size (different from the first page size). The first portion is used to perform a lookup in one of the first TLB array and the second TLB array, and the second portion is used to perform a lookup in the other of the two arrays.
Type: Application
Filed: October 29, 2018
Publication date: April 30, 2020
Inventors: David Campbell, Dwain A. Hicks
-
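A simplified model of the dual-array, dual-page-size lookup: probe one array with the small-page portion of the virtual address and the other with the large-page portion, and take whichever hits. The 4 KiB / 2 MiB page sizes (shifts of 12 and 21 bits) are illustrative choices, not taken from the patent.

```python
def tlb_lookup(va, small_array, large_array, small_shift=12, large_shift=21):
    """Translate va using two TLB arrays probed with different VA portions.
    small_array / large_array map page tags to aligned physical frame bases."""
    small_tag = va >> small_shift   # portion for the small page size
    large_tag = va >> large_shift   # portion for the large page size
    if small_tag in small_array:
        return small_array[small_tag] | (va & ((1 << small_shift) - 1))
    if large_tag in large_array:
        return large_array[large_tag] | (va & ((1 << large_shift) - 1))
    return None   # TLB miss: fall back to a page-table walk
```

In hardware the two probes happen in the same cycle; the sequential `if`s here only model the outcome, not the timing.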
Publication number: 20200133882
Abstract: A memory system includes: a memory device storing host data provided from a host; and a memory controller managing and transferring the host data between the host and the memory device, wherein the memory controller comprises: a write buffer temporarily storing the host data to be transferred to the memory device; a buffer monitoring device checking a usage amount of the write buffer during a predetermined period; a buffer usage comparing device generating a flush control signal based on a usage amount comparison result by comparing the usage amount checked during a current period corresponding to the predetermined period with the usage amount checked during a previous period corresponding to the predetermined period; and a first flush device transferring the host data temporarily stored in the write buffer to the memory device in response to the flush control signal.
Type: Application
Filed: May 29, 2019
Publication date: April 30, 2020
Inventor: Min-O Song
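The period-over-period comparison driving the flush signal can be sketched in a few lines. The specific rule used here (flush when usage did not drop versus the prior period) is one plausible reading of the comparison; the patent does not fix the exact condition.

```python
class WriteBufferMonitor:
    """Sample write-buffer usage once per period and derive a flush signal
    from the comparison with the previous period's usage (illustrative rule)."""

    def __init__(self):
        self.prev_usage = None

    def end_of_period(self, usage):
        # Flush when usage grew or held steady across consecutive periods.
        flush = self.prev_usage is not None and usage >= self.prev_usage
        self.prev_usage = usage
        return flush
```

The first period never flushes because there is no prior measurement to compare against.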
-
Publication number: 20200133883
Abstract: Asynchronous file tracking may include a first process that adds files to a cache and that generates different instances of a tracking file to track the files as they are entered into the cache. A second process, executing on the device, asynchronously accesses one or more instances of the tracking file at a different rate than the first process generates the tracking file instances. The second process may update a record of cached files based on a set of entries from each of the different instances of the tracking file accessed by the second process. Each set of entries may identify a different set of files that are cached by the device. The second process may then purge one or more cached files that satisfy eviction criteria while the first process continues to asynchronously add files to the cache and create new instances to track the newly cached files.
Type: Application
Filed: October 29, 2018
Publication date: April 30, 2020
Applicant: Verizon Digital Media Services Inc.
Inventors: Harkeerat Singh Bedi, Donnevan Scott Yeager, Derek Shiell, Hayes Kim
-
Publication number: 20200133884
Abstract: An apparatus is described. The apparatus includes a memory controller to interface with a memory side cache and an NVRAM system memory. The memory controller has logic circuitry to favor items cached in the memory side cache that are expected to be written to above items cached in the memory side cache that are expected to only be read from.
Type: Application
Filed: December 19, 2019
Publication date: April 30, 2020
Inventors: Zeshan A. CHISHTI, Somnath PAUL, Charles AUGUSTINE, Muhammad M. KHELLAH
-
Publication number: 20200133885
Abstract: Presented herein are methods and systems for adjusting code files to apply memory protection for dynamic memory regions supporting run-time dynamic allocation of memory blocks. The code file(s), comprising a plurality of routines, are created for execution by one or more processors using the dynamic memory. Adjusting the code file(s) comprises analyzing the code file(s) to identify exploitation-vulnerable routine(s) and adding a memory integrity code segment configured to detect, upon execution completion of each vulnerable routine, a write operation that extends beyond the memory space of one or more of a subset of the most recently allocated blocks in the dynamic memory into the memory space of an adjacent block, using marker(s) inserted in the dynamic memory at the boundary(s) of each of the subset's blocks. At runtime, in case the write operation is detected, the memory integrity code segment causes the processor(s) to initiate one or more predefined actions.
Type: Application
Filed: October 2, 2019
Publication date: April 30, 2020
Applicant: Sternum Ltd.
Inventors: Natali TSHOUVA, Lian Granot
-
Publication number: 20200133886
Abstract: A method for efficient name coding in a storage system is provided. The method includes identifying common prefixes, common suffixes, and midsections of a plurality of strings in the storage system, and writing the common prefixes, midsections and common suffixes to a string table in the storage system. The method includes encoding each string of the plurality of strings as the positions in the string table of the prefix, midsection and suffix of the string, and writing the encoding of each of the plurality of strings to memory in the storage system.
Type: Application
Filed: December 23, 2019
Publication date: April 30, 2020
Inventors: Robert Lee, Cary A. Sandvig
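The name-coding scheme can be sketched as: intern each prefix, midsection, and suffix once into a shared string table, then encode every name as three table positions. The fixed-length split used below is a deliberately naive heuristic for illustration; a real implementation would discover common affixes across the whole name set.

```python
def build_encoding(names, affix_len=4):
    """Encode each name as (prefix_pos, mid_pos, suffix_pos) into one table.
    Assumes every name is at least 2 * affix_len characters long."""
    table, index, codes = [], {}, []

    def intern(piece):
        # Store each unique piece only once; return its table position.
        if piece not in index:
            index[piece] = len(table)
            table.append(piece)
        return index[piece]

    for name in names:
        prefix = name[:affix_len]
        suffix = name[-affix_len:]
        mid = name[affix_len:-affix_len]
        codes.append((intern(prefix), intern(mid), intern(suffix)))
    return table, codes


def decode(table, code):
    """Reassemble a name from its three table positions."""
    return "".join(table[i] for i in code)
```

Names sharing a prefix or suffix (e.g. a common directory name or file extension) then cost only one table entry for the shared piece plus a small fixed-size code per name.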
-
Publication number: 20200133887
Abstract: An apparatus including non-volatile memory to store a forensic key and data, the data received from a host computing system. A processing device is coupled to the non-volatile memory and is to: allow writing the data, by the host computing system, to a region of the non-volatile memory; in response to a lock signal received from the host computing system, assert a lock on the region of the non-volatile memory, the lock to cause a restriction on access to the region of the non-volatile memory by an external device; and provide unrestricted access, by the external device, to the region of the non-volatile memory in response to verification of the forensic key received from the external device.
Type: Application
Filed: March 19, 2019
Publication date: April 30, 2020
Applicant: Cypress Semiconductor Corporation
Inventors: Avi Avanindra, Stephan Rosner, Cliff Zitlaw
-
Publication number: 20200133888
Abstract: Method and apparatus for handling page protection faults, particularly in combination with the dynamic conversion of binary code executable by one computing platform into binary code executed instead by another computing platform. In one exemplary aspect, a page protection fault handling unit is used to detect memory accesses, to check page protection information relevant to the detected access by examining the contents of a page descriptor store, and to selectively allow the access or pass on page protection fault information in accordance with the page protection information.
Type: Application
Filed: December 16, 2019
Publication date: April 30, 2020
Inventors: Simon Murray, Geraint M. North
-
Publication number: 20200133889
Abstract: Method and apparatus for handling page protection faults, particularly in combination with the dynamic conversion of binary code executable by one computing platform into binary code executed instead by another computing platform. In one exemplary aspect, a page protection fault handling unit is used to detect memory accesses, to check page protection information relevant to the detected access by examining the contents of a page descriptor store, and to selectively allow the access or pass on page protection fault information in accordance with the page protection information.
Type: Application
Filed: December 16, 2019
Publication date: April 30, 2020
Inventors: Simon Murray, Geraint M. North
-
Publication number: 20200133890
Abstract: A control arrangement for a coffee machine is provided and comprises a central unit having a main control unit and a plurality of peripheral units/components. Each peripheral unit/component is connected to the central unit by means of a "smart" connector, which is coded and which can provide information relating to the unit/component connected thereto to the main control unit. In order to allow information to be transferred, the central unit comprises a master communication device, each peripheral unit/component is provided with a slave communication device, and a communication line is provided for connecting the master communication device to the slave communication devices. The transferred information is unambiguously associated with the unit/component and may comprise counters, historical information, performance data and the like.
Type: Application
Filed: July 10, 2018
Publication date: April 30, 2020
Inventors: Markus Zehnder, Benedict Ammann, Nuria Poblet Casanovas
-
Publication number: 20200133891
Abstract: In some embodiments a transceiver is configured to wirelessly transfer data between a host computing device and one or more peripheral devices over a communication path using a communication data construct comprising a packet structure arranged in a repetitive communication structure. The repetitive communication structure can include a transmit time window within which the host transmits data to the one or more connected peripheral devices and a receive time window within which the host receives data from the one or more connected peripheral devices. A duration of the receive time window is set based on a predetermined communication report rate between the host computing device and the one or more connected peripheral devices. A new peripheral device is added as a connected peripheral device when the new peripheral device transmits a request to the host to be added as a connected peripheral device and the receive time window has time available to add the new peripheral device.
Type: Application
Filed: May 22, 2019
Publication date: April 30, 2020
Inventors: Philippe Chazot, Jiri Holzbecher, Frederic Fortin, Fabrice Sauterel
-
Publication number: 20200133892
Abstract: Techniques for emulating a configuration space by a peripheral device may include receiving an access request, determining that the access request is for an emulated configuration space of the peripheral device, and retrieving an emulated configuration from the emulated configuration space. The access request can then be serviced by using the emulated configuration.
Type: Application
Filed: December 26, 2019
Publication date: April 30, 2020
Inventors: Nafea Bshara, Adi Habusha, Guy Nakibly, Georgy Machulsky
-
Publication number: 20200133893
Abstract: In one embodiment, a system includes a bus interface including a first processor, an indirect address storage storing a number of indirect addresses, and a direct address storage storing a number of direct addresses. The system also includes a number of devices connected to the bus interface and configured to analyze data. Each device of the number of devices includes a state machine engine. The bus interface is configured to receive a command from a second processor and to transmit an address for loading into the state machine engine of at least one device of the number of devices. The address includes a first address from the number of indirect addresses or a second address from the number of direct addresses.
Type: Application
Filed: December 24, 2019
Publication date: April 30, 2020
Inventors: Debra Bell, Paul Glendenning, David R. Brown, Harold B Noyes
-
Publication number: 20200133894
Abstract: A tray for an avionics bay, comprising a body and a recording device rigidly connected to each other in order to reduce the space requirement of acquisition systems on board an aircraft and dedicated to the prediction of failures. The recording device comprises a first input/output port to be connected to the avionics bay, a second input/output port to be connected to an item of electrical equipment, a data bus for routing signals between the first and the second input/output port, a collection member configured for acquiring at least some of the signals routed by the data bus between the first and the second input/output port, and a memory configured to store the signals acquired by the collection member.
Type: Application
Filed: October 23, 2019
Publication date: April 30, 2020
Inventors: Xavier GRANIER, Alain LAGARRIGUE, David CUMER
-
Publication number: 20200133895
Abstract: Multiple virtual host ports corresponding to a same physical host port may be determined by or on behalf of a storage system, for example, in response to logging the one or more virtual host ports into the storage system. For one or more virtual host ports, it may be determined whether the virtual host port is connected to a same fabric port as another virtual host port, where a fabric port is a port of a fabric configured to connect to a virtual host port. If two virtual host ports are determined to be connected to a same fabric port, it may be concluded that the two virtual host ports correspond to (e.g., share) a same physical host port. One or more actions may be taken on a storage network based at least in part on a determination that two virtual host ports are sharing a same physical host port.
Type: Application
Filed: October 31, 2018
Publication date: April 30, 2020
Applicant: EMC IP Holding Company LLC
Inventors: Owen Crowley, Erik P. Smith, Scott Rowlands, Arieh Don
-
Publication number: 20200133896
Abstract: An information handling system includes a host configured to write a non-volatile memory express (NVMe) command on a memory submission queue slot. The NVMe command includes a pre-fetch command and a non-completion command. A controller uses the pre-fetch command to monitor read operations, and to place on hold an execution of the monitored read operations and an issuance of an interrupt in response to the non-completion command.
Type: Application
Filed: October 25, 2018
Publication date: April 30, 2020
Inventors: Kevin T. Marks, Chandrasekhar Nelogal
-
Publication number: 20200133897
Abstract: An indication of a capacity of a CMB elasticity buffer and an indication of a throughput of one or more memory components associated with the CMB elasticity buffer can be received. An amount of time for data at the CMB elasticity buffer to be transmitted to the one or more memory components can be determined based on the capacity of the CMB elasticity buffer and the throughput of the one or more memory components. Write data can be transmitted from a host system to the CMB elasticity buffer based on the determined amount of time for data at the CMB elasticity buffer to be transmitted to the one or more memory components.
Type: Application
Filed: December 31, 2018
Publication date: April 30, 2020
Inventors: John Maroney, Paul Suhler, Lyle Adams, David Springberg
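The timing relationship in this abstract reduces to simple arithmetic: the worst-case drain time of the elasticity buffer is its capacity divided by the media throughput, and the host paces writes so the buffer never overflows. The pacing rule below is an illustrative assumption; the patent only says the host uses the determined time.

```python
def drain_time_s(capacity_bytes, throughput_bytes_per_s):
    """Worst-case time for a full elasticity buffer to empty into the media."""
    return capacity_bytes / throughput_bytes_per_s


def host_can_write(nbytes, buffer_fill_bytes, capacity_bytes):
    # Illustrative pacing rule: admit a write only if it fits in the
    # space the buffer currently has free.
    return buffer_fill_bytes + nbytes <= capacity_bytes
```

For example, a 1 MB buffer backed by media sustaining 250 KB/s drains in at most 4 seconds, which bounds how long the host may need to stall.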
-
Publication number: 20200133898
Abstract: The present disclosure describes apparatuses and methods for artificial intelligence-enabled management of storage media. In some aspects, a media access manager of a storage media system receives, from a host system, host input/output commands (I/Os) for access to storage media of the storage media system. The media access manager provides information describing the host I/Os to an artificial intelligence engine and receives, from the artificial intelligence engine, a prediction of host system behavior with respect to subsequent access of the storage media. The media access manager then schedules, based on the prediction of host system behavior, the host I/Os for access to the storage media of the storage system. By so doing, the host I/Os may be scheduled to optimize host system access of the storage media, such as to avoid conflict with internal I/Os of the storage system or preempt various thresholds based on upcoming idle time.
Type: Application
Filed: October 25, 2019
Publication date: April 30, 2020
Applicant: Marvell World Trade Ltd.
Inventors: Christophe Therene, Nedeljko Varnica, Phong Sy Nguyen
-
Publication number: 20200133899
Abstract: A storage circuit includes one or more groups of nonvolatile memory (NVM) devices, a storage controller to control access to the NVM devices, and a buffer coupled between the storage controller and the NVM devices. The buffer is to re-drive signals on a bus between the NVM devices and the storage controller, including synchronizing the signals to a clock signal for the signals. The circuit can include a data buffer, a command buffer, or both.
Type: Application
Filed: October 25, 2019
Publication date: April 30, 2020
Inventors: Emily P. CHUNG, Frank T. HADY, George VERGIS
-
Publication number: 20200133900
Abstract: A computer-implemented method includes setting a respective flag in a first buffer of a hardware accelerator. The first buffer includes the respective flag of the first buffer, and a second buffer of the hardware accelerator includes a respective flag of the second buffer. A hardware state of the hardware accelerator is maintained in the first buffer, based on the respective flag of the first buffer being set. A first request directed to the hardware accelerator is received. It is determined that the first buffer has the respective flag set. The first request is passed to the hardware accelerator, where passing the first request includes passing to the hardware accelerator a pointer to the first buffer, based on the first buffer having the respective flag set.
Type: Application
Filed: July 24, 2019
Publication date: April 30, 2020
Inventors: Michael G. Jutt, Anthony T. Sofia
-
Publication number: 20200133901
Abstract: To perform communication more reliably and efficiently. In a case of transferring a communication initiative in accordance with a request by a secondary master, a master determines whether or not the secondary master that has made the request has a group management capability. When it is determined that the secondary master has no group management capability, the master instructs all communication devices connected to a bus to reset their group addresses; when it is determined that the secondary master has the group management capability, the master transfers the communication initiative with the group addresses left set. The present technology is, for example, applicable to a bus I/F.
Type: Application
Filed: May 25, 2018
Publication date: April 30, 2020
Inventors: Naohiro Koshisaka, Hiroo Takahashi
-
Publication number: 20200133902
Abstract: Systems and methods include one or more die coupled to an interposer. The interposer includes interconnection circuitry configured to electrically connect the one or more die together via the interposer. The interposer also includes translation circuitry configured to translate communications as they pass through the interposer. For instance, the translation circuitry translates communications, in the interposer, from a first protocol of a first die of the one or more die to a second protocol of a second die of the one or more die.
Type: Application
Filed: December 23, 2019
Publication date: April 30, 2020
Inventors: Lai Guan Tang, Ankireddy Nalamalpu, Dheeraj Subbareddy, Chee Hak Teh, MD Altaf Hossain
-
Publication number: 20200133903
Abstract: Systems, apparatus and methods are provided to combine multiple channels in a multi-channel memory controller to save area and reduce power and cost. An apparatus may comprise a first memory controller configured to access a first channel using a first protocol, a second memory controller configured to access a second channel using a second protocol that is different from the first protocol, and a physical interface coupled to the first memory controller and the second memory controller. The physical interface may comprise a set of pins for an address and command bus shared by the first memory controller and the second memory controller, allowing the two controllers to send addresses or commands to their respective channels by time division multiplexing.
Type: Application
Filed: October 24, 2018
Publication date: April 30, 2020
Inventors: Shawn Chen, Wei Jiang, Lin Chen
-
Publication number: 20200133904
Abstract: To perform communication more reliably and efficiently. Communication is performed by a master, a communication device having the communication initiative, and slaves, communication devices that communicate under control of the master. The master assigns a group address to arbitrary slaves among a plurality of slaves joined to a bus, setting those slaves into one group and setting the group as a destination. When it is confirmed that at least one of the slaves to which the group address is assigned has exited from the bus, the group address assigned to the remaining slaves is reset. The present technology is, for example, applicable to a bus I/F.
Type: Application
Filed: June 7, 2018
Publication date: April 30, 2020
Inventors: Hiroo Takahashi, Naohiro Koshisaka
-
Publication number: 20200133905Abstract: A memory request management system may include a memory device and a memory controller. The memory controller may include a read queue, a write queue, an arbitration circuit, a read credit allocation circuit, and a write credit allocation circuit. The read queue and write queue may store corresponding requests from request streams. The arbitration circuit may send requests from the read queue and write queue to the memory device based on locations of addresses indicated by the requests. The read credit allocation circuit may send an indication of an available read credit to a request stream in response to a read request from the request stream being sent from the read queue to the memory device. The write credit allocation circuit may send an indication of an available write credit to a request stream in response to a write request from the request stream being stored at the write queue.Type: ApplicationFiled: October 7, 2019Publication date: April 30, 2020Inventors: Gregory S. Mathews, Lakshmi Narasimha Murthy Nukala, Thejasvi Magudilu Vijayaraj, Sukulpa Biswas
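The credit-return behavior for reads can be sketched as below. This is not the patented circuit; the queue depth, stream names, and the policy of returning exactly one credit per issued request are assumptions used to show the flow-control pattern.

```python
# Hedged sketch of credit-based read flow control: a stream may only
# enqueue a read while it holds a credit, and the credit is indicated
# back to the stream when its request is sent from the read queue to
# the memory device.

from collections import deque

class ReadCreditAllocator:
    def __init__(self, credits_per_stream):
        self.credits = dict(credits_per_stream)  # stream id -> available credits
        self.read_queue = deque()

    def submit(self, stream, address):
        # Refuse the request if the stream has no credits left.
        if self.credits[stream] == 0:
            return False
        self.credits[stream] -= 1
        self.read_queue.append((stream, address))
        return True

    def issue_to_memory(self):
        # Arbitration sends the oldest request; its credit becomes
        # available again for the owning stream.
        stream, address = self.read_queue.popleft()
        self.credits[stream] += 1
        return (stream, address)

alloc = ReadCreditAllocator({"cpu": 1, "gpu": 1})
alloc.submit("cpu", 0x1000)        # consumes cpu's only credit
blocked = alloc.submit("cpu", 0x2000)  # refused: no credits until one returns
alloc.issue_to_memory()            # request issued, credit returned
alloc.submit("cpu", 0x2000)        # now accepted
```

Per the abstract, writes differ in when the credit returns: a write credit is indicated back when the request is merely *stored* in the write queue, not when it reaches memory.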
-
Publication number: 20200133906Abstract: A peripheral module of a programmable controller and a method for operating the peripheral module, wherein, in a calibration mode, a base voltage value is supplied by the peripheral module to a terminal via a switching device, the supply potential is changed by the peripheral module to a modified value at a start time, a response time at which the expected change occurs is acquired, and a valid time interval is ascertained by the peripheral module utilizing the start time and the response time.Type: ApplicationFiled: October 25, 2019Publication date: April 30, 2020Inventor: Sevan Haritounian
-
Publication number: 20200133907Abstract: A computing apparatus including a printed circuit board (PCB) including a first central processing unit (CPU) socket and additional CPU socket(s); a CPU coupled to the first CPU socket; a base interposer coupled to the additional CPU socket(s); and one or more devices connected to the base interposer, wherein the base interposer provides a connection between the CPU and the one or more devices.Type: ApplicationFiled: October 31, 2018Publication date: April 30, 2020Inventors: John R. Stuewe, Walt R. Carver, Stephen P. Rousset, Douglas Simon Haunsperger
-
Publication number: 20200133908Abstract: The present disclosure provides a Type-C interface controlling circuit, a controlling method, and a mobile terminal, wherein the Type-C interface controlling circuit includes: a Type-C interface, a first transmission module, a second transmission module, a switching module, and a detection module. The first end of the detection module is connected to the Type-C interface for detecting a connection state of the Type-C interface, and the second end of the detection module is connected to the switching module, and the detection module controls a connection relationship between the first end of the switching module and the second end of the switching module according to the connection state.Type: ApplicationFiled: December 20, 2017Publication date: April 30, 2020Applicant: Vivo Mobile Communication Co., Ltd.Inventor: Quanxi YIN
-
Publication number: 20200133909Abstract: Examples described herein relate to configuring a target network interface to recognize packets that are to be written directly from the network interface to multiple memory destinations. A packet can include an identifier that a portion of the packet is to be written to multiple memory devices at specific addresses. The packet is validated to determine if the target network interface is permitted to directly copy the portion of the packet to memory of the target. The target network interface can perform a direct copy to multiple memory locations of a portion of the packet.Type: ApplicationFiled: December 24, 2019Publication date: April 30, 2020Inventors: Mark Sean HEFTY, Arlin R. DAVIS
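The scatter-write idea can be illustrated with a toy packet format. The header layout below is invented for the sketch (the abstract does not define one); the point is that one payload is split across multiple memory destinations listed in the packet.

```python
# Illustrative multi-destination delivery: the packet identifies a list of
# (memory_id, address, length) targets, and successive payload slices are
# copied directly into each target region. The dict-based "packet" format
# is an assumption for this sketch.

def deliver(packet, memories):
    """Copy successive payload slices to each listed memory region."""
    offset = 0
    for mem_id, addr, length in packet["destinations"]:
        chunk = packet["payload"][offset:offset + length]
        memories[mem_id][addr:addr + length] = chunk
        offset += length

mem0 = bytearray(8)
mem1 = bytearray(8)
deliver({"destinations": [(0, 0, 4), (1, 4, 4)],
         "payload": b"AAAABBBB"},
        {0: mem0, 1: mem1})
```

In the patented scheme the target network interface would additionally validate that it is permitted to write each destination before performing the direct copy; that check is omitted here for brevity.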
-
Publication number: 20200133910Abstract: Systems, methods, and apparatus for improving bus latency are described. A data communication apparatus has an interface circuit adapted to couple the apparatus to a first serial bus, a clock source configured to provide a clock signal and a trigger handler. The interface circuit may be configured to receive trigger configuration information in a first transaction conducted over a serial bus, and receive a trigger actuation command from a bus master coupled to the serial bus. The trigger handler may be configured to delay a trigger actuation signal for a delay duration defined by the trigger configuration information, and provide the trigger actuation signal after the delay duration has expired. The trigger actuation signal may be generated in response to the trigger actuation command.Type: ApplicationFiled: October 2, 2019Publication date: April 30, 2020Inventors: Reza RODD, Scott DAVENPORT, Umesh SRIKANTIAH, ZhenQi CHEN
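The configure-then-delay-then-fire sequence can be sketched with simulated time. The class and method names are assumptions; real hardware would count clock cycles from the clock source rather than take a `now` argument.

```python
# Hedged sketch of the delayed trigger: configuration received in one
# transaction defines the delay; a later actuation command starts the
# countdown, and the trigger actuation signal asserts only after the
# delay has expired. Time is simulated via an explicit `now` parameter.

class TriggerHandler:
    def __init__(self):
        self.delay = 0
        self.fire_at = None

    def configure(self, delay):
        # From the trigger configuration transaction over the serial bus.
        self.delay = delay

    def actuate(self, now):
        # The actuation command from the bus master starts the delay.
        self.fire_at = now + self.delay

    def output(self, now):
        # The trigger actuation signal, asserted once the delay expires.
        return self.fire_at is not None and now >= self.fire_at

h = TriggerHandler()
h.configure(delay=5)
h.actuate(now=10)
```

Separating configuration from actuation is what improves bus latency: the expensive setup happens once, and the later actuation command is a short message whose effect is deferred locally.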
-
Publication number: 20200133911Abstract: A memory log retrieval and provisioning system includes a server device that is coupled to a support system via a network. The server device includes a memory device having at least one memory log. A memory log retrieval and provisioning subsystem is coupled to the memory device, and determines that a memory log retrieval event has occurred in the server device. In response to determining that the memory log retrieval event has occurred, the memory log retrieval and provisioning subsystem automatically retrieves the at least one memory log from the memory device without receiving user instructions subsequent to detecting the memory log retrieval event. The memory log retrieval and provisioning subsystem then automatically transmits the at least one memory log through the network to the support system without receiving user instructions subsequent to automatically retrieving the at least one memory log.Type: ApplicationFiled: October 30, 2018Publication date: April 30, 2020Inventors: Sanjay Rao, Divya Vijayvargiya
-
Publication number: 20200133912Abstract: Embodiments provide a proxy between device management messaging protocols that are used to manage devices that are I2C bus endpoints coupled to a remote access controller. A map is generated of the detected I2C bus endpoints. Mapped I2C bus endpoints that support PLDM (Platform Level Data Model) messaging are identified. Next, the mapped I2C bus endpoints that do not correspond to an identified PLDM endpoint are presumed to be IPMI (Intelligent Platform Management Interface) endpoints and are mapped accordingly. A virtual PLDM endpoint is created for each of the presumed IPMI I2C bus endpoints. A remote access controller is configured for use of PLDM messaging with the virtual PLDM endpoints such that these PLDM messages are translated by the proxy to equivalent IPMI commands and transmitted to the IPMI endpoints. The proxy similarly converts IPMI messages from the IPMI endpoints to equivalent PLDM messages, which are provided to the remote access controller via the virtual PLDM endpoint.Type: ApplicationFiled: October 24, 2018Publication date: April 30, 2020Applicant: Dell Products, L.P.Inventors: Chitrak Gupta, Rama Rao Bisa, Rajeshkumar Ichchhubhai Patel
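The virtual-endpoint translation pattern can be sketched as below. The message dicts are invented stand-ins, not real PLDM or IPMI encodings; only the round-trip translation structure is being illustrated.

```python
# Illustrative proxy: a virtual PLDM endpoint wraps a presumed IPMI device,
# translating PLDM requests to "equivalent" IPMI commands and converting the
# IPMI responses back to PLDM-style replies. Message formats are invented.

class VirtualPldmEndpoint:
    def __init__(self, ipmi_endpoint):
        self.ipmi_endpoint = ipmi_endpoint  # the presumed IPMI I2C device

    def handle_pldm(self, pldm_message):
        # Translate the PLDM request into an IPMI command...
        ipmi_command = {"netfn": "sensor", "cmd": pldm_message["op"]}
        ipmi_response = self.ipmi_endpoint(ipmi_command)
        # ...and convert the IPMI response back to a PLDM-style reply.
        return {"op": pldm_message["op"], "data": ipmi_response["data"]}

def fake_ipmi_device(command):
    # Stand-in for a real IPMI endpoint on the I2C bus.
    return {"data": f"value-for-{command['cmd']}"}

endpoint = VirtualPldmEndpoint(fake_ipmi_device)
reply = endpoint.handle_pldm({"op": "get_temp"})
```

The payoff of the proxy is that the remote access controller speaks a single protocol (PLDM) to every mapped endpoint, regardless of what the device actually implements.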
-
Publication number: 20200133913Abstract: A computer system includes a processor and a memory. The processor is located on a first circuit board having a first connector. The memory is located on a second circuit board having a second connector. The first circuit board and the second circuit board are physically separated from each other but connect to each other through the connectors. The processor and the memory communicate with each other based on a differential signaling scheme.Type: ApplicationFiled: October 26, 2018Publication date: April 30, 2020Inventors: VIVEK JOSHI, CHIH-CHIEH CHANG, CHANG-HSIN GENG
-
Publication number: 20200133914Abstract: A method of operating a system comprising multiple processor tiles divided into a plurality of domains wherein within each domain the tiles are connected to one another via a respective instance of a time-deterministic interconnect and between domains the tiles are connected to one another via a non-time-deterministic interconnect. The method comprises: performing a compute stage, then performing a respective internal barrier synchronization within each domain, then performing an internal exchange phase within each domain, then performing an external barrier synchronization to synchronize between different domains, then performing an external exchange phase between the domains.Type: ApplicationFiled: December 23, 2019Publication date: April 30, 2020Inventors: Daniel John Pelham Wilkinson, Stephen Felix, Richard Luke Southwell Osborne, Simon Christian Knowles, Alan Graham Alexander, Ian James Quinn
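The two-level bulk-synchronous schedule (compute, internal barrier, internal exchange, external barrier, external exchange) can be demonstrated with threads standing in for tiles. This is a behavioral sketch only; the patented system uses time-deterministic and non-time-deterministic interconnects, which a thread barrier does not model.

```python
# Sketch: two domains of two "tiles" each. Tiles in a domain synchronize on
# an internal barrier before exchanging within the domain; all tiles then
# synchronize on an external barrier before exchanging between domains.

import threading

trace = []
trace_lock = threading.Lock()
internal = {0: threading.Barrier(2), 1: threading.Barrier(2)}
external = threading.Barrier(4)

def tile(domain, tid):
    with trace_lock:
        trace.append(("compute", domain, tid))
    internal[domain].wait()       # internal barrier sync within the domain
    with trace_lock:
        trace.append(("internal-exchange", domain, tid))
    external.wait()               # external barrier sync between domains
    with trace_lock:
        trace.append(("external-exchange", domain, tid))

threads = [threading.Thread(target=tile, args=(d, t))
           for d in (0, 1) for t in (0, 1)]
for th in threads:
    th.start()
for th in threads:
    th.join()
```

The key property the barriers enforce is visible in the trace: no tile begins its external exchange until every tile in every domain has finished computing.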
-
Publication number: 20200133915Abstract: A computer-implemented method, system, and media are provided to convert relational database files hosted on a client server into operating database files. The operating database files are transferred using FTP protocol to a remote archival server. A relational database is created from the transferred operating database files on the remote archival server.Type: ApplicationFiled: December 26, 2019Publication date: April 30, 2020Inventor: Toby A. Schiel
-
Publication number: 20200133916Abstract: Embodiments of the present disclosure provide a method, device and computer readable medium for accessing a file. The method described herein comprises: receiving, in a virtual file system on a client, a request for opening a file in the virtual file system from an application, the request comprising a path for the file; determining whether the file has been opened successfully at the client; in response to determining that the file fails to be opened at the client, searching a first cache of the virtual file system for the path, the first cache being configured to store paths for files that fail to be opened at the client; and in response to success in finding the path in the first cache, returning an indication of failure in opening the file to the application.Type: ApplicationFiled: February 27, 2019Publication date: April 30, 2020Inventors: Lanjun Liao, Qingxiao Zheng, Yi Wang
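The negative-lookup cache described here can be sketched in a few lines. The class and attribute names are assumptions; the essential behavior from the abstract is that a path which previously failed to open is answered from the cache without consulting the underlying filesystem again.

```python
# Minimal sketch of the "first cache" of failed paths: an open() for a path
# already known to fail returns the failure indication immediately, skipping
# the backend. Names (VirtualFileSystem, failed_paths) are illustrative.

class VirtualFileSystem:
    def __init__(self, backend_open):
        self.backend_open = backend_open  # real open; may raise FileNotFoundError
        self.failed_paths = set()         # cache of paths that failed to open
        self.backend_calls = 0

    def open(self, path):
        if path in self.failed_paths:
            return None                   # failure indication, no backend hit
        self.backend_calls += 1
        try:
            return self.backend_open(path)
        except FileNotFoundError:
            self.failed_paths.add(path)
            return None

def backend(path):
    # Stand-in for the client's underlying filesystem.
    if path != "/data/ok.txt":
        raise FileNotFoundError(path)
    return "handle:/data/ok.txt"

vfs = VirtualFileSystem(backend)
vfs.open("/data/missing.txt")   # miss: backend consulted, path cached
vfs.open("/data/missing.txt")   # served from the negative cache
```

A real implementation would also invalidate cached entries when files are created, so that a previously missing path can succeed later; that is omitted here.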
-
Publication number: 20200133917Abstract: Embodiments of the present disclosure provide a method, a device and a computer program product for managing data replication. According to example implementations of the present disclosure, a replication policy model associated with data replication of a source device can be obtained, which is determined based on historical status information of the source device and a historical replication policy corresponding to the historical status information; current status information of the source device is determined, wherein the current status information indicates status information associated with pending data replication of the source device; and a target replication policy is determined based on the replication policy model and the current status information, which indicates a replication policy to be applied for performing the pending data replication.Type: ApplicationFiled: February 27, 2019Publication date: April 30, 2020Inventors: Eason Jiang, Felix Peng, Eddie Dai, Fubin Zhang, Beryl Wang
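The model-based policy selection can be sketched with a deliberately trivial "model". The abstract does not say what form the replication policy model takes, so the nearest-neighbor lookup over pending data size below is purely an assumption used to show the shape of the decision: historical status/policy pairs in, current status and a target policy out.

```python
# Hedged sketch: build a "replication policy model" from historical
# (status, policy) observations, then pick a target policy for the current
# status. The nearest-neighbor rule and the use of pending bytes as the
# status metric are both assumptions for this illustration.

def build_policy_model(history):
    """history: list of (pending_bytes, policy) pairs observed in the past."""
    return sorted(history)

def target_policy(model, pending_bytes):
    # Choose the policy whose historical status is closest to the current one.
    return min(model, key=lambda entry: abs(entry[0] - pending_bytes))[1]

model = build_policy_model([(10_000, "sync"),
                            (10_000_000, "async-batched")])
policy = target_policy(model, 8_000)   # small pending replication
```

The design point the abstract makes is the split of responsibilities: the model is fit offline from history, while the per-replication decision only needs the current status and a cheap lookup.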