Patent Applications Published on February 21, 2019
-
Publication number: 20190057017
Abstract: A trace of the execution of a program is recorded. This program trace includes various information regarding each instruction of the program that is executed, such as the values or locations of any inputs to the instruction, the values or locations of any outputs by the instruction, and so forth. After execution of the program completes, an execution analysis system analyzes the program trace and can perform various different tasks. One such task that the execution analysis system can perform is automatically identifying potential concurrency problems with the program resulting from asynchronous execution of threads of the program. Another such task that the execution analysis system can perform is reordering the information in the program trace so that a linear view of the information can be displayed in accordance with the order of the code in the program rather than the order in which the asynchronous threads were executed.
Type: Application
Filed: August 16, 2017
Publication date: February 21, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventor: Jordi MOLA
-
Publication number: 20190057018
Abstract: Methods, systems, and computer program products are provided for dynamically collecting information corresponding to characteristics of a binary. A user or program inputs a path corresponding to a binary. A testing framework accesses a testing configuration that specifies one or more characteristics of a binary for which data collection is enabled. The testing configuration parses the binary to collect the characteristics. Based on the collected characteristics, the testing configuration identifies additional characteristics of the binary. The testing configuration collects information corresponding to identified additional characteristics of the binary.
Type: Application
Filed: August 18, 2017
Publication date: February 21, 2019
Inventor: Cleber Rodrigues Rosa
-
Publication number: 20190057019
Abstract: A cluster of devices can be identified where results from executing a test by any cluster devices can be considered as being from the same device. Thus, instead of waiting for a single device to produce comparable results, multiple devices from the same cluster can simultaneously perform the test and obtain the needed set of test results more quickly. The technology can identify clusters of devices that are all similar to a primary cluster device. A device pair can be considered similar when (1) the means of the sets of test results from each device are within a first threshold of each other, (2) the measurements of the consistency of each test result set are within a second threshold of each other, and (3) a measurement of the consistency of a combination of the test result sets is between the consistency measurements of the individual test result sets.
Type: Application
Filed: August 21, 2017
Publication date: February 21, 2019
Inventors: Eun Chang Lee, Xun Li, Kumar Rangarajan, Michael McKenzie Magruder, Zheng Mi
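The three similarity conditions in the abstract can be sketched as a small check, using the sample standard deviation as the "measurement of consistency" (the abstract does not name a specific statistic, and the threshold values here are illustrative):

```python
import statistics

def similar(results_a, results_b, mean_tol=0.5, spread_tol=0.5):
    """Check whether two devices' test-result sets can be treated as if
    they came from the same device, per the abstract's three conditions."""
    mean_a, mean_b = statistics.mean(results_a), statistics.mean(results_b)
    sd_a, sd_b = statistics.stdev(results_a), statistics.stdev(results_b)
    sd_both = statistics.stdev(results_a + results_b)
    return (
        abs(mean_a - mean_b) <= mean_tol                    # (1) means close
        and abs(sd_a - sd_b) <= spread_tol                  # (2) spreads close
        and min(sd_a, sd_b) <= sd_both <= max(sd_a, sd_b)   # (3) combined spread in between
    )
```

Condition (3) guards against two sets that individually look tight but are offset from each other: mixing them would then produce a spread larger than either set alone.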
-
Publication number: 20190057020
Abstract: An acceptance control unit accepts, after simulation of a state change of a target system, a specified time which is a time in a simulation period being a target of the simulation. A reproduction unit selects a change time corresponding to the specified time from trace data and obtains a state value corresponding to the selected change time from the trace data. The reproduction unit generates a state image indicating a state corresponding to the obtained state value. A display unit displays the generated state image.
Type: Application
Filed: April 12, 2016
Publication date: February 21, 2019
Applicant: MITSUBISHI ELECTRIC CORPORATION
Inventors: Hirokazu SUZUKI, Takehisa MIZUGUCHI, Minoru NAKAMURA
-
Publication number: 20190057021
Abstract: A system and method for remote testing of enterprise software applications (ESA) allows one or more testers to remotely access an ESA and remotely test the ESA. In at least one embodiment, the ESA resides in a testing platform that includes one or more computers that are provisioned for testing. "Provisioning" a computer system (such as one or more servers) refers to preparing, configuring, and equipping the computer system to provide services to one or more users. In at least one embodiment, the computer system is provisioned to create an ESA operational environment in accordance with a virtual desktop infrastructure (VDI) template interacting with virtualization software.
Type: Application
Filed: October 19, 2018
Publication date: February 21, 2019
Applicant: DevFactory FZ LLC
Inventor: Rahul Subramaniam
-
Publication number: 20190057022
Abstract: Disclosed herein are system, method, and computer program product embodiments for adaptively self-tuning a bucket memory manager. An embodiment operates by receiving requests for memory blocks of varying memory sizes from a client. Determining a workload for the client based on the requests. Analyzing buckets in the bucket memory manager based on the workload. Adjusting parameters associated with the bucket memory manager based on the analyzing to accommodate the requests.
Type: Application
Filed: October 22, 2018
Publication date: February 21, 2019
Inventor: Tony IMBIERSKI
-
Publication number: 20190057023
Abstract: Implementations of the present disclosure include methods, systems, and computer-readable storage mediums for receiving a memory access latency value including a time to perform an operation with respect to the memory bank of the plurality of memory banks, receiving a set of operation percentages including an operation percentage for each of a plurality of operations performed on the memory bank, determining a probability associated with the memory access latency value using a mixture of Weibull distributions, described herein, comparing the probability to a threshold probability to provide a comparison, and selectively executing at least one action with respect to the memory bank based on the comparison.
Type: Application
Filed: August 15, 2017
Publication date: February 21, 2019
Inventor: Ahmad Hassan
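The probability step in this abstract — evaluating a latency value against a mixture of Weibull distributions and comparing to a threshold — can be sketched as follows. The component weights, shapes, and scales below are placeholders, not parameters from the patent:

```python
import math

def weibull_cdf(x, shape, scale):
    """CDF of a single Weibull distribution: 1 - exp(-(x/scale)^shape)."""
    return 1.0 - math.exp(-((x / scale) ** shape))

def mixture_prob(latency, components):
    """Probability that a latency value falls at or below `latency`
    under a weighted mixture of Weibull components.
    components: iterable of (weight, shape, scale); weights sum to 1."""
    return sum(w * weibull_cdf(latency, k, lam) for w, k, lam in components)

def act_on_bank(latency, components, threshold=0.95):
    """Selectively trigger an action when the latency is improbably high."""
    return "flag_bank" if mixture_prob(latency, components) > threshold else "ok"
```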
-
Publication number: 20190057024
Abstract: A system and process for recompacting digital storage space involves continuously maintaining a first log of free storage space available from multiple storage regions of a storage system such as a RAID system, and based on the first log, maintaining a second log file including a bitmap identifying the free storage space available from a given storage chunk corresponding to the storage regions. Based on the bitmaps, distributions corresponding to the storage regions are generated, where the distributions represent the percentage of free space available from each chunk, and a corresponding weight is associated with each storage region. The storage region weights may then be sorted and stored in RAM, for use in quickly identifying a particular storage region that includes the maximum amount of free space available, for recompaction.
Type: Application
Filed: February 2, 2018
Publication date: February 21, 2019
Inventors: Shailendra Tripathi, Sreekanth Garigala, Sandeep Sebe
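The bitmap-to-weight pipeline described above can be sketched in a few lines: convert each region's free-block bitmap into a free-space percentage, use that as the region's weight, and sort descending so the best recompaction candidate is first. The one-bitmap-per-region simplification is an assumption for brevity:

```python
def free_percent(bitmap):
    """bitmap: list of 0/1 flags for a chunk, 1 = free block."""
    return 100.0 * sum(bitmap) / len(bitmap)

def rank_regions(region_bitmaps):
    """Return region ids sorted by free-space weight, best candidate first.
    region_bitmaps: dict of region id -> free-block bitmap."""
    weights = {rid: free_percent(bm) for rid, bm in region_bitmaps.items()}
    return sorted(weights, key=weights.get, reverse=True)
```

The sorted id list plays the role of the weights "sorted and stored in RAM": picking the region with the maximum free space becomes an O(1) lookup of the first element.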
-
Publication number: 20190057025
Abstract: A memory system includes: a memory device including a plurality of memory blocks each having a plurality of pages suitable for storing data; and a controller suitable for: receiving a plurality of commands from a host; controlling the memory device to perform a plurality of command operations in response to the plurality of commands; identifying parameters for the memory blocks affected by the command operations performed to the memory blocks; selecting first memory blocks among the memory blocks according to the parameters; and controlling the memory device to swap data stored in the first memory blocks to second memory blocks among the memory blocks.
Type: Application
Filed: March 7, 2018
Publication date: February 21, 2019
Inventors: Jong-Min LEE, Duk-Rae LEE
-
Publication number: 20190057026
Abstract: A data storage device includes a nonvolatile memory device including a plurality of planes each of which includes a plurality of memory units; and a controller configured to determine a plane distribution of one or more first planes which include one or more first memory units, determine whether the plane distribution satisfies a predetermined condition, select a memory unit in each of one or more second planes, as a second memory unit, depending on the determination result of the satisfaction of the predetermined condition, and perform a read-access in the first memory units and the second memory units simultaneously.
Type: Application
Filed: July 10, 2018
Publication date: February 21, 2019
Inventor: Jeen PARK
-
Publication number: 20190057027
Abstract: In an example embodiment, a method comprises determining that a multipart upload request to upload a data object in separate object parts has been received by an object storage service; generating temporary keys for the separate object parts of the data object; storing the temporary keys in a temporary key data store; generating, based on the temporary keys, a multipart key entry for the data object, the multipart key entry comprising a multipart key that contains an object identifier identifying the data object and an inverse timestamp; and inserting the multipart key entry in a persistent key data store storing an ordered set of key entries in a position determined by the object identifier and the inverse timestamp.
Type: Application
Filed: June 18, 2018
Publication date: February 21, 2019
Inventors: Carl Rene D'Halluin, Bastiaan Stougie, Koen De Keyser, Thomas Demoor
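The key construction here hinges on the inverse timestamp: subtracting the write time from a fixed horizon makes newer entries sort lexicographically before older ones in an ordered key store. A minimal sketch, with a hypothetical horizon constant and key layout (the patent does not specify either):

```python
import time

MAX_TS = 10**13  # hypothetical fixed horizon, in milliseconds

def multipart_key(object_id, ts_ms=None):
    """Build an ordered-store key from an object id and an inverse
    timestamp; zero-padding keeps lexicographic order numeric."""
    ts = int(time.time() * 1000) if ts_ms is None else ts_ms
    inverse = MAX_TS - ts  # newer write -> smaller inverse -> sorts first
    return f"{object_id}#{inverse:013d}"
```

With keys shaped this way, a range scan starting at the object id naturally yields the most recent entry for that object first.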
-
Publication number: 20190057028
Abstract: A novel distributed data storage system is disclosed. In an example method, a first plurality of key entries is stored in a first key data store at a first location and a second plurality of key entries is stored in a second key data store at a second location. A key entry comprises a corresponding key having an object identifier, an inverse timestamp, and a source identifier. The method further replicates a set of the first key entries to the second key data store. The method further inserts each first key entry from the set of the first key entries into the second key data store based on the object identifier, the inverse timestamp, and the source identifier of the first key included in that first key entry, the first key entries and the second key entries being interwoven to form a plurality of interwoven ordered key entries.
Type: Application
Filed: June 23, 2018
Publication date: February 21, 2019
Inventors: Carl Rene D'Halluin, Bastiaan Stougie, Koen De Keyser, Thomas Demoor
-
Publication number: 20190057029
Abstract: Methods of operating a memory system comprising a plurality of memory devices include loading respective sets of termination information to a subset of memory devices of the plurality of memory devices, and, for each memory device of the subset of memory devices, storing its respective set of termination information to an array of non-volatile memory cells of that memory device. For each memory device of the subset of memory devices, its respective set of termination information comprises address information of the memory system and one or more termination values associated with that address information.
Type: Application
Filed: October 22, 2018
Publication date: February 21, 2019
Applicant: MICRON TECHNOLOGY, INC.
Inventor: Terry Grunzke
-
Publication number: 20190057030
Abstract: Embodiments of the present disclosure relate to a method and a device for cache management. The method includes: in response to receiving a write request for a cache logic unit, determining whether a first cache space of a plurality of cache spaces associated with the cache logic unit is locked; in response to the first cache space being locked, obtaining a second cache space from the plurality of cache spaces, the second cache space being different from the first cache space and being in an unlocked state; and performing, in the second cache space, the write request for the cache logic unit.Type: Application
Filed: June 28, 2018
Publication date: February 21, 2019
Inventors: Lifeng Yang, Ruiyong Jia, Liam Xiongcheng Li, Hongpo Gao, Xinlei Xu
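The lock-fallback behavior described above — redirect a write to another unlocked cache space rather than blocking on the locked one — can be sketched as follows. The dict-based cache-space representation is an illustrative simplification:

```python
class CacheLogicUnit:
    """Toy model of a cache logic unit backed by several cache spaces.
    spaces: dict of name -> {"locked": bool, "data": payload-or-None}."""

    def __init__(self, spaces):
        self.spaces = spaces

    def write(self, payload):
        """Perform the write in the first unlocked space, skipping
        locked ones; the abstract's 'second cache space' fallback."""
        for name, space in self.spaces.items():
            if not space["locked"]:
                space["data"] = payload
                return name
        raise RuntimeError("all cache spaces are locked")
```

The benefit is that a writer never waits on a lock held for an unrelated operation; it simply lands in a different, unlocked space.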
-
Publication number: 20190057031
Abstract: A method for updating a memory, which comprises several blocks, wherein each of the several blocks comprises multi-level cells and is operable in an MLC-mode or in a SLC-mode, wherein each multi-level cell may store more than one bit, wherein the method includes for each block to be updated: (a) copying the content of the block to a buffer block; (b) erasing the block; (c) writing the content of the block from the buffer block and an updated content for this block to this block, utilizing the capability of the block to be operated in the MLC-mode; (d) copying the updated content of the block to the buffer block; (e) erasing the block; and (f) writing the updated content from the buffer block to the block, utilizing the capability of the block to be operated in the SLC-mode. Also provided is a corresponding device.
Type: Application
Filed: August 8, 2018
Publication date: February 21, 2019
Inventors: Thomas Kern, Robert Allinger, Robert Strenz
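Steps (a) through (f) above can be traced with a toy simulation; blocks are modeled as dicts and "content" as lists, which is an illustrative abstraction of the flash operations, not the hardware behavior:

```python
def update_block(block, buffer, new_content):
    """Simulate the six-step MLC/SLC update sequence from the abstract.
    block/buffer: dicts with 'data' (list or None) and 'mode' keys."""
    buffer["data"] = block["data"]             # (a) copy block -> buffer
    block["data"] = None                       # (b) erase block
    block["mode"] = "MLC"                      # (c) write old + new content, MLC mode
    block["data"] = buffer["data"] + new_content
    buffer["data"] = block["data"]             # (d) copy updated content -> buffer
    block["data"] = None                       # (e) erase block
    block["mode"] = "SLC"                      # (f) write back, SLC mode
    block["data"] = buffer["data"]
    return block
```

The MLC round (c) exists only because the merged old-plus-new content is temporarily too large for SLC capacity; the final state (f) is back in the more robust SLC mode.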
-
Publication number: 20190057032
Abstract: A cache coherence management method, a node controller, and a multiprocessor system that includes a first table, a second table, a node controller, and at least two nodes, where the node controller determines, in the first table according to address information of data, a first entry, where the first entry includes a first field and a second field. The first field records an occupation status of the data, the second field indicates a node that occupies the data exclusively when the first field includes an exclusive state, and the node controller determines a second entry in the second table according to the address information of the data and the second field when the first field includes a shared state, where the second entry includes a third field, and the third field indicates nodes that share the data.
Type: Application
Filed: October 19, 2018
Publication date: February 21, 2019
Inventors: Tao Li, Yongbo Cheng, Chenghong He
-
Publication number: 20190057033
Abstract: Provided are techniques for fast cache demotions in storage controllers with metadata. A track in a demotion structure is selected. In response to determining that the track in the demotion structure does not have invalidate metadata set, the track is demoted from cache. In response to determining that the track has invalidate metadata set, the track is moved from the demotion structure to an invalidate metadata structure. One or more tasks are created to process the invalidate metadata structure, wherein each of the one or more tasks selects a different track in the invalidate metadata structure, invalidates metadata for that track, and demotes that track.
Type: Application
Filed: August 16, 2017
Publication date: February 21, 2019
Inventors: Kyler A. Anderson, Kevin J. Ash, Lokesh M. Gupta
-
Publication number: 20190057034
Abstract: An apparatus and method for page table management. For example, one embodiment of an apparatus comprises: a memory management circuit to perform address translations using a page directory, a base page directory address identifying a location of the page directory in a system memory; a cache to reserve a first cache line containing the base page directory address stored in a modified state; cache snoop circuitry to detect a read to the base page directory address by a processor or graphics processing unit (GPU); and locking circuitry to assert a lock signal to change the state of the first cache line to a locked state, the memory management circuit to refrain from performing a page table walk until the lock signal is de-asserted.
Type: Application
Filed: August 17, 2017
Publication date: February 21, 2019
Inventor: YEN HSIANG CHEW
-
Publication number: 20190057035
Abstract: Embodiments of the present disclosure provide a method of storage management, a storage system and a computer program product. The method comprises determining whether a number of I/O requests for a first page in a disk of a storage system exceeds a first threshold. The method further comprises: in response to determining that the number exceeds the first threshold, caching data in the first page to a first cache of the storage system; and storing metadata associated with the first page in a Non-Volatile Dual-In-Line Memory Module (NVDIMM) of the storage system.
Type: Application
Filed: June 28, 2018
Publication date: February 21, 2019
Inventors: Jian Gao, Lifeng Yang, Xinlei Xu, Liam Xiongcheng Li
-
Publication number: 20190057036
Abstract: The present disclosure is directed to systems and methods of implementing a neural network using in-memory mathematical operations performed by pipelined SRAM architecture (PISA) circuitry disposed in on-chip processor memory circuitry. A high-level compiler may be provided to compile data representative of a multi-layer neural network model and one or more neural network data inputs from a first high-level programming language to an intermediate domain-specific language (DSL). A low-level compiler may be provided to compile the representative data from the intermediate DSL to multiple instruction sets in accordance with an instruction set architecture (ISA), such that each of the multiple instruction sets corresponds to a single respective layer of the multi-layer neural network model. Each of the multiple instruction sets may be assigned to a respective SRAM array of the PISA circuitry for in-memory execution.
Type: Application
Filed: October 15, 2018
Publication date: February 21, 2019
Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor Lee, Huseyin Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, IAN YOUNG, Abhishek Sharma
-
Publication number: 20190057037
Abstract: A list of a first type of tracks in a cache is generated. A list of a second type of tracks in the cache is generated, wherein I/O operations are completed relatively faster to the first type of tracks than to the second type of tracks. A determination is made as to whether to demote a track from the list of the first type of tracks or from the list of the second type of tracks.
Type: Application
Filed: August 18, 2017
Publication date: February 21, 2019
Inventors: Kyler A. Anderson, Kevin J. Ash, Lokesh M. Gupta
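The two-list arrangement above can be sketched with a pair of LRU queues. The abstract only says "a determination is made" about which list to demote from; the size-share policy below is one plausible choice, named here as an assumption rather than the patent's actual rule:

```python
from collections import deque

def pick_demotion(fast_lru, slow_lru, slow_share=0.5):
    """Demote from the slow-track list when it holds at least
    `slow_share` of all cached tracks, otherwise from the fast list.
    fast_lru/slow_lru: deques ordered least-recently-used first."""
    total = len(fast_lru) + len(slow_lru)
    if total == 0:
        return None
    if slow_lru and len(slow_lru) / total >= slow_share:
        return slow_lru.popleft()  # evict least-recently-used slow track
    if fast_lru:
        return fast_lru.popleft()
    return slow_lru.popleft()
```

Keeping the two track types on separate lists lets the policy bias eviction toward whichever class is cheaper to re-fetch, instead of treating all tracks uniformly.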
-
Publication number: 20190057038
Abstract: The present disclosure includes apparatuses and methods for logical to physical mapping. A number of embodiments include a logical to physical (L2P) update table, a L2P table cache, and a controller. The controller may be configured to cause a list of updates to be applied to an L2P table to be stored in the L2P update table.
Type: Application
Filed: August 21, 2017
Publication date: February 21, 2019
Inventor: Jonathan M. Haswell
-
Publication number: 20190057039
Abstract: A method for storing content implemented by a first content reader. The first content reader includes a processor, a first memory for storing content, a virtualisation layer and a material abstraction layer. The method includes, during storage of first content in a given format, generating first standalone content, following which the first standalone content is stored in the first memory. Generating the first standalone content includes creating a first container in which are stored at least the first content to be stored in the given format, and a first access processing step adapted to the given format and associated with the first content to be stored, the data stored in the first container making up the first standalone content. Thus, access to the stored content is guaranteed while the content reader is capable of implementing the access processing step stored with the content in the standalone content.
Type: Application
Filed: February 3, 2017
Publication date: February 21, 2019
Inventors: Francois-Gael Ottogalli, Philippe Raipin Parvedy
-
Publication number: 20190057040
Abstract: The present application provides methods and systems for memory management of a kernel space and a user space. An exemplary system for memory management of the kernel space and the user space may include a first storing unit configured to store a first root page table index corresponding to the kernel space. The system may also include a second storing unit configured to store a second root page table index corresponding to the user space. The system may further include a control unit communicatively coupled to the first and second registers and configured to: translate a first virtual address to a first physical address in accordance with the first root page table index for an operating system kernel, and translate a second virtual address to a second physical address in accordance with the second root page table index for a user process.
Type: Application
Filed: August 21, 2017
Publication date: February 21, 2019
Inventors: Xiaowei JIANG, Shu LI
-
Publication number: 20190057041
Abstract: An address mapping method of a storage device which includes a plurality of sub-storage devices each including an over-provision area includes detecting mapping information of a received logical address from a mapping table, selecting a hash function corresponding to the received logical address depending on the mapping information, selecting any one, which is to be mapped onto the received logical address, of the plurality of sub-storage devices by using the selected hash function, and mapping the received logical address onto the over-provision area of the selected sub-storage device. The selected hash function is selected from a default hash function and a plurality of hash functions to provide a rule for selecting the any one of the plurality of sub-storage devices.
Type: Application
Filed: March 23, 2018
Publication date: February 21, 2019
Inventors: Keonsoo HA, MINSEOK KO, HYUNJOO MAENG, JIHYUNG PARK
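The selection flow above — look up the logical address in a mapping table, pick either the default hash function or one of the alternatives, then hash to choose a sub-storage device — can be sketched as follows. The modulo default and the table layout are illustrative assumptions:

```python
def default_hash(lba, n_subs):
    """Default rule: spread logical addresses round-robin by modulo."""
    return lba % n_subs

def select_sub_storage(lba, mapping_table, hash_funcs, n_subs):
    """Pick the sub-storage device for a logical address.
    mapping_table: lba -> index into hash_funcs; absent = default rule."""
    rule = mapping_table.get(lba)
    fn = default_hash if rule is None else hash_funcs[rule]
    return fn(lba, n_subs)
```

Keeping the alternative hash functions behind per-address mapping entries means only remapped addresses pay the table lookup's redirection; everything else follows the cheap default rule.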
-
Publication number: 20190057042
Abstract: Provided are techniques for destaging pinned retryable data in cache. A ranks scan structure is created with an indicator for each rank of multiple ranks that indicates whether pinned retryable data in a cache for that rank is destageable. A cache directory is partitioned into chunks, wherein each of the chunks includes one or more tracks from the cache. A number of tasks are determined for the scan of the cache. The number of tasks are executed to scan the cache to destage pinned retryable data that is indicated as ready to be destaged by the ranks scan structure, wherein each of the tasks selects an unprocessed chunk of the cache directory for processing until the chunks of the cache directory have been processed.
Type: Application
Filed: August 18, 2017
Publication date: February 21, 2019
Inventors: Kyler A. Anderson, Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta
-
Publication number: 20190057043
Abstract: A storage system (system) includes two storage devices (first device and second device). The first device stores encrypted user data prior to being enrolled with an external key server. The system generates a device access key (DAK) and a device encryption key (DEK) used to encrypt such user data and encrypts the DEK with the DAK to generate an encrypted DEK (DEK′). The system stores DEK′ in the second device and stores DAK in the first device. The system enrolls the first device with the key server and receives a secure encryption key (SEK). The system obtains DEK′ and DAK, which are subsequently deleted from the first and second storage device, respectively. A new DAK′ is generated utilizing SEK and a first device identifier. The DEK is encrypted utilizing DAK′ to form DEK″. The system indicates DAK′ is an externally derived key and saves DEK″ to the second device.
Type: Application
Filed: August 17, 2017
Publication date: February 21, 2019
Inventors: Zah Barzik, Ronen Gazit, Ofer Leneman, Amit Margalit
-
Publication number: 20190057044
Abstract: A microcontroller configured to provide secure integrity checking of code or data stored in the microcontroller is provided. The microcontroller may include a processor, memory devices defining a microcontroller memory space, a security attribution unit defining secure memory region(s) and non-secure memory region(s) in the memory space, integrity check tables indicating storage locations of various code within the microcontroller memory space, and an integrity checking unit. The integrity checking unit may be configured to receive an integrity check request for checking the integrity of a first piece of code stored in the microcontroller memory space, access a first integrity check table that indicates a storage location of the first piece of code, determine whether the first integrity check table and first piece of code are stored in the same memory region; and determine whether to perform the requested integrity check based at least on this determination.
Type: Application
Filed: August 1, 2018
Publication date: February 21, 2019
Applicant: Microchip Technology Incorporated
Inventors: Laurent Le Goffic, Sylvain Garnier
-
Publication number: 20190057045
Abstract: A computer system of a service provider includes a processing unit executing a thread issued by a user and a random access memory (RAM) cache disposed external to the processing unit and operatively coupled to the processing unit to store data accessed or to be accessed by the processing unit. The processing unit includes control circuitry configured to, in response to receiving an access request while the thread is being executed, determine whether the thread is allowed to access the RAM cache according to a service level agreement (SLA) level established between the service provider and the user, and when the thread is RAM cacheable, access the RAM cache.
Type: Application
Filed: August 16, 2017
Publication date: February 21, 2019
Inventors: Xiaowei JIANG, Shu LI
-
Publication number: 20190057046
Abstract: An electronic device is described that includes: a host processor comprising at least one input port configured to receive a plurality of data signals on a plurality of virtual channels; and a memory operably coupled to the host processor and configured to receive and store data. The host processor is configured to enable and disable individual virtual channels from the plurality of virtual channels and is configured to only store data in memory associated with enabled virtual channels, and discard data from disabled channels.
Type: Application
Filed: May 17, 2018
Publication date: February 21, 2019
Inventors: Stephan Matthias HERRMANN, Gaurav GUPTA, Naveen Kumar JAIN, Shreya SINGH
-
Publication number: 20190057047
Abstract: A data storage device coupled to a plurality of client devices, each being given a priority and an access quota, includes a memory device and an arbiter. The arbiter is configured to receive one or more access requests requesting to access the memory device from one or more client devices within a predetermined period of time, and arbitrate which client device gets the right to access the memory device based on the priorities and the access quotas of the corresponding client devices when there is more than one client device transmitting the access request at the same time.
Type: Application
Filed: June 12, 2018
Publication date: February 21, 2019
Inventors: Steve Hengchen HSU, Po-Kai FAN, Yu-Yin CHEN
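One way to combine priorities with access quotas, as the arbiter above does, is to let the highest-priority requester win only while its quota for the current period is unexhausted. This is a sketch of one such policy, not necessarily the patent's exact arbitration rule:

```python
def arbitrate(requests, priorities, quotas, used):
    """Grant one access: highest-priority requester with quota remaining.
    requests: client ids competing this cycle; priorities: id -> int
    (higher wins); quotas/used: id -> grant limit / grants so far."""
    eligible = [r for r in requests if used.get(r, 0) < quotas[r]]
    if not eligible:
        return None  # every requester has exhausted its quota
    winner = max(eligible, key=lambda r: priorities[r])
    used[winner] = used.get(winner, 0) + 1
    return winner
```

The quota check keeps a high-priority client from starving the others: once its grants for the period are spent, lower-priority requesters get through.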
-
Publication number: 20190057048
Abstract: A wireless communication method and device are provided. The wireless communication system includes a communication interface, a transmitter and a receiver. The communication interface includes a plurality of lanes. The transmitter is coupled to the communication interface. The transmitter segments a plurality of input data into a plurality of packets of the same length, and transmits the packets with packet-based transmission through the plurality of lanes. The receiver is coupled to the communication interface and receives the packets from the plurality of lanes.
Type: Application
Filed: August 16, 2017
Publication date: February 21, 2019
Inventor: Chien-Hsiung CHANG
-
Publication number: 20190057049
Abstract: A memory system includes a nonvolatile memory device including a plurality of planes; and a controller suitable for determining whether a first read operation for the nonvolatile memory device is a random read operation, and accessing at least one first target plane of the first read operation, according to an access merge process, depending on a determination result, wherein the controller simultaneously accesses the first target plane and at least one second target plane included in the nonvolatile memory device, according to the access merge process.
Type: Application
Filed: December 18, 2017
Publication date: February 21, 2019
Inventors: Duck Hoi KOO, Soong Sun SHIN, Yong Tae KIM, Cheon Ok JEONG
-
Publication number: 20190057050
Abstract: Techniques and mechanisms for performing in-memory computations with circuitry having a pipeline architecture. In an embodiment, various stages of a pipeline each include a respective input interface and a respective output interface, distinct from said input interface, to couple to different respective circuitry. These stages each further include a respective array of memory cells and circuitry to perform operations based on data stored by said array. A result of one such in-memory computation may be communicated from one pipeline stage to a respective next pipeline stage for use in further in-memory computations. Control circuitry, interconnect circuitry, configuration circuitry or other logic of the pipeline precludes operation of the pipeline as a monolithic, general-purpose memory device. In other embodiments, stages of the pipeline each provide a different respective layer of a neural network.
Type: Application
Filed: October 15, 2018
Publication date: February 21, 2019
Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor W. Lee, Abhishek Sharma, Huseyin E. Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young
-
Publication number: 20190057051
Abstract: A video device is described that includes: a host processor comprising at least one input video port configured to receive a plurality of video data signals that comprise video data and embedded data from a plurality of virtual channels in a received frame; and a memory operably coupled to the host processor and configured to receive and store video data. The host processor is configured to segregate the video data from the embedded data in the received frame and process the embedded data individually.
Type: Application
Filed: May 17, 2018
Publication date: February 21, 2019
Inventors: Stephan Matthias HERRMANN, Naveen Kumar JAIN, Shivali JAIN, Shreya SINGH
-
Publication number: 20190057052
Abstract: A semiconductor device according to the present invention includes a plurality of masters (100), a memory controller (400a), a bus that connects the plurality of masters (100) and the memory controller (400a), a QoS information register (610) that stores QoS information of the plurality of masters (100), a right grant number controller (602) that calculates the number of grantable access rights based on space information of a buffer (401) of the memory controller (400a), a right grant selection controller (603a) that selects the master (100) which will be granted the access right based on the QoS information of the QoS information register (610) and the number of grantable rights from the right grant number controller (602), and a request issuing controller (201a) that does not pass a request of the master (100) which has not been granted the access right from the right grant selection controller (603a).
Type: Application
Filed: October 19, 2018
Publication date: February 21, 2019
Inventors: Sho YAMANAKA, Toshiyuki Hiraki, Yoshihiko Hotta, Takahiro Irita
-
Publication number: 20190057053
Abstract: A direct memory access (DMA) buffer section configured to store data in a plurality of storage regions in units of DMA transfers, a buffer control section configured to output a first writing permission signal for permitting the DMA transfer on the basis of presence or absence of a free storage region, a smoothing buffer control section configured to output a second writing permission signal for permitting the DMA transfer within a predetermined period, a buffer writing control section configured to execute the DMA transfer according to the first writing permission signal and the DMA transfer according to the second writing permission signal and store the data to the free storage region, and a buffer reading control section configured to sequentially read the data for each storage region, wherein a predetermined amount of data sequentially acquired by a plurality of DMA transfers is output as a transfer unit.
Type: Application
Filed: October 23, 2018
Publication date: February 21, 2019
Applicant: OLYMPUS CORPORATION
Inventors: Ryusuke Tsuchida, Akira Ueno
-
Publication number: 20190057054Abstract: A method, computer program product, and system includes a processing circuit(s) allocating a page of system memory address space to a device. The allocating includes the processing circuit(s) obtaining base address registers of the device in a bus and determining a portion of the page of the system memory address space to allocate to the base address registers. The processing circuit(s) sorts the base address registers, in a descending order, according to their alignments and adds sizes of the sorted base address registers to determine the portion of the page. The processing circuit(s) determines a remainder of the page: a difference between a size of the page and the portion of the page. The processing circuit(s) requests a virtual resource of a size equal to the remainder and allocates the page to the sorted base address registers and to the virtual resource.Type: ApplicationFiled: August 21, 2017Publication date: February 21, 2019Inventors: Bo Qun BQ FENG, Zhong LI, Xian Dong MENG, Yong Ji JX XIE
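The allocation steps in this abstract (sort BARs by alignment descending, sum their sizes, pad the page remainder with a virtual resource) can be sketched directly. The BAR names, sizes, and 64 KiB page size below are assumptions for the example; since PCI BARs are naturally aligned to their size, sorting by size descending also sorts by alignment descending:

```python
# Hypothetical sketch of the page-allocation scheme described above.
# BAR names, sizes, and the page size are illustrative assumptions.

PAGE_SIZE = 64 * 1024  # assumed system page size

def allocate_page(bars):
    """bars: list of (name, size) pairs, each size a power of two.

    Returns (layout, remainder): layout is a list of (name, offset, size)
    packed from the start of the page, with a trailing virtual resource
    occupying whatever the BARs did not use.
    """
    # sort in descending order of alignment (== size for natural alignment)
    ordered = sorted(bars, key=lambda b: b[1], reverse=True)
    layout, offset = [], 0
    for name, size in ordered:
        layout.append((name, offset, size))
        offset += size          # add sizes to determine the used portion
    remainder = PAGE_SIZE - offset
    if remainder > 0:
        # request a virtual resource sized to fill the rest of the page
        layout.append(("virtual", offset, remainder))
    return layout, remainder

layout, remainder = allocate_page([("bar0", 4096), ("bar2", 16384), ("bar1", 8192)])
```

Packing largest-alignment-first guarantees each BAR's offset is a multiple of its alignment without inserting padding between BARs.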
-
Publication number: 20190057055Abstract: According to the present invention, congestion in a network band is suppressed. This information processing apparatus is provided with: a command reception unit that receives commands from at least one application given to at least two device groups; a device specification unit that specifies devices belonging to the at least two device groups when the commands from the at least one application given to the at least two device groups are of a similar type; and a command transmission control unit that controls transmission so as to prevent overlapping transmission of the commands to the specified devices.Type: ApplicationFiled: February 20, 2017Publication date: February 21, 2019Applicant: NEC CorporationInventors: Norio UCHIDA, Toru YAMADA
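The overlap suppression here amounts to deduplicating (command, device) pairs when device groups intersect. A toy Python sketch, with group names and membership invented for illustration:

```python
# Hypothetical sketch of the overlap-suppressing transmission control.
# Group names and device membership are invented for this example.

GROUP_MEMBERS = {
    "cameras-east": ["cam1", "cam2"],
    "cameras-all": ["cam1", "cam2", "cam3"],
}

def plan_transmissions(commands):
    """commands: list of (command_type, group) pairs from applications.

    Resolves each group to its member devices (the device specification
    unit's role) and returns the set of (command_type, device) sends,
    so each device receives a given command at most once even when the
    targeted groups overlap.
    """
    sends = set()
    for cmd, group in commands:
        for device in GROUP_MEMBERS[group]:
            sends.add((cmd, device))  # duplicate sends collapse here
    return sends

# "reboot" targets two overlapping groups, but cam1/cam2 are sent to once
sends = plan_transmissions([("reboot", "cameras-east"), ("reboot", "cameras-all")])
```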
-
Publication number: 20190057056Abstract: A network system is directed to locating and verifying data cable routing. The network system includes a data storage server with a switch device, and processing nodes, where each processing node includes a baseboard management controller (BMC) and a host bus adapter (HBA). The network system also includes a data cable electrically connected to the switch device of the data storage server and the HBA of the processing node. A cable identifier is stored in the BMC of the processing node and the data storage server. The data storage server and each of the processing nodes are managed by a data resource manager configured to read the cable identifier stored in the BMC of the processing node and the data storage server.Type: ApplicationFiled: August 18, 2017Publication date: February 21, 2019Inventor: Lien-Hsun CHEN
-
Publication number: 20190057057Abstract: A system and method for enabling hot-plugging of devices in virtualized systems. A hypervisor obtains respective values representing respective quantities of a resource for a plurality of virtual root buses of a virtual machine (VM). The hypervisor determines a first set of address ranges of the resource that are allocated for one or more virtual devices attached to at least one of the plurality of virtual root buses. The hypervisor determines, in view of the first set of allocated address ranges, a second set of address ranges of the resource available for attaching one or more additional virtual devices to at least one of the plurality of virtual root buses. The hypervisor assigns to the plurality of virtual root buses non-overlapping respective address ranges of the resource within the second set.Type: ApplicationFiled: October 23, 2018Publication date: February 21, 2019Inventors: Marcel Apfelbaum, Michael Tsirkin
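The hypervisor's job in this abstract is interval arithmetic: subtract the already-allocated ranges from the resource to get the available address space, then hand each virtual root bus a non-overlapping slice of it. A sketch under assumed sizes (resource extent, bus demands, and existing allocations are all invented):

```python
# Illustrative sketch of assigning non-overlapping resource ranges to
# virtual root buses; all sizes and ranges here are assumptions.

def assign_bus_ranges(requested_sizes, allocated_ranges, resource_size):
    """requested_sizes: dict bus -> quantity of the resource it needs.
    allocated_ranges: list of (start, end) already used by attached
    virtual devices (the "first set").
    Returns dict bus -> (start, end) carved out of the free gaps
    (the "second set"), with no two buses overlapping.
    """
    # derive the second set: gaps between the allocated ranges
    free, cursor = [], 0
    for start, end in sorted(allocated_ranges):
        if start > cursor:
            free.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < resource_size:
        free.append((cursor, resource_size))

    # first-fit each bus into a free gap, shrinking the gap as we go
    assignment = {}
    for bus, size in requested_sizes.items():
        for i, (s, e) in enumerate(free):
            if e - s >= size:
                assignment[bus] = (s, s + size)
                free[i] = (s + size, e)
                break
    return assignment

a = assign_bus_ranges({"bus0": 0x100, "bus1": 0x200},
                      [(0x000, 0x400), (0x800, 0xA00)], 0x1000)
```

With the ranges assigned up front, an additional virtual device can later be hot-plugged into whichever bus still has headroom inside its own slice.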
-
Publication number: 20190057058Abstract: A format agnostic data transfer system and methods for transferring data between disparate components can include a transmitting component having a data push controller, a receiving component having a processor, and a memory connected to the processor. The data push controller can receive configuration instructions from the receiving component processor and transfer said data to said memory, without requiring the use of direct memory access (DMA) at said transmitting component. The reconfigurable nature of the data push controller can allow for both fixed and variable stream data to be sent, making the system data format agnostic. The receiving component can be a processor, while the transmitting component can be a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC).Type: ApplicationFiled: October 22, 2018Publication date: February 21, 2019Applicant: United States of America, as Represented by the Secretary of the NavyInventors: Brent L. Anderson, Justin Sellers, Lance Gorrell, Jude Seeber
-
Publication number: 20190057059Abstract: A method includes receiving, by a multi-purpose callout processor, a transaction input from an external client application. The transaction input includes a request to perform a specific functionality by a transaction processing system. The multi-purpose callout processor implements a multi-purpose application program interface between the external client application and the transaction processing system. The method also includes performing a callout based on the transaction input. The multi-purpose callout processor is configured to perform a plurality of types of callouts.Type: ApplicationFiled: August 15, 2017Publication date: February 21, 2019Inventors: Suzette M. Wendler, Jack C. Yuan
-
Publication number: 20190057060Abstract: Techniques are disclosed for reconfigurable fabric data routing. A plurality of kernels is allocated across a reconfigurable fabric comprised of a plurality of clusters, wherein the plurality of kernels includes at least a first kernel and a second kernel. The first kernel is mounted in a first set of clusters within the plurality of clusters. The second kernel is mounted in a second set of clusters within the plurality of clusters. Available routing is determined through the second set of clusters. A porosity map through the second set of clusters is calculated based on the available routing through the second set of clusters. Data is sent through the second set of clusters to the first set of clusters based on the porosity map. Data input needs are evaluated for the first kernel. The available routing is controlled with instructions in circular buffers within the second set of clusters.Type: ApplicationFiled: August 17, 2018Publication date: February 21, 2019Inventor: Christopher John Nicol
-
Publication number: 20190057061Abstract: A mechanism is described for facilitating smart spill/fill data transfers in computing environments. A method of embodiments, as described herein, includes facilitating dividing a kernel into regions including low pressure regions and high pressure regions, where the low pressure regions are associated with low use of registers hosted by a processor of a computing device, while the high pressure regions are associated with high use of the registers. The method may further include transferring of data between memory and the registers based on at least one of the low pressure regions and the high pressure regions.Type: ApplicationFiled: August 16, 2017Publication date: February 21, 2019Applicant: Intel CorporationInventors: MAREK TARGOWSKI, Konrad TRIFUNOVIC
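The division into low- and high-pressure regions can be modeled by comparing the live-register count at each instruction against a threshold and grouping consecutive instructions on the same side of it. A toy sketch (the live counts and the register limit are invented for the example):

```python
# Illustrative partition of a kernel into low- and high-pressure regions
# by live-register count; counts and the limit are assumptions.

def pressure_regions(live_counts, limit):
    """live_counts: live registers at each instruction of the kernel.
    Returns a list of (kind, start, end) regions, end exclusive, where
    kind is "high" when the count exceeds limit and "low" otherwise.
    """
    regions, start = [], 0
    for i in range(1, len(live_counts) + 1):
        current_high = live_counts[start] > limit
        # close the region when the pressure class flips or input ends
        if i == len(live_counts) or (live_counts[i] > limit) != current_high:
            regions.append(("high" if current_high else "low", start, i))
            start = i
    return regions

regions = pressure_regions([2, 3, 9, 10, 8, 3, 2], limit=6)
```

A spill/fill scheduler could then confine memory-register transfers to the boundaries of the "high" regions, leaving "low" regions untouched.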
-
Publication number: 20190057062Abstract: An information processing apparatus includes an extraction unit that extracts, if a third axis is inserted between a first axis and a second axis adjacent to each other in a deployment table, a relations diagram associated with the deployment table and an insertion unit that inserts, in the relations diagram, a third item relating to an axis item of the third axis between a first item relating to an axis item of the first axis and a second item relating to an axis item of the second axis.Type: ApplicationFiled: March 9, 2018Publication date: February 21, 2019Applicant: FUJI XEROX Co., Ltd.Inventors: Shigehiro FURUKAWA, Tomoyuki ITO
-
Publication number: 20190057063Abstract: Aspects for submatrix operations in neural network are described herein. The aspects may include a controller unit configured to receive a submatrix instruction. The submatrix instruction may include a starting address of a submatrix of a matrix, a width of the submatrix, a height of the submatrix, and a stride that indicates a position of the submatrix relative to the matrix. The aspects may further include a computation module configured to select one or more values from the matrix as elements of the submatrix in accordance with the starting address of the matrix, the starting address of the submatrix, the width of the submatrix, the height of the submatrix, and the stride.Type: ApplicationFiled: October 22, 2018Publication date: February 21, 2019Inventors: Shaoli Liu, Xiao Zhang, Yunji Chen, Tianshi Chen
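The submatrix-select operation described by the instruction fields (starting address, width, height, stride) is equivalent to a strided walk over a flat buffer. A minimal software model, with the matrix contents and field values chosen for illustration:

```python
# Illustrative software model of the submatrix-select operation: the
# matrix is a flat list standing in for memory, and the instruction
# fields walk out the submatrix rows. Values below are assumptions.

def select_submatrix(matrix, sub_start, width, height, stride):
    """matrix: flat row-major buffer standing in for memory.
    sub_start: offset of the submatrix's first element.
    stride: distance between the starts of consecutive submatrix rows,
    i.e. the row length of the enclosing matrix (the submatrix's
    position relative to the matrix).
    """
    rows = []
    for r in range(height):
        base = sub_start + r * stride
        rows.append(matrix[base:base + width])  # one contiguous row
    return rows

# a 4x4 matrix stored row-major with values 0..15; select the 2x2
# block whose top-left element sits at row 1, column 1 (offset 5)
m = list(range(16))
sub = select_submatrix(m, sub_start=5, width=2, height=2, stride=4)
```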
-
Publication number: 20190057064Abstract: A method for facilitating the testing of a data sample involves a computing device carrying out the following actions: displaying a data sample on a user interface; receiving, via the user interface, a selection of a test to be performed on the data sample; receiving, via the user interface, an input of a result of the test; generating, in a graph database, a vertex representing a visual indicator corresponding to the input result; and creating, in the graph database, an association between the vertex representing the visual indicator and a vertex representing a file containing the displayed data sample.Type: ApplicationFiled: August 21, 2017Publication date: February 21, 2019Inventors: John Bonk, Ryan Gilsdorf, James Michael Morse, Jason Aguilon, David Andrew Haila, Matthew Sanders, Patrick Corwin Kujawa, Robert Reed Becker, Sean Martin Kelly Burke, Stephen Bush
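The graph bookkeeping here is: one vertex per visual indicator, one vertex per sample file, and an edge associating the two. A toy in-memory stand-in (not a real graph database API; vertex labels, property names, and the edge label are assumptions):

```python
# Toy in-memory stand-in for the graph-database bookkeeping described
# above; labels and property names are invented for this sketch.

class Graph:
    def __init__(self):
        self.vertices, self.edges = [], []

    def add_vertex(self, label, **props):
        v = {"id": len(self.vertices), "label": label, **props}
        self.vertices.append(v)
        return v

    def add_edge(self, src, dst, label):
        self.edges.append((src["id"], dst["id"], label))

def record_test_result(graph, file_vertex, test_name, result):
    # vertex representing the visual indicator for the entered result
    indicator = graph.add_vertex("indicator", test=test_name, result=result)
    # association between the indicator and the sample's file vertex
    graph.add_edge(indicator, file_vertex, "indicates")
    return indicator

g = Graph()
f = g.add_vertex("file", path="sample-001.csv")
ind = record_test_result(g, f, "purity", "pass")
```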
-
Publication number: 20190057065Abstract: A modeling device models observation data and includes a database unit that stores a mathematical model based on environment data obtained from the observation data, a statistical model based on a parameter, and region information including each point at which the observation data is obtained; a data reading unit that acquires the observation data and the environment data; and a modeling processing unit that applies the mathematical model or the statistical model. The modeling processing unit calculates a model output by applying the acquired environment data to the mathematical model at each point included in the region information. When a difference between the calculated model output and the acquired observation data is determined to be equal to or greater than a preset threshold, the modeling processing unit calculates a parameter from the acquired observation data and applies the statistical model using the calculated parameter.Type: ApplicationFiled: April 1, 2016Publication date: February 21, 2019Inventors: Yoshiyasu TAKAHASHI, Yumiko ISHIDO
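The switching rule in this abstract, reduced to one point: apply the mathematical model to the environment data, and if its output disagrees with the observation by at least the threshold, fit a parameter from the observation and apply the statistical model instead. A hedged sketch in which both models are trivial stand-ins and the threshold is invented:

```python
# Hedged sketch of the model-switching rule; the mathematical model,
# the statistical model's parameter fit, and the threshold are all
# toy stand-ins invented for this example.

THRESHOLD = 5.0  # assumed preset threshold

def model_point(env_value, observed):
    """Return (model_output, model_used) for one point."""
    math_output = 2.0 * env_value  # assumed mathematical model
    if abs(math_output - observed) < THRESHOLD:
        return math_output, "mathematical"
    # difference >= threshold: fit a bias parameter from the observed
    # data and apply the statistical model with it
    parameter = observed - env_value
    return env_value + parameter, "statistical"

out1 = model_point(3.0, 7.0)    # |6 - 7| < 5  -> mathematical model holds
out2 = model_point(3.0, 20.0)   # |6 - 20| >= 5 -> fall back to statistical
```

In the patented device this decision is made per point in the stored region information, so different points of the same region can end up served by different models.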
-
Publication number: 20190057066Abstract: Mobile devices enabled to support resolution-independent scalable display of Internet (Web) content to allow Web pages to be scaled (zoomed) and panned for better viewing on smaller screen sizes. The mobile devices employ software-based processing of original Web content, including HTML-based content, XML, cascade style sheets, etc. to generate scalable content. The scalable content and/or data derived therefrom are then employed to enable the Web content to be rapidly rendered, zoomed, and panned. Display lists may also be employed to provide further enhancements in rendering speed. Context zooms, including tap-based zooms on columns, images, and paragraphs are also enabled.Type: ApplicationFiled: September 11, 2018Publication date: February 21, 2019Inventors: Gary B. Rohrabaugh, Scott A. Sherman