Patent Applications Published on July 25, 2019
  • Publication number: 20190227919
    Abstract: Provided is an electronic apparatus that determines, early and intelligently, whether or not a product guarantee period can be satisfied. When a write guarantee value is taken to be (Wmax), a total write amount is taken to be (Wsum), an average difference value per unit period is taken to be (Ave.), a predicted write amount per unit period is taken to be (Wref), and a remaining guarantee period is taken to be (Dremain), a system-control unit 132 obtains the remaining guarantee period by the calculation (Dremain) = [(Wmax) − (Wsum)] / [(Ave.) + (Wref)]. Furthermore, when an elapsed period is taken to be (Dpast) and a predicted usable period is taken to be (Dprediction), the system-control unit obtains the predicted usable period by the calculation (Dprediction) = (Dpast) + (Dremain). In the case where (Dprediction) < (Dguarantee), the guarantee period, the system-control unit 132 determines that there is excessive writing.
    Type: Application
    Filed: January 24, 2019
    Publication date: July 25, 2019
    Applicant: KYOCERA Document Solutions Inc.
    Inventor: Kenichiro NITTA
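
A minimal Python sketch of the two calculations described in the abstract above (20190227919); the function name and the example numbers are illustrative assumptions, not taken from the patent.

```python
def predict_usable_period(w_max, w_sum, ave, w_ref, d_past, d_guarantee):
    """Estimate remaining and total usable periods and flag excessive writing.

    w_max:       write guarantee value (Wmax)
    w_sum:       total write amount so far (Wsum)
    ave:         average difference value per unit period (Ave.)
    w_ref:       predicted write amount per unit period (Wref)
    d_past:      elapsed period (Dpast)
    d_guarantee: product guarantee period (Dguarantee)
    """
    # Remaining guarantee period: remaining write budget over the per-period rate.
    d_remain = (w_max - w_sum) / (ave + w_ref)
    # Predicted usable period: periods already elapsed plus the remainder.
    d_prediction = d_past + d_remain
    # Excessive writing when the device is predicted to wear out too early.
    return d_remain, d_prediction, d_prediction < d_guarantee


# Example: 100 TB budget, 40 TB written after 400 days of a 5-year guarantee.
print(predict_usable_period(100e12, 40e12, 20e9, 80e9, 400, 5 * 365))
```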
  • Publication number: 20190227920
    Abstract: An arrangement is disclosed comprising a memory arrangement configured to store and retrieve data; an interface to allow data to be received from and transmitted to a host by the arrangement; and a processor configured to dynamically conduct automatic performance tuning for the memory arrangement.
    Type: Application
    Filed: January 19, 2018
    Publication date: July 25, 2019
    Inventors: Darin Edward GERHART, Cory LAPPI, Nicholas Edward ORTMEIER
  • Publication number: 20190227921
    Abstract: A computer having a plurality of accounts and a storage device having a host interface, a controller, non-volatile storage media, and firmware. Each account has a namespace identifier that identifies the allocation of a portion of the non-volatile storage media to the account. The storage device stores a namespace map that defines the mapping between the logical addresses in a namespace identified by the namespace identifier and the logical addresses, in a capacity of the storage device, that correspond to the portion of the non-volatile storage media allocated to and accessible to the account. The account accesses the portion of the non-volatile storage media via the logical addresses in the namespace. The firmware of the storage device configures the controller to convert, using the namespace map, the logical addresses in the namespace to the physical addresses of the portion of the non-volatile storage media.
    Type: Application
    Filed: January 19, 2018
    Publication date: July 25, 2019
    Inventor: Alex Frolikov
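
The abstract above (20190227921) maps namespace-local logical addresses to device logical addresses through a namespace map before the usual logical-to-physical translation. The Python below is only a sketch of that two-step lookup; the classes and the toy tables are assumptions.

```python
class NamespaceMap:
    """Maps an account's namespace-local logical blocks to device logical blocks."""

    def __init__(self, base_lba, block_count):
        self.base_lba = base_lba        # start of the region allocated to the account
        self.block_count = block_count  # size of the namespace

    def to_device_lba(self, ns_lba):
        if ns_lba >= self.block_count:
            raise ValueError("address outside the namespace")
        return self.base_lba + ns_lba


def translate(namespace_maps, l2p_table, nsid, ns_lba):
    """Namespace LBA -> device LBA -> physical address."""
    device_lba = namespace_maps[nsid].to_device_lba(ns_lba)
    return l2p_table[device_lba]


# Two accounts with disjoint device ranges and a toy logical-to-physical table.
maps = {1: NamespaceMap(0, 1024), 2: NamespaceMap(1024, 2048)}
l2p = {lba: 0x1000 + lba * 16 for lba in range(4096)}
print(hex(translate(maps, l2p, 2, 10)))
```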
  • Publication number: 20190227922
    Abstract: A data storage interconnect system has a network controller with a first expansion bus switch connected to a central processor over an expansion bus interface thereof, and a first transceiver connected to the first expansion bus switch. A directly attachable storage host with a second transceiver is communicatively linked to the first transceiver of the network controller. A second expansion bus switch is connected to the second transceiver, and is connectable to a removable storage device over an expansion bus interface. The removable storage device communicates with the second expansion bus switch over an expansion bus protocol. A data transmission link interconnects the first transceiver and the second transceiver, with expansion bus protocol data traffic between the first expansion bus switch and the second expansion bus switch being carried thereon.
    Type: Application
    Filed: January 22, 2018
    Publication date: July 25, 2019
    Inventors: Peter Braun, Melvin Lahip
  • Publication number: 20190227923
    Abstract: A memory system includes: a memory device suitable for storing data; a history data generator suitable for generating history data having a plurality of physical memory block address data; and a history data analyzer suitable for detecting logical memory block address data in the history data.
    Type: Application
    Filed: July 25, 2018
    Publication date: July 25, 2019
    Inventor: Jeen PARK
  • Publication number: 20190227924
    Abstract: Methods and systems are provided for compression and reconstruction of system data. A controller of a memory system includes a compression component for searching for a pattern of an array of system data including a plurality of elements and compressing the array of system data based on the pattern. The array of system data includes neighbor elements corresponding to a first pattern, among the plurality of elements. The compressed system data includes: first information including a first bit indicating a first content; and second information including a first bitmap, each bit of the first bitmap indicating whether a corresponding element is a first element among the neighbor elements of the first pattern.
    Type: Application
    Filed: January 23, 2019
    Publication date: July 25, 2019
    Inventors: Igor NOVOGRAN, Alexander IVANIUK
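
The abstract above (20190227924) is high level, but the general idea it belongs to -- encoding an array whose neighbor elements mostly match a pattern as one reference content plus a per-element bitmap -- can be sketched as follows. This is an illustration of that class of compression under invented data layouts, not the claimed method.

```python
def compress_run(elements):
    """Store a run as (reference, bitmap, exceptions): bit i of the bitmap
    says whether element i equals the reference content; non-matching
    elements are kept verbatim."""
    reference = elements[0]
    bitmap, exceptions = 0, []
    for i, value in enumerate(elements):
        if value == reference:
            bitmap |= 1 << i
        else:
            exceptions.append(value)
    return reference, bitmap, exceptions


def decompress_run(reference, bitmap, exceptions, length):
    out, rest = [], iter(exceptions)
    for i in range(length):
        out.append(reference if bitmap & (1 << i) else next(rest))
    return out


data = [7, 7, 7, 3, 7, 9, 7, 7]
ref, bm, exc = compress_run(data)
assert decompress_run(ref, bm, exc, len(data)) == data
```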
  • Publication number: 20190227925
    Abstract: A data storage apparatus executes a garbage collection method. The data storage apparatus includes a NAND flash memory including blocks, each of which includes pages. In the garbage collection method, a destination block is selected from the blocks. Mapping tables and a relevance bitmap are built before writing user data into the destination block. Each bit in the relevance bitmap is related to one of the mapping tables. A victim block is selected from the blocks. Some of the mapping tables are read according to the relevance bitmap for the victim block. It is then determined, one page after another, whether the pages of the victim block are recorded in the read mapping tables. A page of the victim block is set to be a valid page if it is found in a read mapping table. Data in the valid pages is written into another block.
    Type: Application
    Filed: January 23, 2018
    Publication date: July 25, 2019
    Inventors: Yen-Lan Hsu, Bo-Shian Hsu, Po-Chien Chang
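
A condensed Python sketch of the garbage-collection flow in 20190227925, assuming a mapping table can be modeled as a set of (block, page) pairs that are still valid; the data layout and the callback are illustrative only.

```python
from collections import namedtuple

Page = namedtuple("Page", "number data")
Block = namedtuple("Block", "id pages")


def collect(victim, relevance_bitmap, mapping_tables, write_page):
    """Copy the still-valid pages out of a victim block.

    relevance_bitmap: bit i set => mapping table i may reference the victim.
    mapping_tables:   list of sets of (block_id, page_number) pairs that are valid.
    write_page:       callback that writes a page into another block.
    """
    # Read only the mapping tables flagged as relevant for this victim.
    relevant = [t for i, t in enumerate(mapping_tables)
                if relevance_bitmap & (1 << i)]
    for page in victim.pages:
        # A page is valid if any relevant mapping table still points at it.
        if any((victim.id, page.number) in t for t in relevant):
            write_page(page)  # move valid data into another block
    # The victim block can now be erased and reused.


victim = Block(3, [Page(0, "a"), Page(1, "b"), Page(2, "c")])
tables = [{(3, 0)}, {(9, 5)}, {(3, 2)}]
collect(victim, 0b101, tables, lambda p: print("copy page", p.number))
```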
  • Publication number: 20190227926
    Abstract: A method for managing a flash memory module, an associated flash memory controller and an associated electronic device are provided, wherein the method includes: when the flash memory module is powered on, and a garbage collection operation is not finished before the flash memory module is powered on: determining a progress of the garbage collection operation to generate a determination result; and determining to discard a target block in the garbage collection operation or to write dummy data into remaining pages of the target block according to the determination result.
    Type: Application
    Filed: June 13, 2018
    Publication date: July 25, 2019
    Inventor: Kuan-Yu Ke

  • Publication number: 20190227927
    Abstract: A data storage device utilized for dynamically executing a garbage-collection process is provided. The data storage device includes a flash memory and a controller. The flash memory includes a plurality of blocks. Each of the blocks includes a plurality of pages. The controller is coupled to the flash memory and is configured to determine whether or not the number of spare blocks is lower than a predetermined value, and to execute the garbage-collection process according to the difference between the predetermined value and the number of spare blocks. The garbage-collection process merges at least two data blocks to release at least one spare block.
    Type: Application
    Filed: January 4, 2019
    Publication date: July 25, 2019
    Inventor: Ningzhong MIAO
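
A small Python sketch of the dynamic trigger in 20190227927: the amount of garbage collection performed scales with how far the spare-block count has fallen below the predetermined value. The function and callback names are assumptions.

```python
def maybe_collect(spare_blocks, threshold, merge_two_blocks):
    """Run garbage collection in proportion to the spare-block deficit.

    spare_blocks:     current number of spare blocks
    threshold:        predetermined minimum number of spare blocks
    merge_two_blocks: callback that merges two data blocks and returns the
                      number of spare blocks released (at least one)
    """
    deficit = threshold - spare_blocks
    while deficit > 0:
        deficit -= merge_two_blocks()


def demo_merge():
    print("merging two data blocks")
    return 1  # one spare block released


maybe_collect(spare_blocks=3, threshold=6, merge_two_blocks=demo_merge)
```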
  • Publication number: 20190227928
    Abstract: In an embodiment, a partition cost of one or more of the plurality of partitions and a data block cost for one or more data blocks that may be subjected to a garbage collection operation are determined. The partition cost and the data block cost are combined into an overall reclaim cost by specifying both the partition cost and the data block cost in terms of a computing system latency. A byte constant multiplier that is configured to modify the overall reclaim cost to account for the amount of data objects that may be rewritten during the garbage collection operation may be applied. The one or more partitions and/or one or more data blocks that have the lowest overall reclaim cost while reclaiming an acceptable amount of data block space may be determined and be included in a garbage collection schedule.
    Type: Application
    Filed: April 1, 2019
    Publication date: July 25, 2019
    Inventors: Shane Kumar MAINALI, Rushi Srinivas SURLA, Peter BODIK, Ishai MENACHE, Yang LU
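
The cost model in 20190227928 expresses a partition cost and a data-block cost in terms of system latency, combines them, and scales by a byte constant multiplier for the data a collection would rewrite. The Python below sketches such a scheduler; the field names and the additive combination are assumptions.

```python
def overall_reclaim_cost(partition_latency, block_latency,
                         bytes_rewritten, byte_constant):
    """Combine the partition and data-block costs (both as latency) and
    account for the data that garbage collection would rewrite."""
    return partition_latency + block_latency + byte_constant * bytes_rewritten


def build_schedule(candidates, byte_constant, min_reclaimed_bytes):
    """Pick the lowest-cost candidates that still reclaim enough space."""
    ranked = sorted(candidates, key=lambda c: overall_reclaim_cost(
        c["partition_latency"], c["block_latency"],
        c["bytes_rewritten"], byte_constant))
    picked, reclaimed = [], 0
    for c in ranked:
        if reclaimed >= min_reclaimed_bytes:
            break
        picked.append(c)
        reclaimed += c["bytes_reclaimed"]
    return picked


candidates = [
    {"partition_latency": 5, "block_latency": 2,
     "bytes_rewritten": 1000, "bytes_reclaimed": 4000},
    {"partition_latency": 1, "block_latency": 1,
     "bytes_rewritten": 100, "bytes_reclaimed": 3000},
]
print(build_schedule(candidates, byte_constant=0.001, min_reclaimed_bytes=3000))
```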
  • Publication number: 20190227929
    Abstract: A data storage device includes a memory device and a memory controller. The memory controller is coupled to the memory device and configured to access the memory device and establish a physical to logical address mapping table and a logical address section table. The logical address section table records statuses of a plurality of logical address sections. Each status is utilized to indicate whether the physical to logical address mapping table records any logical address that belongs to the corresponding logical address section. The logical address section table includes a plurality of section bits in a plurality of dimensions. When the memory controller receives a write command to write data of a first predetermined logical address, the memory controller determines the section bit of each dimension corresponding to the first predetermined logical address, and accordingly sets a corresponding digital value for each section bit.
    Type: Application
    Filed: January 16, 2019
    Publication date: July 25, 2019
    Inventors: Hsuan-Ping LIN, Chia-Chi LIANG
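
20190227929 keeps, per logical-address section and across more than one dimension, a bit saying whether the mapping table records any address in that section. The sketch below shows one plausible two-dimension (coarse/fine) bitmap of that kind; the sizes and layout are invented for illustration.

```python
SECTION_SIZE = 1024      # logical addresses per finest-grained section (assumption)
FANOUT = 8               # fine sections covered by one coarse-dimension bit

# One bit array per dimension; index 0 is the coarse dimension.
section_bits = [0, 0]


def _indices(lba):
    fine = lba // SECTION_SIZE
    return fine // FANOUT, fine     # (coarse index, fine index)


def mark_written(lba):
    """Set the section bit of every dimension covering this logical address."""
    for dim, index in enumerate(_indices(lba)):
        section_bits[dim] |= 1 << index


def section_may_be_mapped(lba):
    """An unset bit in any dimension lets a lookup skip the mapping table."""
    return all(section_bits[dim] & (1 << index)
               for dim, index in enumerate(_indices(lba)))


mark_written(5000)                   # fine section 4, coarse section 0
print(section_may_be_mapped(5100))   # True: the same section was written
print(section_may_be_mapped(90000))  # False: nothing recorded there
```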
  • Publication number: 20190227930
    Abstract: A method is applied to an electronic device, all storage space of the electronic device includes internal storage space and at least one external storage space, and the method includes: receiving a data write instruction, where the data write instruction carries target write data; selecting at least one of all the storage space as target storage space according to a preset rule, and performing a write operation in the target storage space; and after the write operation is completed, updating a virtual file that is obtained by summarizing and combining files of a same file type in all the storage space of the electronic device and that is stored in a virtual storage card, and updating a mapping relationship between a virtual storage path corresponding to the virtual storage card and a physical storage path corresponding to the target storage space.
    Type: Application
    Filed: June 30, 2016
    Publication date: July 25, 2019
    Applicant: Huawei Technologies Co., Ltd.
    Inventors: Yongfeng TU, Wenmei GAO, Yuanli GAN
  • Publication number: 20190227931
    Abstract: A data storage device may include: a nonvolatile memory device including first and second memory regions configured to be read-interleaved with each other; and a processor configured to select a first read command among read commands received from a host device, select a second read command among the read commands excluding the first read command, and control the nonvolatile memory device to perform map read on the first and second read commands at the same time. The processor selects, as the second read command, at least one read command that is configured to be read-interleaved with the first read command.
    Type: Application
    Filed: August 23, 2018
    Publication date: July 25, 2019
    Inventor: In JUNG
  • Publication number: 20190227932
    Abstract: A simultaneous multithread (SMT) processor having a shared dispatch pipeline includes a first circuit that detects a cache miss thread. A second circuit determines a first cache hierarchy level at which the detected cache miss occurred. A third circuit determines a Next To Complete (NTC) group in the thread and a plurality of additional groups (X) in the thread. The additional groups (X) are dynamically configured based on the detected cache miss. A fourth circuit determines whether any groups in the thread are younger than the determined NTC group and the plurality of additional groups (X), and flushes all the determined younger groups from the cache miss thread.
    Type: Application
    Filed: April 2, 2019
    Publication date: July 25, 2019
    Inventors: Gregory W. Alexander, Brian D. Barrick, Thomas W. Fox, Christian Jacobi, Anthony Saporito, Somin Song, Aaron Tsai
  • Publication number: 20190227933
    Abstract: Disclosed aspects relate to accelerator sharing among a plurality of processors through a plurality of coherent proxies. The cache lines in a cache associated with the accelerator are allocated to one of the plurality of coherent proxies. In a cache directory for the cache lines used by the accelerator, the status of the cache lines and the identification information of the coherent proxies to which the cache lines are allocated are provided. Each coherent proxy maintains a shadow directory of the cache directory for the cache lines allocated to it. In response to receiving an operation request, a coherent proxy corresponding to the request is determined. The accelerator communicates with the determined coherent proxy for the request.
    Type: Application
    Filed: April 4, 2019
    Publication date: July 25, 2019
    Inventors: Peng Fei BG Gou, Yang Liu, Yang Fan EL Liu, Yong Lu
  • Publication number: 20190227934
    Abstract: An example method of maintaining cache coherency in a virtualized computing system includes: trapping access to a memory page by guest software in a virtual machine at a hypervisor managing the virtual machine, where the memory page is not mapped in a second stage page table managed by the hypervisor; performing cache coherency maintenance for instruction and data caches of a central processing unit (CPU) in the virtualized computing system in response to the trap; mapping the memory page in the second stage page table with execute permission; and resuming execution of the virtual machine.
    Type: Application
    Filed: January 23, 2018
    Publication date: July 25, 2019
    Inventors: Ye LI, Cyprien LAPLACE, Andrei WARKENTIN, Alexander FAINKICHEN, Regis DUCHESNE
  • Publication number: 20190227935
    Abstract: In a data write control method, a write control apparatus currently runs a program in a write-back mode in which data are written to a volatile memory. When the apparatus detects that a quantity of dirty blocks in the volatile memory has reached a threshold, it predicts a first amount of execution progress of the program within a prediction time period under an assumption of the apparatus being in a write-through mode in which data are written to the volatile memory and a non-volatile memory. The apparatus also predicts a second amount of execution progress of the program within the prediction time period under an assumption of the apparatus being in the write-back mode. When the predicted first amount of execution progress exceeds the predicted second amount of execution progress, the apparatus switches from the write-back mode to the write-through mode.
    Type: Application
    Filed: April 1, 2019
    Publication date: July 25, 2019
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Hehe LI, Yongpan LIU, Qinghang ZHAO, Rong LUO, Huazhong YANG
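
A compact Python sketch of the mode-switch decision in 20190227935; the two predictors are placeholders standing in for the patent's progress-prediction step, while the threshold check follows the abstract.

```python
def choose_write_mode(dirty_blocks, dirty_threshold,
                      predict_progress_write_through,
                      predict_progress_write_back):
    """Return the mode to run in for the next prediction time period.

    The predictors estimate how much program progress would be made within
    the prediction time period under each mode's assumptions.
    """
    if dirty_blocks < dirty_threshold:
        return "write-back"          # threshold not reached, keep current mode
    wt = predict_progress_write_through()
    wb = predict_progress_write_back()
    return "write-through" if wt > wb else "write-back"


# Toy predictors: write-through wins once flushing the dirty blocks would
# stall the program more than persisting each write eagerly does.
print(choose_write_mode(900, 512, lambda: 0.42, lambda: 0.35))
```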
  • Publication number: 20190227936
    Abstract: A heterogeneous computing system includes a first processor and a second processor that are heterogeneous. The second processor is configured to sequentially execute a plurality of kernels offloaded from the first processor. A coherency controller is configured to classify each of the plurality of kernels into one of a first group and a second group, based on attributes of instructions included in each of the plurality of kernels before the plurality of kernels are executed and is further configured to reclassify one of the plurality of kernels from the second group to the first group based on a transaction generated between the first processor and the second processor during execution of the one of the plurality of kernels.
    Type: Application
    Filed: August 10, 2018
    Publication date: July 25, 2019
    Inventor: Hyunjun Jang
  • Publication number: 20190227937
    Abstract: A method for operating a database and a cache of at least a portion of the database may include receiving a plurality of read requests to read a data entity from the database and counting respective quantities of the requests serviced from the database and from the cache. The method may further include receiving a write request to alter the data entity in the database and determining whether to update the cache to reflect the alteration to the data entity in the write request according to the quantity of the requests serviced from the database and the quantity of the requests serviced from the cache. In an embodiment, the method further includes causing the cache to be updated when a ratio of the quantity of the requests serviced from the database to the quantity of the requests serviced from the cache exceeds a predetermined threshold.
    Type: Application
    Filed: January 23, 2018
    Publication date: July 25, 2019
    Inventors: Hari Ramamurthy, Chandan Venkatesh, Krishna Guggulotu, Rageesh Thekkeyil
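
A Python sketch of the counting policy in 20190227937; the class layout is an assumption, and a guard against division by zero is added for the illustration.

```python
class CountingCache:
    """On writes, decide whether to refresh the cache based on how reads
    have been served so far (from the database vs. from the cache)."""

    def __init__(self, ratio_threshold):
        self.ratio_threshold = ratio_threshold
        self.db_reads = 0
        self.cache_reads = 0
        self.entries = {}

    def read(self, key, db):
        if key in self.entries:
            self.cache_reads += 1
            return self.entries[key]
        self.db_reads += 1
        return db[key]

    def write(self, key, value, db):
        db[key] = value                        # the database is always updated
        served_from_cache = max(self.cache_reads, 1)  # guard (assumption)
        if self.db_reads / served_from_cache > self.ratio_threshold:
            self.entries[key] = value          # refresh the cache too


db = {"user:1": "alice"}
cache = CountingCache(ratio_threshold=2.0)
for _ in range(5):
    cache.read("user:1", db)                   # all served from the database
cache.write("user:1", "alice-2", db)           # 5 / 1 > 2.0 -> cache updated
print(cache.entries)
```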
  • Publication number: 20190227938
    Abstract: An example of a system includes a host interface, a set of non-volatile memory cells, and one or more control circuits coupled to the host interface and coupled to the set of non-volatile memory cells. The one or more control circuits include a portion of a Random Access Memory (RAM) configured as an overlay RAM. The one or more control circuits are configured to transfer overlay code to the overlay RAM via the host interface.
    Type: Application
    Filed: January 22, 2018
    Publication date: July 25, 2019
    Applicant: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventors: Raghavendra Das Gopalakrishnan, Pankaj Dixit
  • Publication number: 20190227939
    Abstract: There are provided a memory controller and a memory system having the same. A memory controller includes: a command queue for queuing commands and outputting command information including information of a previous command and a subsequent command; a command detector for outputting a detection signal according to the command information; and a command generator for generating the command and outputting a management command for managing a last command immediately following the previous command in response to the detection signal.
    Type: Application
    Filed: August 21, 2018
    Publication date: July 25, 2019
    Inventor: Jeen PARK
  • Publication number: 20190227940
    Abstract: A memory system includes a nonvolatile memory device configured to store a plurality of segments, each of which is configured by a plurality of map data; a first region configured to cache a target segment including target map data among the plurality of segments; a second region configured to cache a target map data group selected among a plurality of map data groups in the target segment; and a controller configured to control data caching of the first region and the second region, wherein each of the plurality of map data groups includes a plurality of map data, and the second region caches data of a unit smaller than the first region.
    Type: Application
    Filed: August 29, 2018
    Publication date: July 25, 2019
    Inventor: Jee Yul KIM
  • Publication number: 20190227941
    Abstract: A dedupable cache is disclosed. The dedupable cache may include cache memory including both a dedupable read cache and a non-dedupable write buffer. The dedupable cache may also include a deduplication engine to manage reads from and writes to the dedupable read cache, and may return a write status signal indicating whether a write to the dedupable read cache was successful or not. The dedupable cache may also include a cache controller, which may include: a cache hit/miss check to determine whether an address in a request may be found in the dedupable read cache; a hit block to manage data accesses when the requested data may be found in the dedupable read cache; a miss block to manage data accesses when the requested data is not found in the dedupable read cache; and a history storage to store information about accesses to the data in the dedupable read cache.
    Type: Application
    Filed: March 23, 2018
    Publication date: July 25, 2019
    Inventors: Mu Tien CHANG, Andrew CHANG, Dongyan JIANG, Hongzhong ZHENG
  • Publication number: 20190227942
    Abstract: An information handling system includes an input/output (I/O) device and an input/output memory management unit (I/OMMU). The I/OMMU is configured to translate virtual addresses from the I/O device to physical addresses in a memory. The I/O device is configured to send a first virtual address to the I/OMMU, to receive an error indication from the I/OMMU, and to send an interrupt in response to the error indication, wherein the error indication indicates that the I/OMMU failed to translate the first virtual address into a first physical address.
    Type: Application
    Filed: January 24, 2018
    Publication date: July 25, 2019
    Inventor: Shyamkumar T. Iyer
  • Publication number: 20190227943
    Abstract: A method for accessing a physical region page (PRP) list includes obtaining a PRP address of a PRP list, in which the PRP address has M bits; performing an operation on the first N bits of the PRP address and on the (N+1)th to Mth bits of the PRP address, respectively, to obtain a page base address if the PRP address is within a page boundary; and performing an operation on the first N bits of the PRP address and on the (N+1)th to Mth bits of the PRP address, respectively, to obtain a next PRP address pointer if the PRP address reaches the page boundary. N is an integer, and M is an integer larger than N.
    Type: Application
    Filed: July 18, 2018
    Publication date: July 25, 2019
    Inventor: Wen-Cheng CHEN
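
A Python sketch in the spirit of 20190227943: the low N bits and the upper bits of a PRP address are handled separately, yielding either a data-page base address or, at the page boundary, the pointer to the next PRP list. The 4 KiB page size and the "last 8-byte slot" boundary test follow the usual NVMe PRP convention and are assumptions here, not necessarily the claimed operation.

```python
PAGE_SHIFT = 12                          # N = 12 for 4 KiB pages (assumption)
OFFSET_MASK = (1 << PAGE_SHIFT) - 1      # low N bits of an M-bit PRP address
LAST_SLOT = OFFSET_MASK & ~0x7           # offset of the last 8-byte entry


def classify_prp(prp_address):
    """Split the address into offset (first N bits) and base (bits N+1..M)."""
    offset = prp_address & OFFSET_MASK
    base = prp_address & ~OFFSET_MASK
    if offset == LAST_SLOT:
        # The entry at the page boundary holds the next PRP list pointer.
        return "next-list-pointer", base, offset
    return "page-base", base, offset


print(classify_prp(0x0001_2345_6010))    # within the page boundary
print(classify_prp(0x0001_2345_6FF8))    # reaches the page boundary
```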
  • Publication number: 20190227944
    Abstract: Methods and apparatus for locking at least a portion of a shared memory resource. In one embodiment, an electronic device configured to lock at least a portion of a shared memory is disclosed. The electronic device includes a host processor, at least one peripheral processor and a physical bus interface configured to couple the host processor to the peripheral processor. The electronic device further includes a software framework that is configured to: attempt to lock a portion of the shared memory; verify that the peripheral processor has not locked the shared memory; when the portion of the shared memory is successfully locked via the verification that the peripheral processor has not locked the portion of the shared memory, execute a critical section of the shared memory; and otherwise attempt to lock at least the portion of the shared memory at a later time.
    Type: Application
    Filed: January 28, 2019
    Publication date: July 25, 2019
    Inventors: Vladislav Petkov, Haining Zhang, Karan Sanghi, Saurabh Garg
  • Publication number: 20190227945
    Abstract: Efficiently generating effective address translations for memory management test cases includes obtaining a first set of EAs, wherein each EA comprises an effective segment ID and a page, and each effective segment ID of each EA in the first set of EAs is mapped to a same first effective segment; obtaining a set of virtual addresses corresponding to the first set of EAs; translating the first set of EAs by applying a hash function to each virtual address in the set of virtual addresses to obtain a first set of PTEG addresses mapped to a first set of PTEGs; and generating a translation for a second set of EAs to obtain a second set of PTEG addresses mapped to the first set of PTEGs.
    Type: Application
    Filed: April 2, 2019
    Publication date: July 25, 2019
    Inventors: MANOJ DUSANAPUDI, SHAKTI KAPOOR
  • Publication number: 20190227946
    Abstract: Aspects of managing Translation Lookaside Buffer (TLB) units are described herein. The aspects may include a memory management unit (MMU) that includes one or more TLB units and a control unit. The control unit may be configured to identify one from the one or more TLB units based on a stream identification (ID) included in a received virtual address and, further, to identify a frame number in the identified TLB unit. A physical address may be generated by the control unit based on the frame number and an offset included in the virtual address.
    Type: Application
    Filed: February 26, 2019
    Publication date: July 25, 2019
    Inventors: Tianshi CHEN, Qi GUO, Yunji CHEN
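
A Python sketch of the address path in 20190227945: the stream ID carried in the virtual address selects a TLB unit, the frame number found there is combined with the page offset to form the physical address. The field widths and the dictionary-based TLB are invented for illustration.

```python
PAGE_SHIFT = 12          # page size is an assumption
STREAM_SHIFT = 28        # position of the stream ID field (assumption)
STREAM_MASK = 0xF


def translate(virtual_address, tlb_units):
    """Pick a TLB unit by stream ID, look up the frame number, append the offset."""
    stream_id = (virtual_address >> STREAM_SHIFT) & STREAM_MASK
    page_number = (virtual_address >> PAGE_SHIFT) & ((1 << (STREAM_SHIFT - PAGE_SHIFT)) - 1)
    offset = virtual_address & ((1 << PAGE_SHIFT) - 1)

    frame_number = tlb_units[stream_id][page_number]   # miss handling omitted
    return (frame_number << PAGE_SHIFT) | offset


tlbs = {2: {0x5: 0x1A3}}                       # stream 2 maps page 0x5 -> frame 0x1A3
va = (2 << STREAM_SHIFT) | (0x5 << PAGE_SHIFT) | 0x2C
print(hex(translate(va, tlbs)))                # 0x1a302c
```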
  • Publication number: 20190227947
    Abstract: A processor includes a translation lookaside buffer (TLB) to store a TLB entry, wherein the TLB entry comprises a first set of valid bits to identify if the TLB entry corresponds to a virtual address from a memory access request, wherein the valid bits are set based on a first page size associated with the TLB entry from a first set of different page sizes assigned to a first probe group; and a control circuit to probe the TLB for each page size of the first set of different page sizes assigned to the first probe group in a single probe cycle to determine if the TLB entry corresponds to the virtual address from the memory access request.
    Type: Application
    Filed: March 29, 2019
    Publication date: July 25, 2019
    Inventors: David Pardo Keppel, Binh Pham
  • Publication number: 20190227948
    Abstract: A processor includes an associative memory including ways organized in an asymmetric tree structure, a replacement control unit including a decision node indicator whose value determines the side of the tree structure to which a next memory element replacement operation is directed, and circuitry to cause, responsive to a miss in the associative memory while the decision node indicator points to the minority side of the tree structure, the decision node indicator to point to a majority side of the tree structure, and to determine, responsive to a miss while the decision node indicator points to the majority side of the tree structure, whether or not to cause the decision node indicator to point to the minority side of the tree structure, the determination being dependent on a current replacement weight value. The replacement weight value may be counter-based or a probabilistic weight value.
    Type: Application
    Filed: March 30, 2019
    Publication date: July 25, 2019
    Applicant: Intel Corporation
    Inventors: Chunhui Zhang, Robert S. Chappell, Yury N. Ilin
  • Publication number: 20190227949
    Abstract: Technologies for accelerated edge data access and physical data security include an edge device that executes services associated with endpoint devices. An address decoder translates a virtual address generated by a service into an edge location using an edge translation table. If the edge location is not local, the edge device may localize the data and update the edge translation table. The edge device may migrate the service to another edge location, including migrating the edge translation table. The edge device may monitor telemetry and determine on a per-tenant basis whether a physical attack condition is present. If present, the edge device instructs a data resource to wipe an associated tenant data range. The determinations to localize remote data, to migrate the service, and/or whether the physical attack condition is present may be performed by an accelerator of the edge device. Other embodiments are described and claimed.
    Type: Application
    Filed: March 29, 2019
    Publication date: July 25, 2019
    Inventor: Francesc Guim Bernat
  • Publication number: 20190227950
    Abstract: A multi-rank memory system in which calibration operations are performed between a memory controller and one rank of memory while data is transferred between the controller and other ranks of memory. A memory controller performs a calibration operation that calibrates parameters pertaining to transmission of data via a first data bus between the memory controller and a memory device in a first rank of memory. While the controller performs the calibration operation, the controller also transfers data with a memory device in a second rank of memory via a second data bus.
    Type: Application
    Filed: February 4, 2019
    Publication date: July 25, 2019
    Inventors: Ian Shaeffer, Frederick A. Ware
  • Publication number: 20190227951
    Abstract: Embodiments are directed to memory protection with hidden inline metadata. An embodiment of an apparatus includes processor cores; a computer memory for the storage of data; and cache memory communicatively coupled with one or more of the processor cores, wherein one or more processor cores of the plurality of processor cores are to implant hidden inline metadata in one or more cachelines for the cache memory, the hidden inline metadata being hidden at a linear address level.
    Type: Application
    Filed: March 29, 2019
    Publication date: July 25, 2019
    Applicant: Intel Corporation
    Inventors: David M. Durham, Ron Gabor
  • Publication number: 20190227952
    Abstract: An information processing system includes a first device and a second device that is configured to perform a short-range wireless communication with the first device. In a case where the first device requests an external authentication apparatus for authentication, the first device sends information regarding the second device to the external authentication apparatus.
    Type: Application
    Filed: April 1, 2019
    Publication date: July 25, 2019
    Applicant: FUJI XEROX CO., LTD.
    Inventor: Wataru YAMAIZUMI
  • Publication number: 20190227953
    Abstract: Methods, circuitries, and systems for real-time protection of a stack are provided. A stack protection circuitry includes interface circuitry and computation circuitry. The interface circuitry is configured to receive a return instruction from a central processing unit (CPU). The computation circuitry is configured to, in response to the return instruction, generate protection data that i) identifies a new topmost return address location that is below a current protected topmost return address location and ii) specifies read only access for the new topmost return address location. The interface circuitry is configured to provide the protection data to a memory protection unit to cause the memory protection unit to enforce a read only access restriction on the new topmost return address location.
    Type: Application
    Filed: January 22, 2018
    Publication date: July 25, 2019
    Inventors: Sanjay Trivedi, Ramesh Babu
  • Publication number: 20190227954
    Abstract: In one or more embodiments, one or more systems, methods, and/or processes may determine an address of a memory medium of an information handling system associated with an exception; determine respective address spaces of the device drivers loaded in the memory medium; determine an address space of address spaces that includes the address associated with the exception; determine a device driver of the device drivers based at least on the address space; query the device driver for an identification of the device driver; determine if the device driver provides the identification of the device driver; if the device driver provides the identification of the device driver, output the identification of the device driver and exception information associated with the exception; and if the device driver does not provide the identification of the device driver, search for identification information of the device driver within the address space.
    Type: Application
    Filed: January 25, 2018
    Publication date: July 25, 2019
    Inventor: Rui Shi
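
A Python sketch of the lookup in 20190227954: the exception address is matched against the loaded drivers' address spaces, the owning driver is queried for its identification, and a scan of its address space is the fallback. The dictionary layout and the helper are assumptions.

```python
def identify_faulting_driver(exception_address, drivers):
    """Find the loaded driver whose address space contains the exception.

    drivers: list of dicts with 'base', 'size', and an optional
             'get_identification' callback (the driver's own query interface).
    """
    for driver in drivers:
        base, size = driver["base"], driver["size"]
        if base <= exception_address < base + size:
            query = driver.get("get_identification")
            if query is not None:
                return query()                 # the driver identified itself
            # Fall back to searching the driver's address space for ID data.
            return scan_for_identification(base, size)
    return None


def scan_for_identification(base, size):
    # Placeholder for searching the memory range for identification strings.
    return f"unidentified driver at {hex(base)} (+{hex(size)})"


drivers = [
    {"base": 0x8000_0000, "size": 0x2_0000, "get_identification": lambda: "nic.sys"},
    {"base": 0x8010_0000, "size": 0x1_0000},
]
print(identify_faulting_driver(0x8010_0420, drivers))
```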
  • Publication number: 20190227955
    Abstract: A system for connecting a web POS (Point of Sale) system or a smart retailer system with a peripheral device is provided. Through service software running in an operating system, a browser can transmit a relevant control instruction to the service software in a manner conforming to the WebSocket protocol. After the service software reassembles the control instruction, it connects to and controls the peripheral device through a connection interface corresponding to the peripheral device, so that any device can use the browser to execute the web POS system and control various peripheral devices, increasing the convenience of applying the web POS system.
    Type: Application
    Filed: April 2, 2019
    Publication date: July 25, 2019
    Inventor: KUANG HUNG CHENG
  • Publication number: 20190227956
    Abstract: An output processing apparatus according to one aspect of an embodiment includes an acquisition unit and an output processing unit. The acquisition unit acquires, from an external apparatus, extended application information. The extended application information includes (i) a display component that is used in operating an extended application included in the external apparatus and (ii) a descriptor that specifies a screen hierarchy of the display component. The output processing unit displays the display component in the screen hierarchy specified by the extended application information acquired by the acquisition unit.
    Type: Application
    Filed: November 9, 2017
    Publication date: July 25, 2019
    Applicant: DENSO TEN Limited
    Inventor: Naoto MATSUSHITA
  • Publication number: 20190227957
    Abstract: Techniques are disclosed for filtering input/output (I/O) requests in a virtualized computing environment. In some embodiments, a system stores first data in a page of memory, where after the first data is stored in the page of memory, the page of memory is free for allocation to a first memory consumer (e.g., an I/O filter instantiated in a virtualization layer of the virtualized computing environment) and a second memory consumer. The first memory consumer retains a reference to the page of memory. The first memory consumer receives a data request from a virtual computing instance. Based on the data request, the first memory consumer retrieves the first data using the reference to the page of memory. After retrieving the first data, the system returns the first data to the virtual computing instance. While the first memory consumer has the reference to the page of memory, the page of memory can be allocated to the second memory consumer without notifying the first memory consumer.
    Type: Application
    Filed: January 24, 2018
    Publication date: July 25, 2019
    Applicant: VMware, Inc.
    Inventors: Abhishek SRIVASTAVA, Saksham JAIN, Nikolay ILDUGANOV, Christoph KLEE, Ashish KAILA
  • Publication number: 20190227958
    Abstract: A method receives an inbound request to be processed based on multiple outbound service invocations of multiple outbound services. The method accesses expected response times for the inbound request for each of the multiple outbound services. The method determines which one or more of the multiple outbound services to invoke asynchronously and which one or more of the multiple outbound services to invoke synchronously based on the expected response times for the inbound request for each of the multiple outbound services. The method invokes asynchronously the one or more of the multiple outbound services determined to be invoked asynchronously, and invokes synchronously the one or more of the multiple outbound services determined to be invoked synchronously.
    Type: Application
    Filed: April 3, 2019
    Publication date: July 25, 2019
    Inventor: Martin A. ROSS
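
A Python sketch of the split in 20190227958: services whose expected response time is long are invoked asynchronously, the rest synchronously. The fixed threshold is an assumption; the abstract only says the decision is based on the expected response times for the inbound request.

```python
import concurrent.futures


def plan_and_invoke(services, expected_response_ms, async_threshold_ms=50):
    """Invoke slow outbound services asynchronously and fast ones synchronously.

    services:             mapping of service name -> zero-argument callable
    expected_response_ms: mapping of service name -> expected response time
    """
    slow = [n for n in services if expected_response_ms[n] >= async_threshold_ms]
    fast = [n for n in services if expected_response_ms[n] < async_threshold_ms]

    results = {}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {n: pool.submit(services[n]) for n in slow}   # asynchronous
        for n in fast:
            results[n] = services[n]()                          # synchronous
        for n, f in futures.items():
            results[n] = f.result()
    return results


services = {"inventory": lambda: "ok", "pricing": lambda: "ok", "audit": lambda: "ok"}
expected = {"inventory": 120, "pricing": 15, "audit": 200}
print(plan_and_invoke(services, expected))
```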
  • Publication number: 20190227959
    Abstract: A packet processing system having each of a plurality of hierarchical clients and a packet memory arbiter serially communicatively coupled together via a plurality of primary interfaces thereby forming a unidirectional client chain. This chain is then able to be utilized by all of the hierarchical clients to write the packet data to or read the packet data from the packet memory.
    Type: Application
    Filed: March 29, 2019
    Publication date: July 25, 2019
    Inventors: Enrique Musoll, Tsahi Daniel
  • Publication number: 20190227960
    Abstract: A unit for managing initial saving, subsequent saving, and reading technical information such as plans, figures, executed works, manuals, notebooks, a visitors' book, maintenance records, and the like of a site such as a building, a ship, a platform, an industrial facility, and the like, the unit comprising a casing incorporating an electronic circuit comprising a non-volatile memory, a USB connector and a processor controlled by firmware controlling the management of inputs and outputs and of the memory, wherein the firmware comprises means for managing the inputs-outputs according to the standard USB protocol, and in addition for preventing the change command from modifying information previously recorded in the non-volatile memory.
    Type: Application
    Filed: June 27, 2016
    Publication date: July 25, 2019
    Applicant: E-GLOO DEVELOPMENT
    Inventors: Bernard Blanchet, Edouard De Ledinghen, Lionel Laurent
  • Publication number: 20190227961
    Abstract: Systems, apparatuses and methods may provide for technology that receives, at a remote access controller of a computing system, configuration data associated with a non-volatile memory of the computing system, wherein the configuration data is received while the computing system is in a sleep state. The technology may also store the configuration data and provide a host processor of the computing system with access to the configuration data. In one example, receipt of the configuration data bypasses a memory configuration-related reboot of the computing system and the configuration data is received via an out-of-band management interface.
    Type: Application
    Filed: March 29, 2019
    Publication date: July 25, 2019
    Inventors: Daniel Osawa, Kelly Couch, Maciej Plucinski
  • Publication number: 20190227962
    Abstract: Systems, methods, and apparatus are described that provide for communicating coexistence messages over a multi-drop serial bus. A data communication method includes configuring a common memory map at each of a plurality of devices coupled to a serial bus, receiving at a first device coupled to the serial bus, first coexistence information directed to a second device coupled to the serial bus, generating at the first device, a coexistence message that includes the first coexistence information, and transmitting the coexistence message to the second device over the serial bus. The first coexistence information in the coexistence message may be addressed to a location in the common memory map calculated based on a destination address associated with the first coexistence information and a unique identifier of the first device.
    Type: Application
    Filed: November 16, 2018
    Publication date: July 25, 2019
    Inventors: Helena Deirdre O'SHEA, Lalan Jee MISHRA, Richard Dominic WIETFELDT, Mohit Kishore PRASAD, Amit GIL, Gary CHANG
  • Publication number: 20190227963
    Abstract: Systems or methods of the present disclosure may provide high-bandwidth, low-latency connectivity for inter-die and/or intra-die communication of a modularized integrated circuit system. Such an integrated circuit system may include a first die of fabric circuitry sector(s), a second die of modular periphery intellectual property (IP), a passive silicon interposer coupling the first die to the second die, and a modular interface that includes a network-on-chip (NOC). The modular interface may provide high-bandwidth, low-latency communication between the first die and the second, between the fabric circuitry sector(s), and between the first die and a third die.
    Type: Application
    Filed: March 28, 2019
    Publication date: July 25, 2019
    Inventors: George Chong Hean Ooi, Lai Guan Tang, Chee Hak Teh
  • Publication number: 20190227964
    Abstract: An application processor includes a system bus, as well as a host processor, a voice trigger system, and an audio subsystem that are electrically connected to the system bus. The voice trigger system performs a voice trigger operation and issues a trigger event based on a trigger input signal that is provided through a trigger interface. The audio subsystem processes audio streams that are replayed or recorded through an audio interface, and receives an interrupt signal through the audio interface while an audio replay operation is performed through the audio interface.
    Type: Application
    Filed: November 9, 2018
    Publication date: July 25, 2019
    Inventor: SUN-KYU KIM
  • Publication number: 20190227965
    Abstract: A processor includes cores to execute instructions, and circuitry to detect a system management interrupt (SMI) event on the processor, direct an indication of the SMI event to an arbiter on a controller hub, and receive an interrupt signal from the arbiter. The processor also includes an SMI handler to take action in response to the interrupt, and circuitry to communicate the interrupt signal to the cores. The cores include circuitry to pause while the SMI handler responds to the interrupt. The interrupt handler includes circuitry to determine that a second SMI event detected on the processor or controller hub is pending, and to take action in response. The interrupt handler includes circuitry to set an end-of-SMI bit to indicate that the interrupt handler has completed its actions. The controller includes circuitry to prevent the arbiter from issuing another interrupt to the processor while this bit is false.
    Type: Application
    Filed: March 29, 2019
    Publication date: July 25, 2019
    Applicant: Intel Corporation
    Inventor: Sarathy Jayakumar
  • Publication number: 20190227966
    Abstract: A processor includes a central processing unit (CPU) and a direct memory access (DMA) adapter circuit. The DMA adapter circuit includes a DMA controller circuit and is configured to interface with a legacy internal hardware peripheral and with a DMA-enabled internal hardware peripheral. The DMA-enabled internal hardware peripheral includes a first special function register (SFR). The legacy internal hardware peripheral includes no DMA features. The CPU is configured to execute a legacy application that accesses a setting in memory through the legacy internal hardware peripheral. Execution of the legacy application includes access by the CPU of the setting in memory. The DMA controller circuit is configured to access the setting in memory during execution of a DMA-enabled application through the DMA-enabled internal hardware peripheral.
    Type: Application
    Filed: April 3, 2018
    Publication date: July 25, 2019
    Applicant: Microchip Technology Incorporated
    Inventors: Joseph Julicher, Yong Yuenyongsgool
  • Publication number: 20190227967
    Abstract: An application processor includes a system bus, a host processor, a voice trigger system and an audio subsystem that are electrically connected to the system bus. The voice trigger system performs a voice trigger operation and issues a trigger event based on a trigger input signal that is provided through a trigger interface. The audio subsystem processes audio streams through an audio interface of the audio subsystem.
    Type: Application
    Filed: November 8, 2018
    Publication date: July 25, 2019
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sun-Kyu KIM, Byung-Tak LEE
  • Publication number: 20190227968
    Abstract: Systems and methods are disclosed for operating a serial interconnect of a computer system in a time deterministic manner. An exemplary method comprises determining that a command to be sent over the serial interconnect in a transaction is to be executed at a specific time. A delay period for the command to be sent from a master of the computer system to a slave of the computer system via the serial bus is determined, where the delay period is determined based on a length of an arbitration phase of the transaction. The command is then sent to the slave of the computer system via the serial bus after the delay period.
    Type: Application
    Filed: August 21, 2018
    Publication date: July 25, 2019
    Inventors: CHRISTOPHER KONG YEE CHUN, CHRIS ROSOLOWSKI
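
A Python sketch of the timing idea in 20190227968: the master delays sending so that, after the arbitration phase of the transaction, the command executes at the intended time. The timing model is a simplification for illustration, not the claimed protocol.

```python
import time


def send_at(command, execute_at, arbitration_phase_s, send):
    """Send a command so that it executes at a specific time on the slave.

    The delay before sending accounts for the arbitration phase that
    precedes the command on the serial bus.
    """
    delay = execute_at - time.monotonic() - arbitration_phase_s
    if delay > 0:
        time.sleep(delay)          # wait so arbitration ends right on time
    send(command)


target = time.monotonic() + 0.25   # execute the command 250 ms from now
send_at("READ_SENSOR", target, arbitration_phase_s=0.02,
        send=lambda c: print(f"{time.monotonic():.3f}: sending {c}"))
```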