Patents Issued on April 9, 2020
-
Publication number: 20200110685
Abstract: The rate at which reads on a target memory portion initiate error recovery procedures can be monitored in real-time. Trigger rates can be used to perform analysis of a memory sub-system or to implement improvements in the memory sub-system. Trigger rate monitoring can include accessing a count of error recovery initializations for a target memory portion, wherein the count of error recovery initializations corresponds to a number of times a first stage of a multi-stage error recovery process was performed. Trigger rate monitoring can further include accessing a count of read operations corresponding to the target memory portion. The count of error recovery initializations and the count of read operations can be used to compute a trigger rate. The trigger rate, or multiple trigger rates from various times or from various target memory portions, can be used to compute a metric for the memory portion(s).
Type: Application
Filed: October 9, 2018
Publication date: April 9, 2020
Inventor: Francis Chew
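The trigger-rate computation described above reduces to a ratio of two counters. A minimal sketch in Python, with `trigger_rate` and `portion_metric` as hypothetical names; the patent does not specify how multiple rates are aggregated into a metric, so a simple mean is assumed here:

```python
def trigger_rate(recovery_inits: int, read_ops: int) -> float:
    """Fraction of reads that initiated the first stage of error recovery."""
    if read_ops == 0:
        return 0.0
    return recovery_inits / read_ops

def portion_metric(rates: list) -> float:
    """Aggregate trigger rates from several times or memory portions
    into one metric (a plain mean; the abstract leaves this open)."""
    return sum(rates) / len(rates)
```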
-
Publication number: 20200110686
Abstract: An apparatus for attachable sensory assembly includes a plurality of gear tracks, a sensory device, a first structure, and a second structure. The sensory device is suspended by the first structure and the second structure, where the sensory device is moveable along a length of the first structure and a length of the second structure. A first end and a second end of the first structure are each disposed on a first portion of the plurality of gear tracks, where the first portion of the plurality of gear tracks guides the first end and the second end of the first structure. A first end and a second end of the second structure are each disposed on a second portion of the plurality of gear tracks, where the second portion of the plurality of gear tracks guides the first end and the second end of the second structure.
Type: Application
Filed: October 3, 2018
Publication date: April 9, 2020
Inventors: Alexandru Z. Nagy, David Monczynski, Kaoru Stabnow, Jimmy Y Wong, Ingrid Lawrendra
-
Publication number: 20200110687
Abstract: A differential resource analyzer performs differential resource profiling of two applications. The two applications are made to perform an operation. The differential resource analyzer matches a first application task of a first application to a second application task of a second application based on a determination that the first application task is similar to the second application task, and measures the resource consumed in the first application task and the second application task. Responsive to determining that the second application task consumes less of the resource than the first application task, the differential resource analyzer performs an action to reduce resource consumption by the first application based on the second application task.
Type: Application
Filed: October 7, 2019
Publication date: April 9, 2020
Inventors: Yu Charlie Hu, Abhilash Jindal
-
Publication number: 20200110688
Abstract: A customer-facing overhead management tool reduces the task of feature configuration to adjusting a scale representing relative feature availability. Features are configured by adjusting a graphical control element presented on a graphical user interface to activate or deactivate features based on relative weights and priorities associated with the features. Weights and priorities are stored within a configuration file underlying the control element and indicate an approximate order in which features will be deactivated upon “dialing down” the available features. The control element facilitates application resource management for the customer, as the customer may configure features to reduce overhead without knowledge of the underlying feature priorities and weights or relative overhead each feature incurs when activated. Customers may override the automatic feature adjustment by manually activating features which have been deactivated following a lowering of the value on the control element.
Type: Application
Filed: October 4, 2018
Publication date: April 9, 2020
Inventor: Martin Tali
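The "dial down" behavior can be sketched as a weight budget consumed in priority order. The function name, the tuple layout, and the greedy selection below are illustrative assumptions, not the patented configuration-file format:

```python
def active_features(features, dial):
    """features: list of (name, priority, weight); lower priority deactivates first.
    dial: fraction of total weight to keep active, 0.0 to 1.0."""
    budget = dial * sum(w for _, _, w in features)
    active, used = [], 0.0
    # Keep highest-priority features while they still fit the weight budget.
    for name, _, w in sorted(features, key=lambda f: -f[1]):
        if used + w <= budget:
            active.append(name)
            used += w
    return active
```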
-
Publication number: 20200110689
Abstract: A method for detecting abnormality adapted to detect abnormal operations of an operating system is provided. The method includes: calculating a safe range of usage of the operating system during one or more time periods according to a historical data stream; calculating abnormal ratios corresponding to the one or more time periods according to a current data stream and the safe range of usage; selecting one or more abnormal time periods from the one or more time periods according to a threshold and the abnormal ratios; calculating an abnormal indicator for each of the one or more abnormal time periods according to the historical data stream and the current data stream; and ranking the one or more abnormal time periods according to the abnormal indicator(s).
Type: Application
Filed: February 21, 2019
Publication date: April 9, 2020
Applicant: Acer Cyber Security Incorporated
Inventors: Chun-Hsien Li, Chien-Hung Li, Jun-Mein Wu, Ming-Kung Sun, Zong-Cyuan Jhang, Yin-Hsong Hsu, Chiung-Ying Huang, Tsung-Hsien Tsai
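The detection flow can be sketched end to end. Here the "safe range" is assumed to be the min/max of the historical samples and the "abnormal indicator" the mean deviation of out-of-range samples from the nearest range boundary; the abstract leaves both definitions open:

```python
def abnormal_periods(history, current, threshold):
    """history/current: dict mapping time period -> list of usage samples.
    Returns (period, indicator) pairs ranked by indicator, descending."""
    ranked = []
    for period, samples in current.items():
        lo, hi = min(history[period]), max(history[period])  # safe range
        out = [s for s in samples if not (lo <= s <= hi)]
        ratio = len(out) / len(samples)                      # abnormal ratio
        if ratio > threshold:
            # Indicator: mean distance of outliers from the safe range.
            indicator = sum(min(abs(s - lo), abs(s - hi)) for s in out) / len(out)
            ranked.append((period, indicator))
    ranked.sort(key=lambda pi: -pi[1])
    return ranked
```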
-
Publication number: 20200110690
Abstract: Method, system, and programs for measuring user engagement with content items. In one example, a query is received. A set of content items related to the query is obtained. A presentation of at least one content item of the set of content items is provided on a user interface. A user activity related to the at least one content item is determined. An amount of time between a time at which the presentation of the at least one content item is provided on the user interface and a time at which the user activity occurred is determined. A score associated with the content item is determined based on the amount of time. Information related to user engagement with the set of content items is generated based on the score.
Type: Application
Filed: December 11, 2019
Publication date: April 9, 2020
Inventors: Alyssa Glass, Xing Yi
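The time-based scoring might look like the following, assuming an exponential decay with a hypothetical `half_life` parameter; the abstract only says the score is derived from the elapsed time between presentation and activity:

```python
def engagement_score(shown_at: float, acted_at: float, half_life: float = 30.0) -> float:
    """Score a content item by how quickly the user acted after presentation.
    Acting immediately scores 1.0; after one half-life, 0.5 (seconds assumed)."""
    dt = max(acted_at - shown_at, 0.0)
    return 0.5 ** (dt / half_life)
```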
-
Publication number: 20200110691
Abstract: A processor may receive at least one test application corresponding to an application profile. The processor may simulate the at least one test application in a non-production environment for a plurality of infrastructure configurations to generate a plurality of test performance results. The processor may evaluate the plurality of test performance results to identify an optimal infrastructure configuration from among the plurality of infrastructure configurations for the application profile. The processor may apply the optimal infrastructure configuration to an application corresponding to the application profile that is deployed in a production environment.
Type: Application
Filed: October 3, 2018
Publication date: April 9, 2020
Applicant: Capital One Services, LLC
Inventors: Jonathan H. Bryant, Jagadesh V. Gadiyaram, Thomas Caputo
-
Publication number: 20200110692
Abstract: A method and a computer program product for latency measurement in an I/O operation. A storage system measures time periods taken in a write I/O operation and, using the measured time periods in the write I/O operation, the storage system monitors a delay that is caused by at least one of a host and a storage area network. A storage system measures time periods taken in a read I/O operation and, using the measured time periods in the read I/O operation, the storage system monitors a delay that is caused by at least one of a host and a storage area network in the read I/O operation.
Type: Application
Filed: October 3, 2018
Publication date: April 9, 2020
Inventors: Denis Senin, Roderick G. C. Moore, Dan Critchley, Jonathan W. L. Short, Tim McCarthy
-
Publication number: 20200110693
Abstract: A method, system and computer program product for detecting potential failures in a continuous delivery pipeline. A machine learning model is created to predict whether changed portions of code under development at various stages of the continuous delivery pipeline will result in a pipeline failure. After creating the machine learning model, log file(s) may be received that were generated by development tool(s) concerning a changed portion of code under development at a particular stage of the continuous delivery pipeline. The machine learning model provides relationship information between the log file(s) and the changed portion of code. A message is then generated and displayed based on this relationship information, where the message may provide a prediction or a recommendation concerning potential failures in the continuous delivery pipeline. In this manner, the potential failures in the continuous delivery pipeline may be prevented without requiring context switching.
Type: Application
Filed: November 20, 2019
Publication date: April 9, 2020
Inventors: Bradley C. Herrin, Alexander Sobran, Bo Zhang, Xianjun Zhu
-
Publication number: 20200110694
Abstract: Various embodiments comprise systems, methods, architectures, mechanisms or apparatus configured for automatically generating a testing script.
Type: Application
Filed: October 9, 2018
Publication date: April 9, 2020
Applicant: CHARTER COMMUNICATIONS OPERATING, LLC
Inventor: Mark Elking
-
Publication number: 20200110695
Abstract: Dynamic integration of command line utilities is disclosed. For example, a host has a processor and a memory, where the memory stores a first program with a command line interface (CLI). A program testing module executes on the processor to discover a plurality of commands accepted by the CLI, where a first command of the plurality of commands additionally accepts a subcommand and an argument. A first input data type associated with the first command is determined. A first test case is generated that invokes the first command with first test data of the first input data type. A second input data type that is incompatible with the first command is determined based on the first input data type. A second test case that invokes the first command with second test data of the second input data type is generated and both test cases are executed.
Type: Application
Filed: October 4, 2018
Publication date: April 9, 2020
Inventors: Og Benso Maciel, Djebran Lezzoum
-
Publication number: 20200110696
Abstract: Implementations include receiving, by a DDT platform, computer-readable files including test data, the test data including data to execute at least one transaction during testing of a software system that is at least partially hosted by a vendor back-end system, the software system being configured for customer use; providing, by the DDT platform, one or more test scenarios for execution by the software system on the vendor back-end system, the one or more test scenarios including a set of activities to conduct transactions by the software system using at least a portion of the test data; scheduling, by a scheduler of the DDT platform, execution of the one or more test scenarios using a test harness of the DDT platform; receiving, by the DDT platform, test results from the vendor back-end system; and comparing, by the DDT platform, the test results to expected results to provide a comparison as output.
Type: Application
Filed: October 8, 2018
Publication date: April 9, 2020
Inventors: Michel-Etienne Liegard, Antoine Sebilleau
-
Publication number: 20200110697
Abstract: Methods for enhancing the speed performance of solid-state storage devices using stream-aware garbage collection. A garbage collection method according to an embodiment includes: searching, in each of a plurality of super-block groups G, for a super-block set C that satisfies: all of the super-blocks m within the super-block set C in the super-block group G contain a lesser amount of valid data than the other super-blocks within the super-block group G; and a total amount of valid data within the super-block set C is just enough to complete an entire super-block; selecting the super-block group G that includes the super-block set C with the maximum number of super-blocks m; and performing garbage collection on the super-block set C in the selected super-block group G.
Type: Application
Filed: October 3, 2019
Publication date: April 9, 2020
Inventors: Qi Wu, Tong Zhang
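The per-group search can be sketched as a greedy pass over super-blocks sorted by valid-data amount, stopping once adding another block would overflow one super-block's capacity. Reading "just enough to complete an entire super-block" as this not-exceeding greedy fill is an assumption:

```python
def pick_gc_set(groups, superblock_capacity):
    """groups: dict mapping group name -> list of valid-data amounts per super-block.
    Returns (group name, chosen set C) with the most super-blocks in C."""
    best_len, best_group, best_set = 0, None, None
    for name, blocks in groups.items():
        c, total = [], 0
        # Take the blocks with the least valid data first.
        for v in sorted(blocks):
            if total + v > superblock_capacity:
                break
            c.append(v)
            total += v
        if len(c) > best_len:
            best_len, best_group, best_set = len(c), name, c
    return best_group, best_set
```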
-
Publication number: 20200110698
Abstract: According to one embodiment, a memory system manages a plurality of management tables corresponding to a plurality of first blocks in a nonvolatile memory. Each management table includes a plurality of reference counts corresponding to a plurality of data in a corresponding first block. The memory system copies a set of data included in a copy-source block for garbage collection and corresponding respectively to reference counts belonging to a first reference count range to a first copy-destination block, and copies a set of data included in the copy-source block and corresponding respectively to reference counts belonging to a second reference count range having a lower limit higher than an upper limit of the first reference count range to a second copy-destination block.
Type: Application
Filed: December 5, 2019
Publication date: April 9, 2020
Inventors: Shinichi Kanno, Naoki Esaka
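The two-range copy split might be sketched as follows, with `limit1` standing in for the upper limit of the first reference count range (names are illustrative):

```python
def split_by_refcount(data, limit1):
    """data: list of (item, reference_count) pairs from the copy-source block.
    Items in the first range [0, limit1] go to copy-destination 1; items whose
    counts exceed limit1 (the second range) go to copy-destination 2."""
    dest1 = [d for d, rc in data if rc <= limit1]
    dest2 = [d for d, rc in data if rc > limit1]
    return dest1, dest2
```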
-
Publication number: 20200110699
Abstract: Techniques are disclosed to operate binary objects across private address spaces. In various embodiments, a private shared memory segment is allocated for two non-privileged address spaces, the first comprising a home address space and the second comprising a target address space. One or more executable modules are loaded in a private address space of the home address space. One or more program call routines and an environment to schedule system request blocks (SRB) are built in the home address space. The environment to schedule system request blocks is configured to be used to schedule an SRB into the target address space, the SRB comprising information configured to cause the target address space to cause an associated one of the executable modules to execute.
Type: Application
Filed: December 6, 2019
Publication date: April 9, 2020
Inventors: Reza FATEMI, John DRIVER
-
Publication number: 20200110700
Abstract: Techniques related to failover to the secondary storage server from a primary storage server of a database server without degrading the performance of servicing storage requests for client applications are provided. In an embodiment, the secondary storage server receives, from the database server, an eviction notification indicating that a set of data blocks has been evicted from a cache. The secondary storage server's memory hierarchy includes a secondary cache and a secondary persistent storage that stores a second copy of the set of data blocks. The secondary storage server persistently stores a copy of data, which is also persistently stored on the primary storage server, which includes a first copy of the set of data blocks.
Type: Application
Filed: October 5, 2018
Publication date: April 9, 2020
Inventors: JIA SHI, WEI ZHANG, VIJAYAKRISHNAN NAGARAJAN, SHIH-YU HUANG, KOTHANDA UMAMAGESWARAN
-
Publication number: 20200110701
Abstract: A storage device and a cache area addressing method are disclosed. The storage device includes a memory module, a buffer, a memory controller, and a cache area addressing circuit. The buffer includes a cache area. The memory controller is coupled to the memory module and the buffer. The cache area addressing circuit is coupled to the memory controller and the buffer and configured to perform the following. A logical address from the memory controller is received. Whether the logical address corresponds to a logical address interval of the cache area is determined. When the logical address corresponds to the logical address interval of the cache area, the logical address is mapped to a first physical address in the cache area according to a base address. Otherwise, the logical address is mapped to a second physical address in the buffer.
Type: Application
Filed: August 9, 2019
Publication date: April 9, 2020
Applicant: SILICON MOTION, INC.
Inventor: Yi-Shou Jhang
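The addressing decision reduces to an interval test plus a base-address relocation. A sketch with assumed parameter names (the abstract does not state how the fallback mapping into the buffer works; an identity mapping is assumed):

```python
def map_address(logical, cache_start, cache_len, base):
    """Map a logical address to a physical address in the buffer.
    The cache area covers logical addresses [cache_start, cache_start + cache_len)."""
    if cache_start <= logical < cache_start + cache_len:
        # In the cache area's interval: relocate via the base address.
        return base + (logical - cache_start)
    # Otherwise: map into the general buffer (identity, as an assumption).
    return logical
```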
-
Publication number: 20200110702
Abstract: A computer implemented method to operate different processor cache levels of a cache hierarchy for a processor with pipelined execution is suggested. The cache hierarchy comprises at least a lower hierarchy level entity and a higher hierarchy level entity. The method comprises: sending a fetch request to the cache hierarchy; detecting a miss event from a lower hierarchy level entity; sending a fetch request to a higher hierarchy level entity; and scheduling at least one write pass.
Type: Application
Filed: December 5, 2019
Publication date: April 9, 2020
Inventors: Simon H. Friedmann, Christian Jacobi, Markus Kaltenbach, Ulrich Mayer, Anthony Saporito
-
Publication number: 20200110703
Abstract: Methods and systems for self-invalidating cachelines in a computer system having a plurality of cores are described. A first one of the plurality of cores requests to load a memory block from a cache memory local to the first one of the plurality of cores, which request results in a cache miss. This results in checking a read-after-write detection structure to determine if a race condition exists for the memory block. If a race condition exists for the memory block, program order is enforced, by the first one of the plurality of cores that issued the load of the memory block, at least between any older loads and any younger loads with respect to the load that detected the prior store, and one or more cache lines in the local cache memory are caused to be self-invalidated.
Type: Application
Filed: December 11, 2019
Publication date: April 9, 2020
Inventors: Alberto ROS, Stefanos KAXIRAS
-
Publication number: 20200110704
Abstract: An information handling system (IHS) includes a processor with a cache memory system. The processor includes a processor core with an L1 cache memory that couples to an L2 cache memory. The processor includes an arbitration mechanism that controls load and store requests to the L2 cache memory. The arbitration mechanism includes control logic that enables a load request to interrupt a store request that the L2 cache memory is currently servicing. When the L2 cache memory finishes servicing the interrupting load request, the L2 cache memory may return to servicing the interrupted store request at the point of interruption.
Type: Application
Filed: November 22, 2019
Publication date: April 9, 2020
Inventors: Sanjeev Ghai, Guy L. Guthrie, Stephen J. Powell, William J. Starke
-
Publication number: 20200110705
Abstract: A memory device includes a memory cell array, an information register and a prefetch circuit. The memory cell array stores a valid data array, a base array and a target data array, where the valid data array includes valid elements among elements of first data, the base array includes position elements indicating position values corresponding to the valid elements and the target data array includes target elements of second data corresponding to the position values. The information register stores indirect memory access information including a start address of the target data array and a unit size of the target elements. The prefetch circuit prefetches, based on the indirect memory access information, the target elements corresponding to the position elements that are read from the memory cell array.
Type: Application
Filed: May 8, 2019
Publication date: April 9, 2020
Inventors: IN-SOON JO, Young-Geun Choi, Seung-Yeun Jeong
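The prefetch addresses follow directly from the start address, the unit size, and the position elements read from the base array. A minimal sketch of that address arithmetic (function and parameter names are illustrative):

```python
def prefetch_addresses(start_addr, unit_size, positions):
    """Addresses of target elements gathered indirectly via the base array:
    each position element p selects target element p of the target data array."""
    return [start_addr + p * unit_size for p in positions]
```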
-
Publication number: 20200110706
Abstract: A memory system includes a memory and a memory controller. The memory includes first and second parallel operation elements, each including a plurality of first and second storage regions, respectively, and first and second buffers, respectively. The memory controller performs operations on the memory based on first and second group information. The first group information defines first groups, each first group including one first storage region and one second storage region, and each second group including at least two first groups. The memory controller, in response to a host command targeting a first storage region, (i) acquires first data from the first buffer, and thereafter (ii) causes the memory to read out second data to the first buffer. The first storage region storing the first data and the second storage region storing the second data belong to different first groups and to the same second group.
Type: Application
Filed: December 5, 2019
Publication date: April 9, 2020
Inventors: Hirokazu TAKEUCHI, Takahiro MIOMO, Hiroyuki YAMAGUCHI, Hajime YAMAZAKI
-
Publication number: 20200110707
Abstract: A block-based storage system may implement page cache write logging. Write requests for a data volume maintained at a storage node may be received at the storage node. A page cache may be updated in accordance with the request. A log record describing the page cache update may be stored in a page cache write log maintained in a persistent storage device. Once the write request is performed in the page cache and recorded in a log record in the page cache write log, the write request may be acknowledged. Upon recovery from a system failure where data in the page cache is lost, log records in the page cache write log may be replayed to restore to the page cache a state of the page cache prior to the system failure.
Type: Application
Filed: December 6, 2019
Publication date: April 9, 2020
Applicant: Amazon Technologies, Inc.
Inventors: Danny Wei, John Luther Guthrie, II, James Michael Thompson, Benjamin Arthur Hawks, Norbert P. Kusters
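The recovery path, replaying the write log in order so that later writes to the same page win, might be sketched as follows (the log-record layout is an assumption):

```python
def replay_log(log_records):
    """Rebuild page cache state after a crash by replaying write-log records
    in order. log_records: list of (page_id, data) in the order logged."""
    cache = {}
    for page, data in log_records:
        cache[page] = data  # a later write to the same page overwrites the earlier one
    return cache
```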
-
Publication number: 20200110708
Abstract: A storage device includes a nonvolatile memory device, a memory controller, and a buffer memory. The memory controller determines a first memory block of the nonvolatile memory device, which is targeted for a read reclaim operation, and reads target data from a target area of the first memory block. The target data are stored in the buffer memory. The memory controller reads at least a portion of the target data stored in the buffer memory in response to a read request corresponding to at least a portion of the target area.
Type: Application
Filed: July 9, 2019
Publication date: April 9, 2020
Inventors: JIN-HEE MA, SUKHEE LEE, JISOO KIM
-
Publication number: 20200110709
Abstract: The present disclosure describes logical to physical tables that are configured to provide multiple-sector support and to help in processing data when a sector is mapped or unmapped. In the cases where sectors are unmapped, the present disclosure provides mechanisms to concurrently support multiple unique unmapped data patterns depending upon the specific type of unmapped sector.
Type: Application
Filed: June 24, 2019
Publication date: April 9, 2020
Inventors: Cory LAPPI, William Jared WALKER, Darin Edward GERHART
-
Publication number: 20200110710
Abstract: An apparatus for bypassing an invalidate search of a lookaside buffer includes a filter circuit that directs an invalidate command to a LPID/PID filter of an MMU of a processor and searches for an identifier targeted by the invalidate command. The MMU is external to cores of the processor. The apparatus includes an LPID/PID miss circuit that bypasses searching the lookaside buffer for addresses targeted by the invalidate command and returns a notification that the invalidate command did not identify the identifier targeted by the invalidate command in response to the filter circuit determining that the identifier targeted by the invalidate command is not stored in the LPID/PID filter.
Type: Application
Filed: October 4, 2018
Publication date: April 9, 2020
Inventors: Jake Truelove, Ronald Kalla, Jody Joyner, Benjamin HERRENSCHMIDT, David A. Larson Stanton
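The bypass logic can be sketched in software with the LPID/PID filter as a set and the lookaside buffer as a dict mapping addresses to identifiers; both representations, and the return values, are assumptions made for illustration:

```python
def handle_invalidate(target_id, lpid_pid_filter, lookaside_buffer):
    """If the targeted identifier is absent from the filter, no translation
    for it can be cached, so the buffer search is bypassed entirely."""
    if target_id not in lpid_pid_filter:
        return "miss"  # notification: nothing to invalidate, search skipped
    # Otherwise fall through to the normal lookaside-buffer search.
    removed = [addr for addr, ident in lookaside_buffer.items() if ident == target_id]
    for addr in removed:
        del lookaside_buffer[addr]
    return f"invalidated {len(removed)}"
```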
-
Publication number: 20200110711
Abstract: The present disclosure relates to a method of operating a translation lookaside buffer (TLB) arrangement for a processor supporting virtual addressing, wherein multiple translation engines are used to perform translations on request of one of a plurality of dedicated processor units. The method comprises: maintaining, by a cache unit, a dependency matrix for the engines to track, for each processing unit, if an engine is assigned to that processing unit for a table walk. The cache unit may block a processing unit from allocating an engine to a translation request when the engine is already assigned to the processing unit in the dependency matrix.
Type: Application
Filed: December 9, 2019
Publication date: April 9, 2020
Inventors: Michael Johannes Jaspers, Markus Kaltenbach, Girish G. Kurup, Ulrich Mayer
-
Publication number: 20200110712
Abstract: Technology for decrypting and using a security module in a processor cache in a secure mode such that dynamic address translation prevents access to portions of the volatile memory outside of a secret store in a volatile memory.
Type: Application
Filed: November 27, 2019
Publication date: April 9, 2020
Inventors: Angel Nunez Mencias, Jakob C. Lang, Martin Recktenwald, Ulrich Mayer
-
Publication number: 20200110713
Abstract: In a non-volatile memory of a microcontroller, first information representative of a value selected among at least four values is stored. Furthermore, for each of a plurality of areas of the memory, second information representative of a type selected among two types is also stored. Access to each of the areas is conditioned according to the selected value and to the type of the area.
Type: Application
Filed: October 7, 2019
Publication date: April 9, 2020
Applicants: STMicroelectronics (Rousset) SAS, STMicroelectronics (Grenoble 2) SAS
Inventors: Layachi DAINECHE, Xavier CHBANI, Nadia VAN-DEN-BOSSCHE
-
Publication number: 20200110714
Abstract: Methods, systems, and devices for dynamically configuring transmission lines of a bus between two electronic devices (e.g., a controller and memory device) are described. A first device may determine a quantity of bits (e.g., data bits, control bits) to be communicated with a second device over a data bus. The first device may partition the data bus into a first set of transmission lines (e.g., based on the quantity of data bits) and a second set of transmission lines (e.g., based on the quantity of control bits). The first device may communicate the quantity of data bits over the first set of transmission lines and communicate the quantity of control bits over the second set of transmission lines. In some cases, the first device may repartition the data bus based on different quantities of data bits and control bits to be communicated with the second device at a different time.
Type: Application
Filed: September 23, 2019
Publication date: April 9, 2020
Inventors: Michael Dieter Richter, Thomas Hein, Martin Brox, Peter Mayer, Wolfgang Anton Spirkl
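One way to picture the partitioning step is splitting the lines proportionally to the bit quantities; the proportional rule below is purely an assumption, since the abstract only says the partition is based on the two quantities:

```python
def partition_bus(total_lines, data_bits, ctrl_bits):
    """Split a bus's transmission lines between data and control traffic,
    proportionally to the bits each must carry; both sets stay non-empty."""
    data_lines = round(total_lines * data_bits / (data_bits + ctrl_bits))
    data_lines = min(max(data_lines, 1), total_lines - 1)
    return data_lines, total_lines - data_lines
```

Repartitioning at a later time is then just calling the same function with the new bit quantities.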
-
Publication number: 20200110715
Abstract: System and method for improved transferring of data involving memory device systems. A memory appliance (MA) comprising a plurality of memory modules is configured to store data within the plurality of memory modules and further configured to receive data commands from the first server and a second server coupled to the MA. The data commands may include direct memory access commands such that the MA can service the data commands while bypassing a host controller of the MA.
Type: Application
Filed: October 7, 2019
Publication date: April 9, 2020
Inventors: Vlad FRUCHTER, Keith LOWERY, George Michael UHLER, Steven WOO, Chi-Ming (Philip) YEUNG, Ronald LEE
-
Publication number: 20200110716
Abstract: Techniques for command bus training to a memory device include triggering a memory device to enter a first or a second command bus training mode, outputting a command/address (CA) pattern via a command bus and compressing a sampled CA pattern returned from the memory device based on whether the memory device was triggered to be in the first or the second command bus training mode.
Type: Application
Filed: December 10, 2019
Publication date: April 9, 2020
Inventors: Christopher P. MOZAK, Steven T. TAYLOR, Alvin Shing Chye GOH
-
Publication number: 20200110717
Abstract: Techniques for transmitting data may comprise: receiving a first data transfer rate indicating a communication rate at which a first entity communicates with a second entity over a communications fabric; receiving a second data transfer rate indicating a communication rate at which the second entity communicates with the first entity over the communications fabric; and performing first processing to send first data from the first entity to the second entity over the communications fabric, said first processing including: determining whether the first data transfer rate is greater than the second data transfer rate; and responsive to determining the first data transfer rate is greater than the second data transfer rate, performing second processing by the first entity that controls and limits, in accordance with the second data transfer rate, a rate at which the first data is transmitted from the first entity to the second entity.
Type: Application
Filed: October 9, 2018
Publication date: April 9, 2020
Applicant: EMC IP Holding Company LLC
Inventors: Deepak Vokaliga, Subin George, Arieh Don
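The rate-limiting decision reduces to capping the sender at the receiver's rate only when the sender is faster. A minimal sketch (the function name is illustrative):

```python
def send_rate(tx_rate: float, rx_rate: float) -> float:
    """Rate at which the first entity transmits to the second: limited to
    the second entity's rate when the first entity's rate is greater."""
    return rx_rate if tx_rate > rx_rate else tx_rate
```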
-
Publication number: 20200110718
Abstract: In example implementations, an apparatus for detecting hardware components is provided. The apparatus includes a multipurpose integrated circuit comprising an input pin, a hardware component coupled to the input pin, and a two-way communication bus coupled to the multipurpose integrated circuit. The multipurpose integrated circuit is to receive an interrogation signal from a processor for the hardware component coupled to the pin via the two-way communication bus. A response signal that indicates that the hardware component is detected on the pin is generated in response to the interrogation signal. The response signal is then transmitted to the processor over the two-way communication bus.
Type: Application
Filed: June 21, 2017
Publication date: April 9, 2020
Inventors: Christopher RIJKEN, Chih Liang LI, Ronald E. DELUGA
-
Publication number: 20200110719
Abstract: A control method for a host device includes assigning a first detection command and a first identification number to a first slave device; receiving first response information generated by the first slave device to determine the first function number of the first slave device; and determining whether the first slave device is cascaded to a second slave device. When the first slave device is not cascaded to the second slave device, the host device performs a first specific action according to the first function number, or it directs the first slave device to perform a first specific action. When the first slave device is cascaded to the second slave device, the host device assigns a second detection command and a second identification number to the second slave device and receives second response information generated by the second slave device.
Type: Application
Filed: June 7, 2019
Publication date: April 9, 2020
Inventor: Sheng-Tsai CHANG
-
Publication number: 20200110720
Abstract: This disclosure generally relates to USB TYPE-C, and, in particular, DISPLAYPORT Alternate Mode communication in a USB TYPE-C environment. In one embodiment, a device determines a DISPLAYPORT mode and determines an orientation of a USB TYPE-C connector plug. A multiplexer multiplexes a DISPLAYPORT transmission based in part on the determined orientation of the USB TYPE-C connector plug.
Type: Application
Filed: December 10, 2019
Publication date: April 9, 2020
Inventors: Mark Edward Wentroble, Suzanne Mary Vining, Hassan Omar Ali
-
Publication number: 20200110721
Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to monitor a thread switching overhead parameter for an application executing in a processing system and, in response to a determination that the thread switching overhead parameter exceeds a threshold, to activate a thread management algorithm to reduce thread switching in the processing system. Other embodiments are also disclosed and claimed.
Type: Application
Filed: October 11, 2019
Publication date: April 9, 2020
Applicant: Intel Corporation
Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, Kiran C. Veernapu, Balaji Vembu, Vasanth Ranganathan, Prasoonkumar Surti
-
Publication number: 20200110722
Abstract: An exemplary embodiment of an extended peripheral component interconnect express (PCIe) device includes a host PCIe fabric comprising a host root complex. The host PCIe fabric has a first set of bus numbers and a first memory mapped input/output (MMIO) space on a host CPU. An extended PCIe fabric includes a root complex endpoint (RCEP) as part of an endpoint of the host PCIe fabric. The extended PCIe fabric has a second set of bus numbers and a second MMIO space separate from the first set of bus numbers and the first MMIO space, respectively.
Type: Application
Filed: August 13, 2019
Publication date: April 9, 2020
Inventor: Wesley Shao
-
Publication number: 20200110723
Abstract: The first information processing apparatus is configured to detect the removal device attached to the removal device interface and send, to the second information processing apparatus, a mount request to mount the removal device. The second information processing apparatus is configured to receive the mount request from the first information processing apparatus, mount, on the second information processing apparatus, the removal device attached to the removal device interface of the first information processing apparatus, and send a mount point identifier to the first information processing apparatus, the mount point identifier being an identifier indicating a mount point of the removal device mounted on the second information processing apparatus. The first information processing apparatus is further configured to receive the mount point identifier from the second information processing apparatus and mount the mount point of the second information processing apparatus indicated by the mount point identifier.
Type: Application
Filed: October 1, 2019
Publication date: April 9, 2020
Inventor: KENICHIRO NITTA
-
Publication number: 20200110724
Abstract: A USB detecting method for use with a controlling and processing unit, a USB input/output port, at least one switch and at least one USB peripheral device is provided. The at least one switch is arranged between the controlling and processing unit and the USB input/output port and/or arranged between the USB input/output port and the at least one USB peripheral device that is electrically connected with the USB input/output port. Firstly, a USB detection signal of the at least one USB peripheral device is provided to the controlling and processing unit. According to a result of receiving the USB detection signal, the controlling and processing unit determines whether the switch is reset. For resetting the at least one switch, the controlling and processing unit simulates the action of plugging in and pulling out the USB peripheral device.
Type: Application
Filed: November 8, 2018
Publication date: April 9, 2020
Inventors: YUN-PING LIU, HSIAO-HUI LEE, SHUEI-JIN TSAI
-
Publication number: 20200110725
Abstract: A general input/output communication port implements a communication stack that includes a physical layer, a data link layer and a transaction layer. The transaction layer includes assembling a packet header for a message request transaction to one or more logical devices. The packet header includes a format field to indicate the length of the packet header and to further specify whether the packet header includes a data payload, a subset of a type field to indicate the packet header relates to the message request transaction, and a message field. The message field includes a message to implement the message request transaction. The message includes at least one message that is selected from a group of messages.
Type: Application
Filed: July 22, 2019
Publication date: April 9, 2020
Applicant: Intel Corporation
Inventors: David J. Harriman, Jasmin Ajanovic
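A rough sketch of such a message-request header as a data structure follows; the field names, the low-bit payload convention, and the example message name are invented for illustration and do not reflect actual field encodings.

```python
from dataclasses import dataclass


@dataclass
class MessageRequestHeader:
    """Illustrative packet header for a message request transaction."""
    fmt: int        # format field: header length and payload indication
    msg_type: int   # subset of the type field marking a message request
    message: str    # message selected from a defined group of messages

    def has_payload(self) -> bool:
        # Illustrative convention: the low bit of the format field flags
        # that a data payload follows the header.
        return bool(self.fmt & 0x1)
```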
-
Publication number: 20200110726
Abstract: Methods, apparatus, and systems for transporting data units comprising multiple pieces of transaction data over high-speed interconnects. A flow control unit, called a KTI (Keizer Technology Interface) Flit, is implemented in a coherent multi-layer protocol supporting coherent memory transactions. The KTI Flit has a basic format that supports use of configurable fields to implement KTI Flits with specific formats that may be used for corresponding transactions. In one aspect, the KTI Flit may be formatted as multiple slots used to support transfer of multiple respective pieces of transaction data in a single Flit. The KTI Flit can also be configured to support various types of transactions, and multiple KTI Flits may be combined into packets to support transfer of data such as cache line transfers.
Type: Application
Filed: December 9, 2019
Publication date: April 9, 2020
Applicant: Intel Corporation
Inventors: Robert J. Safranek, Robert G. Blankenship, Debendra Das Sharma
-
Publication number: 20200110727
Abstract: A method for transmitting and updating program data in a highly compact format using recursive encoding, wherein data is deconstructed into chunklets and is processed through a series of reference code libraries that reduce the data to a sequence of reference codes, and where the output of each reference library is used as the input to the next.
Type: Application
Filed: December 16, 2019
Publication date: April 9, 2020
Inventors: Joshua Cooper, Aliasghar Riahi, Mojgan Haddad, Ryan Kourosh Riahi, Razmin Riahi, Charles Yeomans
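The chained reference-library pipeline described above can be sketched as follows. Plain dictionaries stand in for the reference code libraries, and the two-character chunklet size is an arbitrary illustrative choice.

```python
def to_chunklets(data, size):
    """Deconstruct data into fixed-size chunklets."""
    return [data[i:i + size] for i in range(0, len(data), size)]


def encode(data, libraries, size=2):
    """Process chunklets through a series of reference-code libraries;
    the output of each library is the input to the next."""
    codes = to_chunklets(data, size)
    for library in libraries:
        codes = [library.get(chunk, chunk) for chunk in codes]
    return codes
```

Each successive library maps the previous stage's codes to ever more compact reference codes, which is what makes the encoding recursive.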
-
Publication number: 20200110728
Abstract: Technologies are shown for storing a data object in a distributed application architecture. Critical data in the data object is stored in an object data block on a blockchain. Noncritical data elements in the data object are stored in data block files at an address on a distributed file system, where the addresses are stored in the data block. The object data block is retrieved from the blockchain. The noncritical elements are retrieved from the file system using the address in the data block. The critical and noncritical elements are combined into a reassembled data object. The critical and noncritical data elements can be differentiated in a data definition for the data object or by algorithmically analyzing data in the data object. Metadata for the data object can be stored in the object data block and utilized to combine the critical and noncritical elements into the reassembled data object.
Type: Application
Filed: April 26, 2019
Publication date: April 9, 2020
Inventors: Dmytro SEMENOV, Mahesh Kumar DATHRIKA, Michael RAWLINGS, Dylan Nelson Jamie PIERCEY
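The split-storage and reassembly flow can be sketched as below, with a list standing in for the blockchain, a dict standing in for the distributed file system, and an invented `fs://` address scheme; none of these stand-ins are from the patent.

```python
def store_object(obj, critical_fields, chain, fs):
    """Store critical fields in an object data block on the chain; write
    noncritical fields to the file system and record their addresses in
    the block. Returns the block's index on the chain."""
    block = {"critical": {}, "addresses": {}}
    for key, value in obj.items():
        if key in critical_fields:
            block["critical"][key] = value
        else:
            addr = f"fs://{key}"       # illustrative address scheme
            fs[addr] = value
            block["addresses"][key] = addr
    chain.append(block)
    return len(chain) - 1


def load_object(index, chain, fs):
    """Retrieve the object data block, fetch noncritical elements via the
    stored addresses, and combine both into a reassembled data object."""
    block = chain[index]
    obj = dict(block["critical"])
    for key, addr in block["addresses"].items():
        obj[key] = fs[addr]
    return obj
```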
-
Publication number: 20200110729
Abstract: Example implementations relate to identifying file system objects of a file system for generating an initial baseline of the file system. In an example, an inode table of the file system is retrieved. Inodes included in the inode table correspond respectively to file system objects of the file system. Attributes, including an object identifier and a time attribute, are extracted from each of the inodes of the inode table. A compilation of the object identifiers from the extracted attributes is provided to a service that generates the initial baseline of the file system using the compilation.
Type: Application
Filed: July 11, 2019
Publication date: April 9, 2020
Inventors: Priya Pappanaickenpalayam Muthuswamy, Rajkumar Kannan, Anoop Kumar Raveendran
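The extraction step can be sketched as below; the dict-based inode records and the `object_id`/`mtime` field names are illustrative stand-ins for on-disk inode attributes.

```python
def extract_attributes(inode_table):
    """Extract the object identifier and time attribute from each inode
    in the retrieved inode table."""
    return [(inode["object_id"], inode["mtime"]) for inode in inode_table]


def compile_identifiers(inode_table):
    """Build the compilation of object identifiers that is provided to
    the service generating the initial baseline."""
    return [object_id for object_id, _mtime in extract_attributes(inode_table)]
```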
-
Publication number: 20200110730
Abstract: A database management system for controlling prioritized transactions, comprising: a processor adapted to: receive from a client module a request to write into a database item as part of a high-priority transaction; check a lock status and an injection status of the database item; when the lock status of the database item includes a lock owned by a low-priority transaction and the injection status is not-injected status: change the injection status of the database item to injected status; copy current content of the database item to an undo buffer of the low-priority transaction; and write into a storage engine of the database item.
Type: Application
Filed: December 9, 2019
Publication date: April 9, 2020
Inventor: David DOMINGUEZ
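The lock-and-injection check can be sketched as a single function; the dict-based item record, the `"low"` lock value, and the field names are illustrative assumptions rather than the claimed system's actual representation.

```python
def high_priority_write(item, value, engine):
    """Handle a high-priority write request: if the item is locked by a
    low-priority transaction and not yet injected, mark it injected and
    copy the current content to the low-priority transaction's undo
    buffer, then write the new value into the storage engine."""
    if item["lock"] == "low" and not item["injected"]:
        item["injected"] = True
        item["undo"].append(engine[item["key"]])  # save current content
    engine[item["key"]] = value
```

The undo buffer lets the low-priority owner roll back to the pre-injection content if it is later aborted.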
-
Publication number: 20200110731
Abstract: A system and method of use resolves the frustration of repeated manual work during schema mapping. The system utilizes a transformation graph: a collection of nodes (unified attributes) and edges (transformations) in which source attributes are mapped and transformed. The system further leverages existing mappings and transformations for the purpose of suggesting to a user the optimal paths (i.e., the lowest cost paths) for mapping new sources, which is particularly useful when new sources share similarity with previously mapped sources and require the same transformations. As such, the system also promotes an evolving schema by allowing users to select which unified attributes they want to include in a target schema at any time. The system addresses the technical challenge of finding optimal transformation paths and how to present these to the user for evaluation.
Type: Application
Filed: December 5, 2019
Publication date: April 9, 2020
Inventors: Sharon ROTH, Ihab F. ILYAS, Daniel Meir BRUCKNER, Gideon GOLDIN
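Finding the lowest-cost path through such a transformation graph is a standard shortest-path problem; the Dijkstra-style sketch below is one plausible approach, since the abstract does not name a particular search algorithm. Nodes stand for unified attributes and weighted edges for transformations.

```python
import heapq


def lowest_cost_path(graph, source, target):
    """Return (cost, path) for the lowest-cost transformation path from
    source to target, or None if no path exists. graph maps each node to
    a list of (edge_cost, neighbor) pairs."""
    queue = [(0, source, [source])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for edge_cost, neighbor in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return None
```

A returned path is what the system would surface to the user as a suggested mapping for evaluation.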
-
Publication number: 20200110732
Abstract: Systems and methods are provided to combine data from various internal data sources and external data sources to generate and maintain frequently asked questions and knowledge bases. By doing so, the collective knowledge of a company and customers' questions around it are aggregated and served in a way that increases the availability and relevancy of such knowledge for both self-help channels and computing agents (e.g., customer service representatives (CSRs)).
Type: Application
Filed: August 19, 2019
Publication date: April 9, 2020
Inventor: Ian Beaver
-
Publication number: 20200110733
Abstract: A method and apparatus for criterion-based retention of data object versions are disclosed. In the method and apparatus, a plurality of keys are sorted in accordance with an ordering scheme, whereby a key of the plurality of keys has an associated version of a data object and a timestamp. The key is inspected in accordance with the ordering scheme to determine, based at least in part on the timestamp, whether a criterion for performing an action on the associated version of the data object is satisfied. If the criterion is satisfied, a marker key is added to the plurality of keys, whereby the marker key precedes the inspected key according to the ordering scheme and indicates that the criterion is satisfied.
Type: Application
Filed: December 9, 2019
Publication date: April 9, 2020
Inventors: Praveen Kumar Gattu, Aykud Gonen, Jonathan Jorge Nadal, Abhilasha Seth, Joseph Thomas Selman
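The marker-key insertion can be sketched as below. Representing keys as `(name, version, timestamp)` tuples and using "older than a cutoff" as the retention criterion are illustrative choices; the patent leaves the criterion and key layout general.

```python
def insert_marker_keys(sorted_keys, cutoff):
    """Inspect each key in order; when its timestamp satisfies the
    criterion (here: older than cutoff), add a marker key that precedes
    the inspected key in the ordering and records that the criterion
    is satisfied."""
    result = []
    for key in sorted_keys:
        name, version, timestamp = key
        if timestamp < cutoff:
            result.append(("MARKER", name, version))
        result.append(key)
    return result
```

A later pass can then act on any version whose key is preceded by a marker, without re-evaluating the criterion.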
-
Publication number: 20200110734
Abstract: Techniques associated with data management and distribution are described, including receiving at a content distribution and management system activity data associated with a user from a client, the client having an interface configured to display commercial content and a player configured to access the content distribution and management system, storing the activity data in a database, displaying the commercial content using the interface, receiving other activity data associated with the user from the client, storing the other activity data in the database, determining other commercial content to display using the activity data and the other activity data, and displaying the other commercial content.
Type: Application
Filed: August 19, 2019
Publication date: April 9, 2020
Applicant: 1776 Media Network, Inc.
Inventor: Michael Joseph Lourdeaux