Patent Applications Published on January 18, 2018
  • Publication number: 20180018249
    Abstract: Systems and methods for testing mobile devices are disclosed. In an embodiment, a mobile device testing system comprises a non-transitory memory and one or more hardware processors coupled to the non-transitory memory. The one or more hardware processors execute instructions to perform operations that include receiving a command XML sheet and rendering the command XML sheet into a user interface application, wherein the command XML sheet comprises elements corresponding to one or more commands for testing a mobile device.
    Type: Application
    Filed: August 17, 2016
    Publication date: January 18, 2018
    Inventor: Wenheng Zhao
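The command-XML-sheet idea above can be sketched as follows. This is a hypothetical illustration only: the element and attribute names (`<commands>`, `<command>`, `name`, coordinates) are assumptions, not taken from the patent.

```python
# Hypothetical sketch: parsing a "command XML sheet" whose elements map to
# test commands for a mobile device (schema names are assumed for the example).
import xml.etree.ElementTree as ET

SHEET = """
<commands>
  <command name="tap" x="100" y="200"/>
  <command name="swipe" x="0" y="0" x2="300" y2="0"/>
</commands>
"""

def parse_command_sheet(xml_text):
    """Render each <command> element into a (name, attributes) pair."""
    root = ET.fromstring(xml_text)
    return [(el.get("name"),
             {k: v for k, v in el.attrib.items() if k != "name"})
            for el in root.findall("command")]

commands = parse_command_sheet(SHEET)
```

A user-interface layer could then render each `(name, attributes)` pair as a clickable test command.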
  • Publication number: 20180018250
    Abstract: Embodiments relate to pre-silicon device testing using a persistent command table. An aspect includes receiving a value for a persistent command parameter from a user. Another aspect includes determining whether the value of the persistent command parameter is greater than zero. Another aspect includes based on determining whether the value of the persistent command parameter is greater than zero, selecting a number of commands equal to the value of the persistent command parameter from a regular command table of a driver of a device under test. Another aspect includes adding the selected commands to the persistent command table of the driver. Another aspect includes performing testing of the device under test via the driver using only commands that are in the persistent command table of the driver.
    Type: Application
    Filed: October 4, 2017
    Publication date: January 18, 2018
    Inventors: Dean G. Bair, Rebecca M. Gott, Edward J. Kaminski, JR., William J. Lewis, Chakrapani Rayadurgam
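The persistent-command-table selection described above reduces to a simple rule. This sketch is illustrative; the function and table names are assumptions, not the patent's own API.

```python
# Illustrative sketch of the persistent-command-table aspect: when the
# persistent command parameter is greater than zero, that many commands are
# selected from the driver's regular command table, and testing of the device
# under test then uses only the persistent table.
def build_persistent_table(regular_table, persistent_param):
    """Select `persistent_param` commands from the regular command table."""
    if persistent_param > 0:
        return list(regular_table[:persistent_param])
    return []

regular = ["READ", "WRITE", "FLUSH", "TRIM"]
persistent = build_persistent_table(regular, 2)
```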
  • Publication number: 20180018251
    Abstract: A method and system for displaying application performance data. In an embodiment, performance data is collected from an application during display by the application of a first display window. Performance data is also collected from the application during display by the application of a second display window. On a display device, an image of the first display window is displayed that includes a first displayable performance indicator that is visually modifiable to correlate to variations in performance data collected from the application. On the display device, an image of the second display window is simultaneously displayed with the image of the first display window. The simultaneously displayed image of the second display window includes a second displayable performance indicator that is visually modifiable to correlate to variations in performance data collected from the application.
    Type: Application
    Filed: September 15, 2017
    Publication date: January 18, 2018
    Inventors: Jonathan B. Lindo, Seshadri Venkataraman, Vamsee K. Lakamsani, Harshit Bapna
  • Publication number: 20180018252
    Abstract: A tracking system for tracking and reporting content usage activity on computing devices is provided. The tracking system embodies a method for determining overall efficiency and efficacy by measuring active usage time when the user is actually engaging with or looking at the content. The method enables administrators to track and categorize any web site or application and run resulting ROI reports comparing quantified actual usage against recommended usage. The system may thereby alert device users whose usage activity, as measured by time spent on categorized sites and applications, deviates from recommended usage or predetermined thresholds.
    Type: Application
    Filed: June 20, 2017
    Publication date: January 18, 2018
    Inventors: Tammy Ann Farrell, Dulcina Bettencourt Belcher
  • Publication number: 20180018253
    Abstract: Systems, methods, and software can be used to automate software verifications. In some aspects, one or more application program interface (API) call pairs are generated based on a source code of a user module that invokes an API. Each of the one or more API call pairs comprises a first API call that invokes the API followed by a second API call that invokes the API. One or more fragments are generated based on the one or more API call pairs. Each of the one or more fragments represents an execution sequence that includes at least one of the one or more API call pairs. The one or more fragments are verified.
    Type: Application
    Filed: July 13, 2017
    Publication date: January 18, 2018
    Applicant: BlackBerry Limited
    Inventors: Andrew James MALTON, Daniel Lewis NEVILLE
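The pair-and-fragment generation above can be sketched in a few lines. This is a minimal illustration assuming the API calls have already been extracted from the user module's source; the function names are hypothetical.

```python
# Minimal sketch: from a set of extracted API calls, form every ordered pair
# (first call followed by second call), then wrap each pair in a fragment
# representing an execution sequence to be verified.
from itertools import product

def generate_call_pairs(api_calls):
    """Every ordered pair of API calls: first invocation, then second."""
    return list(product(api_calls, repeat=2))

def generate_fragments(call_pairs):
    """Each fragment is an execution sequence containing one call pair."""
    return [list(pair) for pair in call_pairs]

calls = ["open", "close"]
pairs = generate_call_pairs(calls)
fragments = generate_fragments(pairs)
```

Verifying the fragments (e.g. with a model checker) would then exercise each two-call interaction with the API.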
  • Publication number: 20180018254
    Abstract: Generating accessibility suggestions for segments of a web page. A web page is segmented into constituent portions and an accessibility of each portion is determined with suggestions for accessibility improvement.
    Type: Application
    Filed: July 18, 2016
    Publication date: January 18, 2018
    Inventors: Nidhi Bansal, Mudit Mehrotra
  • Publication number: 20180018255
    Abstract: The method includes identifying a test report log for a regression test. The method further includes identifying one or more errors in the identified test report log. The method further includes determining a severity category for the one or more identified errors in the identified test report log. The method further includes determining a severity category for the identified test report log based on the determined severity category for the one or more identified errors in the identified test report log.
    Type: Application
    Filed: October 9, 2017
    Publication date: January 18, 2018
    Inventors: Christopher L. Brealey, Shinoj Zacharias
  • Publication number: 20180018256
    Abstract: The method includes identifying a test report log for a regression test. The method further includes identifying one or more errors in the identified test report log. The method further includes determining a severity category for the one or more identified errors in the identified test report log. The method further includes determining a severity category for the identified test report log based on the determined severity category for the one or more identified errors in the identified test report log.
    Type: Application
    Filed: October 9, 2017
    Publication date: January 18, 2018
    Inventors: Christopher L. Brealey, Shinoj Zacharias
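The severity roll-up described in the two entries above can be sketched as taking the highest severity among a log's identified errors. The category names and their ordering here are assumptions for illustration.

```python
# Sketch of deriving a test report log's severity category from the severity
# categories of the individual errors identified in it (category names and
# ordering are assumed, not taken from the patent).
SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def categorize_log(error_severities):
    """The report log takes the highest severity among its identified errors."""
    if not error_severities:
        return "low"
    return max(error_severities, key=SEVERITY_ORDER.index)

log_severity = categorize_log(["low", "high", "medium"])
```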
  • Publication number: 20180018257
    Abstract: To provide a search device with less memory consumption, the search device includes a first associative memory searched with a first search key, a second associative memory searched with a second search key, a concatenated search data generating unit that generates first search information based on hit information including multiple hits in the first associative memory, and a search key generating unit that includes a first key generating unit generating a portion of search data as the first search key and a second search key generating unit generating the first search information and another portion of the search data as the second search key.
    Type: Application
    Filed: May 9, 2017
    Publication date: January 18, 2018
    Inventor: Takeo MIKI
  • Publication number: 20180018258
    Abstract: Examples relate to ordering updates for nonvolatile memory accesses. In some examples, a first update that is propagated from a write-through processor cache of a processor is received by a write ordering buffer, where the first update is associated with a first epoch. The first update is stored in a first buffer entry of the write ordering buffer. At this stage, a second update that is propagated from the write-through processor cache is received, where the second update is associated with a second epoch. A second buffer entry of the write ordering buffer is allocated to store the second update. The first buffer entry and the second buffer entry can then be evicted to non-volatile memory in epoch order.
    Type: Application
    Filed: January 30, 2015
    Publication date: January 18, 2018
    Inventors: Sanketh Nalli, Haris Volos, Kimberly Keeton
  • Publication number: 20180018259
    Abstract: A method and a storage system are provided for implementing enhanced solid state storage class memory (eSCM) including a direct-attached dual in-line memory module (DIMM) card containing Dynamic Random Access Memory (DRAM) and at least one non-volatile memory, for example, Phase Change Memory (PCM), Resistive RAM (ReRAM), Spin-Transfer-Torque RAM (STT-RAM), and NAND Flash chips. An eSCM processor controls selectively allocating data among the DRAM and the at least one non-volatile memory primarily based upon a data set size.
    Type: Application
    Filed: September 26, 2017
    Publication date: January 18, 2018
    Inventors: Frank R. CHU, Luiz M. FRANCA-NETO, Timothy K. TSAI, Qingbo WANG
  • Publication number: 20180018260
    Abstract: An information processing device according to an embodiment includes a memory and a mediation unit. The memory includes memory use areas that are allocated to respective tasks, and an identification-information area that identifies the tasks. The mediation unit mediates writing and reading, by one of the tasks, into and from one of the memory use areas. When accepting a request for writing or reading from the one task, the mediation unit writes identification information corresponding to the one task into the identification-information area, reads the information stored in the identification-information area at a predetermined timing, and detects an abnormality in the memory on the basis of the read information.
    Type: Application
    Filed: June 20, 2017
    Publication date: January 18, 2018
    Applicant: FUJITSU TEN LIMITED
    Inventor: Masaru SUGIHARA
  • Publication number: 20180018261
    Abstract: A coordinating node acts as a write back cache, isolating local cache storage endpoints from latencies associated with accessing geographically remote cloud cache and storage resources.
    Type: Application
    Filed: June 19, 2017
    Publication date: January 18, 2018
    Inventors: Lazarus Vekiarides, Daniel Suman, Janice Ann Lacy
  • Publication number: 20180018262
    Abstract: A processor includes a performance monitor that logs reservation losses, and additionally logs reasons for the reservation losses. By logging reasons for the reservation losses, the performance monitor provides data that can be used to determine whether the reservation losses were due to valid programming, such as two threads competing for the same lock, or whether the reservation losses were due to bad programming. When the reservation losses are due to bad programming, the information can be used to improve the programming to obtain better performance.
    Type: Application
    Filed: July 15, 2016
    Publication date: January 18, 2018
    Inventors: Shakti Kapoor, John A. Schumann, Karen E. Yokum
  • Publication number: 20180018263
    Abstract: An electronic device is provided. An electronic device according to an implementation of the disclosed technology is an electronic device including a semiconductor memory, wherein the semiconductor memory includes: a substrate including a first region in which a plurality of memory cells are disposed and a second region adjacent to the first region; a first interlayer insulating layer disposed over the substrate; a plurality of first memory cells penetrating through the first interlayer insulating layer in the first region, an uppermost portion of each memory cell of the first memory cells having a first conductive carbon-containing pattern; and a first insulating carbon-containing pattern located over the first interlayer insulating layer in the second region.
    Type: Application
    Filed: March 14, 2017
    Publication date: January 18, 2018
    Inventors: Jong-Young CHO, Eung-Rim HWANG, In-Hoe KIM, Young-Min NA, Gwang-Won LEE
  • Publication number: 20180018264
    Abstract: A processing system indicates the pendency of a memory access request for data at the cache entry that is assigned to store the data in response to the memory access request. While executing instructions, the processor issues requests for data to the cache most proximal to the processor. In response to a cache miss, the cache controller identifies an entry of the cache to store the data in response to the memory access request, and stores an indication that the memory access request is pending at the identified cache entry. If the cache controller receives a subsequent memory access request for the data while the memory access request is pending at the higher level of the memory hierarchy, the cache controller identifies that the memory access request is pending based on the indicator stored at the entry.
    Type: Application
    Filed: July 15, 2016
    Publication date: January 18, 2018
    Inventor: Paul James Moyer
  • Publication number: 20180018265
    Abstract: A method for increasing storage space in a system containing a block data storage device, a memory, and a processor is provided. Generally, the processor is configured by the memory to tag metadata of a data block of the block storage device, indicating the block as free, used, or semifree. The free tag indicates the data block is available to the system for storing data when needed, the used tag indicates the data block contains application data, and the semifree tag indicates the data block contains cache data and is available to the system for storing application data if no blocks marked with the free tag are available to the system.
    Type: Application
    Filed: September 26, 2017
    Publication date: January 18, 2018
    Inventors: Derry Shribman, Ofer Vilenski
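The free/used/semifree tagging scheme above implies a simple allocation order, sketched below. The data structures and function name are assumptions made for the example.

```python
# Illustrative allocator following the tagging scheme: prefer blocks tagged
# "free"; fall back to "semifree" (blocks holding cache data) only when no
# free block exists; never overwrite "used" blocks.
def allocate_block(blocks):
    """Return the index of a block to use for application data, or None."""
    for tag in ("free", "semifree"):
        for i, block_tag in enumerate(blocks):
            if block_tag == tag:
                return i
    return None

blocks = ["used", "semifree", "free", "used"]
chosen = allocate_block(blocks)
```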
  • Publication number: 20180018266
    Abstract: A processing system includes a cache and a prefetcher to prefetch lines from a memory into the cache. The prefetcher receives a memory access request to a first address in the memory and sets a stride length associated with an instruction that issued the memory access request to a length of a line in the cache. The stride length indicates a number of bytes between addresses of lines that are prefetched from the memory into the cache.
    Type: Application
    Filed: July 18, 2016
    Publication date: January 18, 2018
    Inventor: William Evan Jones, III
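The stride-setting idea above, where the stride length equals the cache line length, can be sketched with concrete numbers. The line size and address values are illustrative assumptions.

```python
# Minimal sketch: with the stride set to the cache line size, prefetch
# addresses advance one line at a time from the missing address.
LINE_SIZE = 64  # bytes per cache line; the stride is set to this length

def prefetch_addresses(miss_address, count):
    """Addresses of the next `count` lines to prefetch after a miss."""
    base = (miss_address // LINE_SIZE) * LINE_SIZE  # align to line boundary
    return [base + LINE_SIZE * i for i in range(1, count + 1)]

addrs = prefetch_addresses(0x1007, 3)
```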
  • Publication number: 20180018267
    Abstract: A speculative read request is received from a host device over a buffered memory access link for data associated with a particular address. A read request is sent for the data to a memory device. The data is received from the memory device in response to the read request and the received data is sent to the host device as a response to a demand read request received subsequent to the speculative read request.
    Type: Application
    Filed: May 23, 2017
    Publication date: January 18, 2018
    Applicant: Intel Corporation
    Inventors: Brian S. Morris, Bill Nale, Robert G. Blankenship, Yen-Cheng Liu
  • Publication number: 20180018268
    Abstract: Providing memory bandwidth compression using multiple last-level cache (LLC) lines in a central processing unit (CPU)-based system is disclosed. In some aspects, a compressed memory controller (CMC) provides an LLC comprising multiple LLC lines, each providing a plurality of sub-lines the same size as a system cache line. The contents of the system cache line(s) stored within a single LLC line are compressed and stored in system memory within the memory sub-line region corresponding to the LLC line. A master table stores information indicating how the compressed data for an LLC line is stored in system memory by storing an offset value and a length value for each sub-line within each LLC line. By compressing multiple system cache lines together and storing compressed data in a space normally allocated to multiple uncompressed system lines, the CMC enables compression sizes to be smaller than the memory read/write granularity of the system memory.
    Type: Application
    Filed: September 28, 2017
    Publication date: January 18, 2018
    Inventors: Colin Beaton Verrilli, Mattheus Cornelis Antonius Adrianus Heddes, Mark Anthony Rinaldi, Natarajan Vaidhyanathan
  • Publication number: 20180018269
    Abstract: A hybrid data storage device disclosed herein includes a main data store, one or more data storage caches, and a data storage cache management sub-system. The hybrid data storage device is configured to limit write operations on the one or more data storage caches to less than an endurance value for the data storage cache. In one implementation, the data storage cache management sub-system limits or denies requests for promotion of data from the main data store to the one or more data storage caches. In another implementation, the data storage cache management sub-system limits garbage collection operations on the data storage cache.
    Type: Application
    Filed: July 13, 2016
    Publication date: January 18, 2018
    Inventors: Sumanth Jannyavula Venkata, Mark A. Gaertner, Jonathan G. Backman
  • Publication number: 20180018270
    Abstract: Expandable cache management dynamically manages cache storage for multiple network shares configured in a file server. Once a file is written to a directory or folder on a specially designated network share, such as one that is configured for “infinite backup,” an intermediary pre-backup copy of the file is created in an expandable cache in the file server that hosts the network share. On write operations, cache storage space can be dynamically expanded or freed up by pruning previously backed up data. This advantageously creates flexible storage caches in the file server for each network share, each cache managed independently of other like caches for other network shares on the same file server. On read operations, intermediary file storage in the expandable cache gives client computing devices speedy access to data targeted for backup, which is generally quicker than restoring files from backed up secondary copies.
    Type: Application
    Filed: May 22, 2017
    Publication date: January 18, 2018
    Inventors: Satish Chandra KILARU, Rajiv KOTTOMTHARAYIL, Paramasivam KUMARASAMY, William KATCHER
  • Publication number: 20180018271
    Abstract: A cache stores, along with data that is being transferred from a higher level cache to a lower level cache, information indicating the higher level cache location from which the data was transferred. Upon receiving a request for data that is stored at the location in the higher level cache, a cache controller stores the higher level cache location information in a status tag of the data. The cache controller then transfers the data with the status tag indicating the higher level cache location to a lower level cache. When the data is subsequently updated or evicted from the lower level cache, the cache controller reads the status tag location information and transfers the data back to the location in the higher level cache from which it was originally transferred.
    Type: Application
    Filed: July 14, 2016
    Publication date: January 18, 2018
    Inventor: Paul James Moyer
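The round-trip behavior described above, where a status tag records the higher-level cache location so an evicted line returns to its origin, can be sketched with plain dictionaries. The tag layout and function names are assumptions for illustration.

```python
# Sketch of the status-tag scheme: on demotion, the line carries its
# higher-level location in a tag; on eviction from the lower level, the tag
# routes the line back to the slot it originally came from.
def demote(higher, lower, hi_loc, lo_loc):
    """Transfer a line down, storing its higher-level location in a tag."""
    lower[lo_loc] = {"data": higher.pop(hi_loc), "origin": hi_loc}

def evict(higher, lower, lo_loc):
    """Read the tag and return the line to its original higher-level slot."""
    entry = lower.pop(lo_loc)
    higher[entry["origin"]] = entry["data"]

l2 = {5: "blockA"}  # higher-level cache: location -> data
l3 = {}             # lower-level cache
demote(l2, l3, 5, 0)
evict(l2, l3, 0)
```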
  • Publication number: 20180018272
    Abstract: The processor provides a host computer with a logical volume based on a physical storage device. Based on a command from the host computer, the control device writes, into a memory, address information that associates a logical address in the logical volume with a device address in the physical storage device. The control device receives a command from the host computer and if it is determined that the command is a read command, identifies a first logical address designated by the command and determines whether or not the first logical address is included in the address information. If the first address is included in the address information, the control device specifies a first device address corresponding to the first logical address, reads read data stored in an area indicated by the first device address, and transmits the read data to the host computer.
    Type: Application
    Filed: January 28, 2015
    Publication date: January 18, 2018
    Inventors: Hirotoshi AKAIKE, Norio SHIMOZONO, Kazushi NAKAGAWA
  • Publication number: 20180018273
    Abstract: A server LPAR operating in a virtualized computer shares pages with client LPARs using a shared memory region (SMR). A virtualization function of the computer receives a get-page-ID request associated with a client LPAR to identify a physical page corresponding to a shared page included in the SMR. The virtualization function requests the server LPAR to provide an identity of the physical page. The virtualization function receives a page-ID response comprising the identity of a server LPAR logical page that corresponds to the physical page. The virtualization element determines a physical page identity and communicates the physical page identity to the client LPAR. The virtualization element receives a page ID enter request and enters an identity of the physical page into a translation element of the computer to associate a client LPAR logical page with the physical page.
    Type: Application
    Filed: July 14, 2016
    Publication date: January 18, 2018
    Inventors: Ramanjaneya S. Burugula, Niteesh K. Dubey, Joefon Jann, Pratap C. Pattnaik, Hao Yu
  • Publication number: 20180018274
    Abstract: A marking capability is used to provide an indication of whether a block of memory is being used by a guest control program to back an address translation structure. The marking capability includes setting an indicator in one or more locations associated with the block of memory. In a further aspect, the marking capability includes resetting the one or more indicators to indicate that the block of memory is no longer being used by the guest control program to back the address translation structure.
    Type: Application
    Filed: July 18, 2016
    Publication date: January 18, 2018
    Inventors: Jonathan D. Bradbury, Michael K. Gschwind
  • Publication number: 20180018275
    Abstract: A marking capability is used to provide an indication of whether a block of memory is being used by a guest control program to back an address translation structure. The marking capability includes setting an indicator in one or more locations associated with the block of memory. In a further aspect, the marking capability includes a purging capability that limits the purging of translation look-aside buffers and other such structures based on the marking.
    Type: Application
    Filed: July 18, 2016
    Publication date: January 18, 2018
    Inventors: Jonathan D. Bradbury, Christian Jacobi, Anthony Saporito
  • Publication number: 20180018276
    Abstract: A marking capability is used to provide an indication of whether a block of memory is backing an address translation structure of a control program being managed by a virtual machine manager. By providing the marking, the virtual machine manager may check the indication prior to making paging decisions. With this information, a hint may be provided to the hardware to be used in decisions relating to purging associated address translation structures, such as translation look-aside buffer (TLB) entries.
    Type: Application
    Filed: July 18, 2016
    Publication date: January 18, 2018
    Inventors: Jonathan D. Bradbury, Michael K. Gschwind
  • Publication number: 20180018277
    Abstract: Managing memory of a computing environment. A determination is made as to whether a block of memory is being used to back an address translation structure used by a guest program. The block of memory is a block of host memory, and the guest program is managed by a virtual machine manager that further manages the host memory. A memory management action is performed based on whether the block of memory is being used to back the address translation structure.
    Type: Application
    Filed: July 18, 2016
    Publication date: January 18, 2018
    Inventors: Jonathan D. Bradbury, Michael K. Gschwind
  • Publication number: 20180018278
    Abstract: A marking capability is used to provide an indication of whether a block of memory is being used by a guest control program to back an address translation structure. The marking capability includes setting an indicator in one or more locations associated with the block of memory. In a further aspect, the marking capability includes a purging capability that limits the purging of translation look-aside buffers and other such structures based on the marking.
    Type: Application
    Filed: July 18, 2016
    Publication date: January 18, 2018
    Inventors: Jonathan D. Bradbury, Christian Jacobi, Anthony Saporito
  • Publication number: 20180018279
    Abstract: A marking capability is used to provide an indication of whether a block of memory is backing an address translation structure of a control program being managed by a virtual machine manager. By providing the marking, the virtual machine manager may check the indication prior to making paging decisions. With this information, a hint may be provided to the hardware to be used in decisions relating to purging associated address translation structures, such as translation look-aside buffer (TLB) entries.
    Type: Application
    Filed: July 18, 2016
    Publication date: January 18, 2018
    Inventors: Jonathan D. Bradbury, Michael K. Gschwind
  • Publication number: 20180018280
    Abstract: A marking capability is used to provide an indication of whether a block of memory is being used by a guest control program to back an address translation structure. The marking capability includes setting an indicator in one or more locations associated with the block of memory. In a further aspect, the marking capability includes an invalidation facility based on the setting of the indicators.
    Type: Application
    Filed: July 18, 2016
    Publication date: January 18, 2018
    Inventors: Jonathan D. Bradbury, Michael K. Gschwind, Lisa Cranton Heller, Christian Jacobi, Damian L. Osisek, Anthony Saporito
  • Publication number: 20180018281
    Abstract: A marking capability is used to provide an indication of whether a block of memory is backing an address translation structure of a control program being managed by a virtual machine manager. By providing the marking, the virtual machine manager may check the indication prior to making paging decisions. With this information, a hint may be provided to the hardware to be used in decisions relating to purging associated address translation structures, such as translation look-aside buffer (TLB) entries.
    Type: Application
    Filed: July 18, 2016
    Publication date: January 18, 2018
    Inventors: Jonathan D. Bradbury, Lisa Cranton Heller, Christian Jacobi, Damian L. Osisek, Anthony Saporito
  • Publication number: 20180018282
    Abstract: Increasing the scope of local purges of structures associated with address translation. A hardware thread of a physical core of a machine configuration issues a purge request. A determination is made as to whether the purge request is a local request. Based on the purge request being a local request, entries of a structure associated with address translation are purged on at least multiple hardware threads of a set of hardware threads of the machine configuration.
    Type: Application
    Filed: July 18, 2016
    Publication date: January 18, 2018
    Inventors: Jonathan D. Bradbury, Fadi Y. Busaba, Lisa Cranton Heller
  • Publication number: 20180018283
    Abstract: Selective purging of guest entries of structures associated with address translation. A request to purge entries of a structure associated with address translation is obtained. Based on obtaining the request, a determination is made as to whether selective purging of the structure associated with address translation is to be performed. Based on determining that selective purging is to be performed, one or more entries of the structure associated with address translation are purged. The selectively purging includes clearing the one or more entries of the structure associated with address translation for a selected guest operating system of the computing environment and leaving one or more other entries of one or more other guest operating systems in the structure associated with address translation. The selected guest operating system and the one or more other guest operating systems are managed by a host of the computing environment.
    Type: Application
    Filed: July 18, 2016
    Publication date: January 18, 2018
    Inventors: Christian Borntraeger, Jonathan D. Bradbury, Lisa Cranton Heller, Christian Jacobi, Damian L. Osisek, Anthony Saporito, Martin Schwidefsky
  • Publication number: 20180018284
    Abstract: Selective purging of entries of structures associated with address translation. A request to purge entries of a structure associated with address translation is obtained. Based on obtaining the request, a determination is made as to whether selective purging of the structure associated with address translation is to be performed. Based on determining that selective purging is to be performed, one or more entries of the structure associated with address translation are purged. The selectively purging includes clearing the one or more entries of the structure associated with address translation for a host of the computing environment and leaving one or more entries of one or more guest operating systems in the structure associated with address translation. The one or more guest operating systems are managed by the host.
    Type: Application
    Filed: July 18, 2016
    Publication date: January 18, 2018
    Inventors: Christian Borntraeger, Jonathan D. Bradbury, Lisa Cranton Heller, Christian Jacobi, Martin Schwidefsky
  • Publication number: 20180018285
    Abstract: A method begins by a processing module receiving an encoded data slice for storage in memory that is organized as a plurality of log files and identifying a log file based on information regarding the encoded data slice to produce an identified log file, wherein the identified log file is storing at least one other encoded data slice. The method continues with the processing module comparing storage parameters of the identified log file with desired storage parameters associated with the encoded data slice. The method continues with the processing module attempting to identify a second log file based on an alternate log file storage protocol when the storage parameters of the identified log file compare unfavorably with the desired storage parameters and when the second log file is identified, storing the encoded data slice in the second log file.
    Type: Application
    Filed: October 11, 2011
    Publication date: January 18, 2018
    Applicant: CLEVERSAFE, INC.
    Inventors: Ilya Volvovski, Andrew Baptist, Greg Dhuse
  • Publication number: 20180018286
    Abstract: Provided is a method for configuring the functional capabilities of a computer system. The computer system may include a persistent memory and a replaceable functional unit. The method may include transferring, in response to a repair action for the functional unit, enablement data that is stored on the functional unit to the persistent memory. The enablement data may specify one or more functional capabilities of the functional unit that are enabled. The method may further include erasing the enablement data from the functional unit after it has been transferred to the persistent storage. The method may further include obtaining a second unique identification item from a replacement unit. The method may further include obtaining new enablement data. The new enablement data may be transferred to the replacement unit.
    Type: Application
    Filed: September 14, 2016
    Publication date: January 18, 2018
    Inventors: Christine Axnix, Franz Hardt, Marco Kraemer, Jakob C. Lang
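The repair flow above can be sketched as a short sequence of transfers; the dictionary shapes and the `issue_enablement` callback are illustrative assumptions, not the claimed mechanism:

```python
def repair_functional_unit(persistent_memory, old_unit, replacement_unit, issue_enablement):
    """Sketch of the abstract's repair flow (data shapes are hypothetical)."""
    # Transfer the enablement data stored on the functional unit to the
    # persistent memory in response to the repair action.
    persistent_memory["enablement"] = old_unit["enablement"]
    # Erase the enablement data from the unit after the transfer.
    old_unit["enablement"] = None
    # Obtain the replacement unit's unique identification item, then obtain
    # new enablement data (e.g. re-issued for that id) and transfer it over.
    replacement_unit["enablement"] = issue_enablement(
        replacement_unit["id"], persistent_memory["enablement"])
    return replacement_unit

pmem, old = {}, {"id": "unit-1", "enablement": {"cores": 4}}
new = repair_functional_unit(pmem, old, {"id": "unit-2", "enablement": None},
                             lambda uid, data: dict(data))
```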
  • Publication number: 20180018287
    Abstract: A memory device including at least one memory location for storing information representing data written using a first encryption/decryption method, and a read channel using a second encryption/decryption method for reading and decrypting information as written is disclosed. The memory device also includes an apparatus that prevents the reading of the at least one memory location using the second encryption/decryption method, in response to an indication that the at least one memory location was written using the first encryption/decryption method. In another embodiment, a reading of a predefined or custom code is returned in response to an indication of another encryption/decryption method.
    Type: Application
    Filed: September 14, 2017
    Publication date: January 18, 2018
    Inventors: William Jared WALKER, Cory LAPPI, Darin Edward GERHART, Daniel Robert LIPPS
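The guard behavior this abstract describes can be modeled as below; this sketch captures only the access-control behavior, with no real cryptography, and every name is an assumption:

```python
class GuardedMemory:
    """Sketch of the abstract's guard: a location written with one
    encryption/decryption method cannot be read back through a channel
    using another method; a predefined code is returned instead."""
    def __init__(self, predefined_code=0xDEADBEEF):
        self._cells = {}                 # address -> (data, method used to write)
        self._code = predefined_code
    def write(self, addr, data, method):
        self._cells[addr] = (data, method)
    def read(self, addr, method):
        data, written_with = self._cells[addr]
        if written_with != method:
            # Prevent the read: the location was written with a different
            # encryption/decryption method, so return the predefined code.
            return self._code
        return data
```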
  • Publication number: 20180018288
    Abstract: In one embodiment, an apparatus includes: at least one core to execute instructions, the at least one core formed on a semiconductor die; a first memory formed on the semiconductor die, the first memory comprising a non-volatile random access memory, the first memory to store a first entry to be a monotonic counter, the first entry including a value field and a status field; and a control circuit, wherein the control circuit is to enable access to the first entry if the apparatus is in a secure mode and otherwise prevent the access to the first entry. Other embodiments are described and claimed.
    Type: Application
    Filed: July 14, 2016
    Publication date: January 18, 2018
    Inventors: Prashant Dewan, Siddhartha Chhabra, David M. Durham, Karanvir S. Grewal, Alpa T. Narendra Trivedi
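The secure-mode gating of the counter entry can be sketched as follows; the field names and the exception-based refusal are illustrative assumptions rather than the claimed control circuit:

```python
class MonotonicCounter:
    """Sketch of the abstract's counter entry: a value field plus a status
    field, with access gated on the secure mode."""
    def __init__(self):
        self.value = 0
        self.status = "valid"
    def increment(self, secure_mode):
        if not secure_mode:
            # The control circuit prevents access outside the secure mode.
            raise PermissionError("counter entry accessible only in secure mode")
        self.value += 1                  # monotonic: only ever increases
        return self.value
```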
  • Publication number: 20180018289
    Abstract: The disclosure relates to a method for recognizing software applications and user inputs. Its underlying aim is to indicate a solution by which recognition of the software application currently used, as well as of the inputs that have been made, is made possible without additional interfaces provided by manufacturers of car operating systems. The aim is achieved in that the data decoded in the signal decoder are subjected to a first analysis in which, by means of data stored in a database, the decoded data are associated with a known software application.
    Type: Application
    Filed: July 28, 2017
    Publication date: January 18, 2018
    Inventors: Stephan Preussler, Alexander Van Laack, Stephan Kuhn
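The "first analysis" described above can be sketched as a signature lookup; the feature model (sets of strings) is a stand-in for whatever the signal decoder actually emits, and the database shape is assumed:

```python
def recognize_application(decoded_features, signature_db):
    """Sketch: associate decoded display data with a known software
    application by matching against signatures stored in a database."""
    features = set(decoded_features)
    for app, signature in signature_db.items():
        if signature <= features:        # every signature feature present
            return app
    return None                          # no known application recognized

db = {"navigation": {"map-tile", "route-bar"}}
```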
  • Publication number: 20180018290
    Abstract: Example implementations relate to a storage memory direct access (SMDA) provider. The SMDA provider may pin a storage memory region to a memory address of a consumer machine, the storage memory region corresponding to a storage range of a storage device requested by the consumer machine. The SMDA provider may atomically commit data in the storage memory region accessed by the consumer machine via the memory address.
    Type: Application
    Filed: September 23, 2015
    Publication date: January 18, 2018
    Inventors: Boris Zuckerman, Douglas L. Voigt, Suparna Bhattacharya
  • Publication number: 20180018291
    Abstract: In one form, a memory controller includes a command queue and an arbiter. The command queue receives and stores memory access requests. The arbiter includes a plurality of sub-arbiters for providing a corresponding plurality of sub-arbitration winners from among the memory access requests during a controller cycle, and for selecting among the plurality of sub-arbitration winners to provide a plurality of memory commands in a corresponding controller cycle. In another form, a data processing system includes a memory accessing agent for providing memory accesses requests, a memory system, and the memory controller coupled to the memory accessing agent and the memory system.
    Type: Application
    Filed: July 15, 2016
    Publication date: January 18, 2018
    Applicant: Advanced Micro Devices, Inc.
    Inventors: James R. Magro, Kedarnath Balakrishnan, Jackson Peng, Hideki Kanayama
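The two-level arbitration described above can be sketched as follows; the per-type sub-arbiter split and the oldest-first policies are placeholder assumptions, not the patented arbitration scheme:

```python
def arbitrate(requests, sub_arbiters, final_select):
    """Sketch: each sub-arbiter proposes a winner from the queued memory
    access requests, then one final winner is chosen among the
    sub-arbitration winners."""
    winners = [pick(requests) for pick in sub_arbiters]
    winners = [w for w in winners if w is not None]
    return final_select(winners) if winners else None

queue = [{"op": "rd", "age": 3}, {"op": "wr", "age": 5}, {"op": "rd", "age": 1}]
subs = [  # e.g. one sub-arbiter per command type, oldest request first
    lambda q: max((r for r in q if r["op"] == "rd"), key=lambda r: r["age"], default=None),
    lambda q: max((r for r in q if r["op"] == "wr"), key=lambda r: r["age"], default=None),
]
oldest_overall = lambda ws: max(ws, key=lambda r: r["age"])
```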
  • Publication number: 20180018292
    Abstract: Systems and methods are disclosed for resolving bus hang in a computing device. An exemplary system comprises a bus operating in accordance with an interface clock, and a controller in communication with the bus. The controller comprises a finite state machine, where the finite state machine is configured to receive a clock signal from the interface clock and a command signal originating external to the controller. The controller also comprises hang detection logic configured to receive one or more signals that the finite state machine is active, monitor the interface clock, and generate an event notification in response to the interface clock turning off while the finite state machine is active. The controller further comprises a trap handler in communication with the hang detection logic, the trap handler configured to send an interrupt in response to the event notification.
    Type: Application
    Filed: July 13, 2016
    Publication date: January 18, 2018
    Inventors: KIRAN KUMAR MALIPEDDI, YOSSI AMON, GRAHAM ROFF, CHRISTOPHER KONG YEE CHUN, RAJESH CHAVA
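The hang condition above reduces to one predicate; this sketch models it in software only, and the callback name and event string are hypothetical:

```python
def check_for_hang(fsm_active, interface_clock_on, raise_interrupt):
    """Sketch of the abstract's hang condition: the interface clock turning
    off while the finite state machine is still active produces an event
    notification that the trap handler turns into an interrupt."""
    if fsm_active and not interface_clock_on:
        raise_interrupt("bus-hang")      # event notification -> interrupt
        return True
    return False

events = []
```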
  • Publication number: 20180018293
    Abstract: A method, a controller, and a system for service flow control in an object-based storage system are disclosed. The method is: receiving, by a controller, a first object IO request; acquiring a processing quantity threshold and a to-be-processed quantity; if the to-be-processed quantity is less than the processing quantity threshold, sending the first object IO request to a storage device client, and updating the to-be-processed quantity; receiving a first response message replied by the storage device client for the first object IO request, where the first response message carries a processing result of the first object IO request; and adjusting the processing quantity threshold according to a received processing result of an object IO request when a preset condition is met. The storage device is not overloaded with object IO requests and can use all of its resources effectively, thereby improving the performance and success rate of the object-based storage system.
    Type: Application
    Filed: September 26, 2017
    Publication date: January 18, 2018
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Yanqun Tong
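The admission-control loop above can be sketched as follows; the adjustment policy (grow the threshold on success) is an assumption standing in for the abstract's unspecified "preset condition":

```python
class FlowController:
    """Sketch of the abstract's flow control: forward an object IO request
    only while the to-be-processed quantity is below the threshold."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.to_be_processed = 0
    def submit(self, request, send):
        if self.to_be_processed >= self.threshold:
            return False                 # back-pressure: not forwarded
        send(request)                    # forward to the storage device client
        self.to_be_processed += 1
        return True
    def on_response(self, success):
        self.to_be_processed -= 1
        if success:                      # assumed adjustment policy
            self.threshold += 1

fc, sent = FlowController(threshold=1), []
```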
  • Publication number: 20180018294
    Abstract: A data storage device is provided. The data storage device includes a first printed circuit board (PCB) comprising a main transmission line formed on at least one surface of the first PCB and/or within the first PCB, a memory controller and a plurality of nonvolatile memory devices. The memory controller is provided on the first PCB. The plurality of nonvolatile memory devices are provided on the first PCB. The plurality of nonvolatile memory devices are connected to the memory controller through a channel and exchange data with the memory controller. The channel includes a data transmission line connecting data pads of the memory controller and the nonvolatile memory devices. The data transmission line comprises the main transmission pattern and an open stub contacting the main transmission pattern. The open stub does not contact any conductor other than the main transmission pattern.
    Type: Application
    Filed: June 15, 2017
    Publication date: January 18, 2018
    Inventors: Ji-Woon PARK, Sun-Ki YUN, Kwang-Soo PARK
  • Publication number: 20180018295
    Abstract: The invention provides a method of operating a dock between a battery-operated device and a power adapter. The method includes: a) providing a USB dock with a first USB type-C connector and a second USB type-C connector, wherein the two USB type-C connectors are electrically connected through a switch module; b) connecting a first USB device to the first USB type-C connector; c) verifying whether the first USB device sends out a USB host command or not; d) setting the first USB type-C connector and the second USB type-C connector to serve as a host port and a device port, respectively, if yes in step c); and e) setting the first USB type-C connector and the second USB type-C connector to serve as a device port and a host port, respectively, if no in step c).
    Type: Application
    Filed: July 18, 2016
    Publication date: January 18, 2018
    Applicant: I/O INTERCONNECT, LTD.
    Inventors: Johnny Chen, Chih-Hsiung Chang, Tsung-Min Chen
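Steps (c) through (e) above amount to a single role decision, which can be sketched as below; the return shape is illustrative:

```python
def configure_dock_ports(first_device_sends_host_command):
    """Sketch of steps (c)-(e): check whether the device on the first USB
    Type-C connector issues a USB host command, then assign host/device
    roles to the two connectors accordingly."""
    if first_device_sends_host_command:                      # step (d)
        return {"first": "host port", "second": "device port"}
    return {"first": "device port", "second": "host port"}   # step (e)
```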
  • Publication number: 20180018296
    Abstract: A flow control protocol for an audio bus is disclosed. In an exemplary aspect, an audio source may push content to an audio sink over one or more cascaded audio bus segments or links. The audio source will send an indication to the audio sink that data is coming, and the audio sink should manipulate the data accordingly. Conversely, the audio source may also send an indication to the audio sink that, in a particular sample-event in a particular future sample-window, no data is present and the audio sink may disregard any bits in the corresponding sample-event. In another exemplary aspect, the audio sink may request data from the audio source with a receive ready indication. The audio source then sends data in a corresponding time slot for processing by the audio sink. Likewise, the audio sink may provide an indication that the audio sink cannot accept more data through a negative receive ready indication.
    Type: Application
    Filed: June 26, 2017
    Publication date: January 18, 2018
    Inventor: Lior Amarilio
  • Publication number: 20180018297
    Abstract: Embodiments include methods, systems, and computer program products for performing synchronous data I/O. Aspects include a processor of a computer system sending a store block to request data from a device through a PCIe connection, the requested data having a predetermined number of data blocks, and the processor executing a data transaction loop to retrieve the requested data. Executing the data transaction loop may include writing to a table prefetch trigger register on the host bridge to queue up speculative prefetches in the ETU for each data block. The host bridge may perform a first speculative prefetch to install a device table entry in a device table cache. The processor may further perform a second speculative prefetch to install an address translation in an address translation cache. The host bridge processes the data block received through direct memory access over the PCIe connection using the prefetched device table entry and address translation.
    Type: Application
    Filed: July 13, 2016
    Publication date: January 18, 2018
    Inventors: David F. Craddock, Matthias Klein, Eric N. Lais
  • Publication number: 20180018298
    Abstract: Methods and apparatus for a low energy accelerator processor architecture with short parallel instruction word. An integrated circuit includes a system bus having a data width N, where N is a positive integer; a central processor unit coupled to the system bus and configured to execute instructions retrieved from a memory coupled to the system bus; and a low energy accelerator processor coupled to the system bus and configured to execute instruction words retrieved from a low energy accelerator code memory, the low energy accelerator processor having a plurality of execution units including a load store unit, a load coefficient unit, a multiply unit, and a butterfly/adder ALU unit, each of the execution units configured to perform operations responsive to op-codes decoded from the retrieved instruction words, wherein the width of the instruction words is equal to the data width N. Additional methods and apparatus are disclosed.
    Type: Application
    Filed: September 25, 2017
    Publication date: January 18, 2018
    Inventors: Srinivas Lingam, Seok-Jun Lee, Johann Zipperer, Manish Goel