Patent Applications Published on January 12, 2017
  • Publication number: 20170010959
    Abstract: An approach for predictively scoring test case results in real-time. Test case results associated with a test run are received by a software testing environment. Using predictive statistical models, test case results and attribute relationships are matched against model rules and test case history. A statistical correlation and confidence parameter provide the ability to generate test case relationships for predicting the outcome of other test cases in the test run. The test case relationships are transformed into scoring results and output for further processing.
    Type: Application
    Filed: July 7, 2015
    Publication date: January 12, 2017
    Inventors: Kevin B. Smith, Andrew J. Thompson, David R. Waddling
  • Publication number: 20170010960
    Abstract: The memory control unit includes a descriptor fetch block suitable for fetching a descriptor from a volatile memory; an instruction fetch block suitable for fetching an instruction set from an instruction memory using address information, wherein the instruction fetch block obtains the address information from the instruction memory using index information included in the fetched descriptor; and a memory instruction generation block suitable for generating a memory instruction by combining a descriptor parameter value included in the fetched descriptor with the fetched instruction set.
    Type: Application
    Filed: September 17, 2015
    Publication date: January 12, 2017
    Inventors: Jae Hyeong JEONG, Joong Hyun AN, Kwang Hyun KIM, Jae Woo KIM
  • Publication number: 20170010961
    Abstract: A wear leveling method for a rewritable non-volatile memory module is provided. The method includes: recording a timestamp for each of the physical erasing units storing valid data according to a programming sequence of the physical erasing units storing valid data among the physical erasing units, and recording an erase count for each of the physical erasing units. The method also includes: selecting a first physical erasing unit from the physical erasing units storing valid data according to the timestamps, selecting a second physical erasing unit from the physical erasing units not storing valid data among the physical erasing units according to the erase counts, writing the valid data of the first physical erasing unit into the second physical erasing unit, and marking the first physical erasing unit as a physical erasing unit not storing valid data.
    Type: Application
    Filed: August 12, 2015
    Publication date: January 12, 2017
    Inventor: Kok-Yong Tan
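    Illustrative sketch (not part of the application): the selection logic described in the abstract above, written as minimal C. The unit count, field names, and data-copy helper are assumptions.

      #include <stdint.h>
      #include <stdbool.h>

      #define NUM_UNITS 1024

      typedef struct {
          bool     has_valid_data;   /* unit currently stores valid data */
          uint64_t timestamp;        /* recorded when the unit was programmed */
          uint32_t erase_count;      /* recorded for each physical erasing unit */
      } erase_unit_t;

      static erase_unit_t units[NUM_UNITS];

      /* First unit: the valid unit with the oldest timestamp. */
      static int pick_first_unit(void)
      {
          int src = -1;
          for (int i = 0; i < NUM_UNITS; i++)
              if (units[i].has_valid_data &&
                  (src < 0 || units[i].timestamp < units[src].timestamp))
                  src = i;
          return src;
      }

      /* Second unit: the free unit with the smallest erase count. */
      static int pick_second_unit(void)
      {
          int dst = -1;
          for (int i = 0; i < NUM_UNITS; i++)
              if (!units[i].has_valid_data &&
                  (dst < 0 || units[i].erase_count < units[dst].erase_count))
                  dst = i;
          return dst;
      }

      /* One wear-leveling step: move valid data from the first unit to the
       * second unit, then mark the first unit as not storing valid data. */
      void wear_level_step(void)
      {
          int src = pick_first_unit();
          int dst = pick_second_unit();
          if (src < 0 || dst < 0)
              return;
          /* copy_unit_data(src, dst);  -- hypothetical data movement */
          units[dst].has_valid_data = true;
          units[src].has_valid_data = false;
      }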
  • Publication number: 20170010962
    Abstract: A host device is provided. The host device includes a processor and an interface. The processor generates a physical block address and a solid state disk (SSD) identification code according to a logical block address of an access operation. The interface is coupled to the processor. The processor indicates one of a plurality of SSDs through the interface according to the SSD identification code to access data at the physical block address.
    Type: Application
    Filed: July 5, 2016
    Publication date: January 12, 2017
    Inventors: Xueshi YANG, Ningzhong MIAO
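    Illustrative sketch (not part of the application): one plausible way a host could derive an SSD identification code and physical block address from a logical block address is plain striping; the abstract does not specify the actual mapping function.

      #include <stdint.h>
      #include <stdio.h>

      #define NUM_SSDS 4   /* hypothetical number of attached SSDs */

      static void map_lba(uint64_t lba, uint32_t *ssd_id, uint64_t *pba)
      {
          *ssd_id = (uint32_t)(lba % NUM_SSDS);  /* SSD identification code */
          *pba    = lba / NUM_SSDS;              /* physical block address on that SSD */
      }

      int main(void)
      {
          uint32_t id;
          uint64_t pba;
          map_lba(1027, &id, &pba);   /* e.g. LBA 1027 -> SSD 3, PBA 256 */
          printf("SSD %u, PBA %llu\n", id, (unsigned long long)pba);
          return 0;
      }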
  • Publication number: 20170010963
    Abstract: A method, information processing system, and computer readable storage medium vary a maximum heap memory size for one application of a plurality of applications based on monitoring garbage collection activity levels for the plurality of applications, each application including a heap memory, and unused memory in the heap memory being reclaimed by a garbage collector.
    Type: Application
    Filed: September 22, 2016
    Publication date: January 12, 2017
    Applicant: International Business Machines Corporation
    Inventors: Norman BOBROFF, Arun IYENGAR, Peter WESTERINK
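    Illustrative sketch (not part of the application): the control idea of varying a per-application maximum heap size from monitored garbage collection activity. The thresholds, step sizes, and names are assumptions, not the patented policy.

      #include <stddef.h>

      typedef struct {
          size_t max_heap_bytes;    /* current heap ceiling for one application */
          double gc_time_fraction;  /* monitored fraction of time spent in GC */
      } app_heap_t;

      /* Grow the ceiling when GC activity is well above a target level,
       * shrink it when GC activity is well below the target. */
      void adjust_max_heap(app_heap_t *app, double target_gc_fraction)
      {
          if (app->gc_time_fraction > 2.0 * target_gc_fraction)
              app->max_heap_bytes += app->max_heap_bytes / 4;   /* grow by 25% */
          else if (app->gc_time_fraction < 0.5 * target_gc_fraction)
              app->max_heap_bytes -= app->max_heap_bytes / 8;   /* shrink by 12.5% */
      }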
  • Publication number: 20170010964
    Abstract: Embodiments are disclosed for replacing one or more pages of a memory to level wear on the memory. In one embodiment, a system includes a page fault handling function and a memory address mapping function. Upon receipt of a page fault, the page fault handling function maps an evicted virtual memory address to a stressed page and maps a stressed virtual memory address to a free page using the memory address mapping function.
    Type: Application
    Filed: July 11, 2016
    Publication date: January 12, 2017
    Inventors: Trung Diep, Eric Linstadt
  • Publication number: 20170010965
    Abstract: A computing system performs an environment-aware cache flushing method. When a processor in the system receives a signal to flush at least a portion of the caches to the system memory, the processor determines a flushing mechanism among multiple candidate flushing mechanisms. The processor also determines one or more of the active processors in the system for performing the flushing mechanism. The determinations are based on the extent of flushing indicated in the signal and a runtime environment that includes the number of active processors. The system then flushes the caches to the system memory according to the flushing mechanism.
    Type: Application
    Filed: November 9, 2015
    Publication date: January 12, 2017
    Inventors: Chia-Hao HSU, Fan-Lei LIAO, Shun-Chih YU
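    Illustrative sketch (not part of the application): selecting a flushing mechanism from the extent of the flush and the number of active processors. The candidate mechanisms and thresholds here are assumptions.

      #include <stddef.h>

      typedef enum {
          FLUSH_BY_ADDRESS,   /* flush only the requested address range */
          FLUSH_BY_SET_WAY,   /* walk the cache by set/way on one processor */
          FLUSH_PARALLEL      /* split a full flush across the active processors */
      } flush_mech_t;

      flush_mech_t choose_flush_mechanism(size_t flush_bytes, size_t cache_bytes,
                                          int active_cpus)
      {
          if (flush_bytes < cache_bytes / 2)       /* small extent: targeted flush */
              return FLUSH_BY_ADDRESS;
          return (active_cpus > 1) ? FLUSH_PARALLEL : FLUSH_BY_SET_WAY;
      }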
  • Publication number: 20170010966
    Abstract: Systems and methods that facilitate reduced latency via stashing in multi-level cache memory architectures of systems on chips (SoCs) are provided. One method involves stashing, by a device that includes a plurality of multi-processor central processing unit cores, first data into a first cache memory of a plurality of cache memories, the plurality of cache memories being associated with a multi-level cache memory architecture. The method also includes generating control information including: a first instruction to cause monitoring contents of a second cache memory of the plurality of cache memories to determine whether a defined condition is satisfied for the second cache memory; and a second instruction to cause prefetching the first data into the second cache memory of the plurality of cache memories based on a determination that the defined condition is satisfied.
    Type: Application
    Filed: July 10, 2015
    Publication date: January 12, 2017
    Inventor: Millind Mittal
  • Publication number: 20170010967
    Abstract: A method is shown that eliminates the need for a dedicated reorder buffer register bank or memory space in a multi level cache system. As data requests from the L2 cache may be returned out of order, the L1 cache uses its cache memory to buffer the out of order data and provides the data to the requesting processor in the correct order from the buffer.
    Type: Application
    Filed: September 20, 2016
    Publication date: January 12, 2017
    Inventors: Ramakrishnan Venkatasubramanian, Oluleye Olorode, Hung Ong
  • Publication number: 20170010968
    Abstract: The present technology relates to managing data caching in processing nodes of a massively parallel processing (MPP) database system. A directory is maintained that includes a list and a storage location of the data pages in the MPP database system. Memory usage is monitored in processing nodes by exchanging memory usage information with each other. Each of the processing nodes manages a list and a corresponding amount of available memory in each of the processing nodes based on the memory usage information. Data pages are read from a memory of the processing nodes in response to receiving a request to fetch the data pages, and a remote memory manager is queried for available memory in each of the processing nodes in response to receiving the request. The data pages are distributed to the memory of the processing nodes having sufficient space available for storage during data processing.
    Type: Application
    Filed: July 8, 2015
    Publication date: January 12, 2017
    Inventors: Huaizhi Li, Qingqing Zhou, Guogen Zhang
  • Publication number: 20170010969
    Abstract: In a method for processing cache data of a computing device, a storage space of the storage device is divided into sections, and a section number of each data block in the storage device is determined according to the section in the storage device to which the data block belongs. A field is added for each data block in the storage device to record the section number of that data block. When the cache data in the cache memory needs to be written back to the storage device, cache data with the same section number is selected from all of the cache data in the cache memory and written back to the corresponding section in the storage device.
    Type: Application
    Filed: July 8, 2015
    Publication date: January 12, 2017
    Inventors: CHUN-HSIEH CHIU, HSIANG-TING CHENG
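    Illustrative sketch (not part of the application): tagging each cached block with the section number of the storage-device section it belongs to, and writing back every dirty block that shares a section in one pass. Sizes and names are assumptions.

      #include <stdint.h>
      #include <stdbool.h>

      #define SECTION_BLOCKS 4096u   /* blocks per storage-device section (illustrative) */
      #define CACHE_ENTRIES  256

      typedef struct {
          bool     dirty;
          uint64_t block_addr;   /* block address on the storage device */
          uint32_t section;      /* added field: section number of the block */
      } cache_entry_t;

      static cache_entry_t cache[CACHE_ENTRIES];

      static uint32_t section_of(uint64_t block_addr)
      {
          return (uint32_t)(block_addr / SECTION_BLOCKS);
      }

      /* Record a block in the cache, filling in the added section-number field. */
      void cache_insert(int idx, uint64_t block_addr)
      {
          cache[idx].block_addr = block_addr;
          cache[idx].section    = section_of(block_addr);
          cache[idx].dirty      = true;
      }

      /* When write-back is required, write out all cached data that carries the
       * given section number so one section of the device is written together. */
      void write_back_section(uint32_t section, void (*write_block)(uint64_t))
      {
          for (int i = 0; i < CACHE_ENTRIES; i++)
              if (cache[i].dirty && cache[i].section == section) {
                  write_block(cache[i].block_addr);
                  cache[i].dirty = false;
              }
      }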
  • Publication number: 20170010970
    Abstract: The disclosed embodiments relate to a system that generates prefetches for a stream of data accesses with multiple strides. During operation, while a processor is generating the stream of data accesses, the system examines a sequence of strides associated with the stream of data accesses. Next, upon detecting a pattern having a single constant stride in the examined sequence of strides, the system issues prefetch instructions to prefetch a sequence of data cache lines consistent with the single constant stride. Similarly, upon detecting a recurring pattern having two or more different strides in the examined sequence of strides, the system issues prefetch instructions to prefetch a sequence of data cache lines consistent with the recurring pattern having two or more different strides.
    Type: Application
    Filed: July 8, 2015
    Publication date: January 12, 2017
    Applicant: ORACLE INTERNATIONAL CORPORATION
    Inventor: Yuan C. Chou
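    Illustrative sketch (not part of the application): detecting a single constant stride versus a recurring two-stride pattern in the recent sequence of strides, then issuing prefetches consistent with the detected pattern. The window size, prefetch depth, and issue hook are assumptions.

      #include <stdint.h>
      #include <stdbool.h>

      #define WINDOW 6   /* number of recent strides examined (illustrative) */

      /* True if every examined stride equals the first one. */
      static bool constant_stride(const int64_t s[WINDOW], int64_t *stride)
      {
          for (int i = 1; i < WINDOW; i++)
              if (s[i] != s[0])
                  return false;
          *stride = s[0];
          return true;
      }

      /* True if the strides alternate a, b, a, b, ... with a != b. */
      static bool two_stride_pattern(const int64_t s[WINDOW], int64_t *a, int64_t *b)
      {
          for (int i = 2; i < WINDOW; i++)
              if (s[i] != s[i - 2])
                  return false;
          if (s[0] == s[1])
              return false;
          *a = s[0];
          *b = s[1];
          return true;
      }

      /* issue_prefetch() is a hypothetical hook that prefetches one cache line. */
      void maybe_prefetch(uint64_t last_addr, const int64_t strides[WINDOW],
                          void (*issue_prefetch)(uint64_t))
      {
          int64_t a, b;
          if (constant_stride(strides, &a)) {
              for (int i = 1; i <= 4; i++)                    /* single constant stride */
                  issue_prefetch(last_addr + (uint64_t)(i * a));
          } else if (two_stride_pattern(strides, &a, &b)) {   /* recurring two-stride pattern */
              issue_prefetch(last_addr + (uint64_t)a);
              issue_prefetch(last_addr + (uint64_t)(a + b));
              issue_prefetch(last_addr + (uint64_t)(2 * a + b));
          }
      }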
  • Publication number: 20170010971
    Abstract: A method includes, in a processor, processing program code that includes memory-access instructions, wherein at least some of the memory-access instructions include symbolic expressions that specify memory addresses in an external memory in terms of one or more register names. Based on respective formats of the memory addresses specified in the symbolic expressions, a sequence of load instructions that access a predictable pattern of memory addresses in the external memory is identified. At least one cache line that includes a plurality of data values is retrieved from the external memory. Based on the predictable pattern, two or more of the data values that are requested by respective load instructions in the sequence are saved from the cache line to the internal memory. The saved data values are assigned to be served from the internal memory to one or more instructions that depend on the respective load instructions.
    Type: Application
    Filed: July 9, 2015
    Publication date: January 12, 2017
    Inventors: Noam Mizrahi, Jonathan Friedmann
  • Publication number: 20170010972
    Abstract: A method includes, in a processor, processing program code that includes memory-access instructions, wherein at least some of the memory-access instructions include symbolic expressions that specify memory addresses in an external memory in terms of one or more register names. At least first and second load instructions that access a same memory address in the external memory are identified in the program code, based on respective formats of the memory addresses specified in the symbolic expressions of the load instructions. An outcome of at least one of the load instructions is assigned to be served from an internal memory in the processor.
    Type: Application
    Filed: July 9, 2015
    Publication date: January 12, 2017
    Inventors: Noam Mizrahi, Jonathan Friedmann
  • Publication number: 20170010973
    Abstract: A method includes, in a processor, processing program code that includes memory-access instructions, wherein at least some of the memory-access instructions include symbolic expressions that specify memory addresses in an external memory in terms of one or more register names. At least a store instruction and a subsequent load instruction that access the same memory address in the external memory are identified, based on respective formats of the memory addresses specified in the symbolic expressions. An outcome of at least one of the memory-access instructions is assigned to be served to one or more instructions that depend on the load instruction, from an internal memory in the processor.
    Type: Application
    Filed: July 9, 2015
    Publication date: January 12, 2017
    Inventors: Noam Mizrahi, Jonathan Friedmann
  • Publication number: 20170010974
    Abstract: Method and apparatus to efficiently manage data in caches. Data in caches may be managed based on priorities assigned to the data. Data may be requested by a process using a virtual address of the data. The requested data may be assigned a priority by a component in a computer system called an address range priority assigner (ARP). The ARP may assign a particular priority to the requested data if the virtual address of the requested data is within a particular range of virtual addresses. The particular priority assigned may be high priority and the particular range of virtual addresses may be smaller than a cache's capacity.
    Type: Application
    Filed: September 26, 2016
    Publication date: January 12, 2017
    Inventors: Simon Steely, JR., Samantika S. Sury, William C. Hasenplaugh
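    Illustrative sketch (not part of the application): an address range priority assigner (ARP) that marks a request as high priority when its virtual address falls inside a configured range smaller than the cache capacity. The range and names are assumptions.

      #include <stdint.h>

      typedef enum { PRIO_NORMAL = 0, PRIO_HIGH = 1 } cache_prio_t;

      /* Hypothetical high-priority range: a 1 MiB window of virtual addresses. */
      static const uint64_t hi_prio_start = 0x7f0000000000ULL;
      static const uint64_t hi_prio_end   = 0x7f0000100000ULL;

      cache_prio_t arp_assign(uint64_t vaddr)
      {
          return (vaddr >= hi_prio_start && vaddr < hi_prio_end) ? PRIO_HIGH
                                                                 : PRIO_NORMAL;
      }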
  • Publication number: 20170010975
    Abstract: A nonvolatile memory system is described with novel architecture coupling nonvolatile storage memory with random access volatile memory. New commands are included to enhance the read and write performance of the memory system.
    Type: Application
    Filed: February 8, 2016
    Publication date: January 12, 2017
    Inventor: G.R. Mohan Rao
  • Publication number: 20170010976
    Abstract: A system may include a memory that includes a plurality of pages, a processor, and a translation lookaside buffer (TLB) that includes a plurality of entries. The processor may be configured to access data from a subset of the plurality of pages dependent upon a first virtual address. The TLB may be configured to compare the first virtual address to respective address information included in each entry of the plurality of entries. The TLB may be further configured to add a new entry to the plurality of entries in response to a determination that the first virtual address fails to match the respective address information included in each entry of the plurality of entries. The new entry may include address information corresponding to at least two pages of the subset of the plurality of pages.
    Type: Application
    Filed: July 8, 2015
    Publication date: January 12, 2017
    Inventor: Yuan Chou
  • Publication number: 20170010977
    Abstract: Systems and methods for accessing a unified translation lookaside buffer (TLB) are disclosed. A method includes receiving an indicator of a level one translation lookaside buffer (L1TLB) miss corresponding to a request for a virtual address to physical address translation, searching a cache that includes virtual addresses and page sizes that correspond to translation table entries (TTEs) that have been evicted from the L1TLB, where a page size is identified, and searching a second level TLB and identifying a physical address that is contained in the second level TLB. Access is provided to the identified physical address.
    Type: Application
    Filed: September 26, 2016
    Publication date: January 12, 2017
    Inventors: Karthikeyan Avudaiyappan, Mohammad Abdallah
  • Publication number: 20170010978
    Abstract: Methods and systems are provided for fork-safe memory allocation from memory-mapped files. A child process may be provided a memory mapping at a same virtual address as a parent process, but the memory mapping may map the virtual address to a different location within a file than for the parent process.
    Type: Application
    Filed: September 26, 2016
    Publication date: January 12, 2017
    Inventors: Timothy A. Stabrawa, Andrew S. Poling, Zachary A. Cornelius, Jesse I. Taylor, John Overton
  • Publication number: 20170010979
    Abstract: In a method for managing memory pages, responsive to determining that a server is experiencing memory pressure, one or more processors identify a first memory page in a listing of memory pages in the server. The method further includes determining whether the first memory page corresponds to a logical partition (LPAR) of the server that is scheduled to undergo an operation to migrate data stored on memory pages of the LPAR to another server. The method further includes, responsive to determining that the first memory page does correspond to an LPAR of the server that is scheduled to undergo an operation to migrate data, determining whether to evict the first memory page based on a memory page state associated with the first memory page. The method further includes, responsive to determining to evict the first memory page, evicting data stored in the first memory page to a paging space.
    Type: Application
    Filed: September 29, 2016
    Publication date: January 12, 2017
    Inventors: Keerthi B. Kumar, Swetha N. Rao
  • Publication number: 20170010980
    Abstract: A circuit is provided for protecting memory address data. The circuit may include an input data bus configured to receive write data to be written to a memory device, and an address bus configured to receive a corresponding write address. The circuit may also include an output data bus, and an address protection circuit coupled to the input data, address, and output data buses and configured to generate an address protection value based on the corresponding write address, and generate modified write data on the output data bus. The modified write data includes the write data and the address protection value. The output data bus may have a width greater than a width of the input data bus.
    Type: Application
    Filed: February 29, 2016
    Publication date: January 12, 2017
    Inventors: Eric BERNASCONI, Richard O'CONNOR
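    Illustrative sketch (not part of the application): one plausible address protection value is a checksum of the write address, carried next to the data word on an output bus wider than the input bus. The abstract does not specify the protection function, so the XOR fold below is only an example.

      #include <stdint.h>

      /* 8-bit XOR fold of a 32-bit write address (hypothetical protection value). */
      static uint8_t addr_protect(uint32_t write_addr)
      {
          uint8_t p = 0;
          for (int i = 0; i < 4; i++)
              p ^= (uint8_t)(write_addr >> (8 * i));
          return p;
      }

      /* Modified write data: the original write data plus the protection value,
       * placed on a wider output bus (here 32-bit data widened to 64 bits). */
      uint64_t make_modified_write(uint32_t write_data, uint32_t write_addr)
      {
          return ((uint64_t)addr_protect(write_addr) << 32) | write_data;
      }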
  • Publication number: 20170010981
    Abstract: A memory circuit using resistive random access memory (ReRAM) arrays in a secure element. The ReRAM arrays can be configured as content addressable memories (CAMs) or random access memories (RAMs) on the same die, with the control circuitry for performing comparisons of reference patterns and input patterns located outside of the ReRAM arrays. By having ReRAM arrays configured as CAMs and RAMs on the same die, certain reference patterns can be stored in CAMs and others in RAMs depending on security needs. For additional security, a heater can be used to erase reference patterns in the ReRAM arrays when desired.
    Type: Application
    Filed: September 29, 2015
    Publication date: January 12, 2017
    Inventor: Bertrand F. Cambou
  • Publication number: 20170010982
    Abstract: In an aspect, a cache memory device receives a request to read an instruction or data associated with a memory device. The request includes a first realm identifier and a realm indicator bit, where the first realm identifier enables identification of a realm that includes one or more selected regions in the memory device. The cache memory device determines whether the first realm identifier matches a second realm identifier in a cache tag when the instruction or data is stored in the cache memory device, where the instruction or data stored in the cache memory device has been decrypted based on an ephemeral encryption key associated with the second realm identifier when the first realm identifier indicates the realm and when the realm indicator bit is enabled. The cache memory device transmits the instruction or data when the first realm identifier matches the second realm identifier.
    Type: Application
    Filed: March 15, 2016
    Publication date: January 12, 2017
    Inventors: Roberto Avanzi, David Hartley, Rosario Cammarota
  • Publication number: 20170010983
    Abstract: An input device identifying a computer system includes: a memory, for storing a configuration descriptor, an interface descriptor, a human interface device descriptor and an end point descriptor; and a microprocessor, where when the input device is connected to a universal serial bus interface of the computer system, the microprocessor performs transmission of a communications protocol with the computer system by using multiple descriptors stored in the memory including the configuration descriptor, the interface descriptor, the human interface device descriptor, and the end point descriptor, and the microprocessor determines a type of the computer system according to a feature parameter associated with the multiple descriptors in the communications protocol. The present invention also provides an identification method of an input device identifying a computer system.
    Type: Application
    Filed: February 17, 2016
    Publication date: January 12, 2017
    Inventor: Yuan Jung CHANG
  • Publication number: 20170010984
    Abstract: The embodiments of the present disclosure identify a target chip from among multiple chips coupled to a shared bus and customize an optimization parameter for the particular chip. Stated differently, in a communication system where only one chip (or a subset of chips) on a shared bus is the intended target, the system can customize an optimization parameter for the specific location of the target chip on the bus. As new data is received that is intended for a different chip—i.e., the target chip changes—the system can dynamically change the parameter based on the location of the new target chip on the bus.
    Type: Application
    Filed: August 24, 2015
    Publication date: January 12, 2017
    Inventors: Layne A. BERGE, Benjamin A. FOX, Wesley D. MARTIN, George R. ZETTLES, IV
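    Illustrative sketch (not part of this application or the related publications below): keeping a per-chip table of a tuned bus parameter and reapplying it whenever the target chip changes. The parameter meaning, table contents, and hook are assumptions.

      #include <stdint.h>

      #define NUM_CHIPS 8

      /* Hypothetical per-chip setting tuned for each chip's position on the
       * shared bus (for example, a drive-strength or equalization code). */
      static uint8_t tuned_param[NUM_CHIPS] = { 3, 3, 4, 5, 5, 6, 7, 7 };

      static int current_target = -1;

      /* Called whenever new data is addressed to a (possibly different) chip. */
      void select_target_chip(int chip_id, void (*apply_param)(uint8_t))
      {
          if (chip_id != current_target) {
              current_target = chip_id;
              apply_param(tuned_param[chip_id]);   /* dynamically retune the bus */
          }
      }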
  • Publication number: 20170010985
    Abstract: The embodiments of the present disclosure identify a target chip from among multiple chips coupled to a shared bus and customize an optimization parameter for the particular chip. Stated differently, in a communication system where only one chip (or a subset of chips) on a shared bus is the intended target, the system can customize an optimization parameter for the specific location of the target chip on the bus. As new data is received that is intended for a different chip—i.e., the target chip changes—the system can dynamically change the parameter based on the location of the new target chip on the bus.
    Type: Application
    Filed: August 24, 2015
    Publication date: January 12, 2017
    Inventors: Layne A. BERGE, Benjamin A. FOX, Wesley D. MARTIN, George R. ZETTLES, IV
  • Publication number: 20170010986
    Abstract: A resource arbiter in a system with multiple shared resources and multiple requestors may implement an adaptive resource management approach that takes advantage of time-varying requirements for granting access to at least some of the shared resources. For example, due to pipelining, signal timing issues, or a lack of information, more resources than are required to perform a task may need to be available for allocation to a requestor before its request for the needed resources is granted. The requestor may request only the resources it needs, relying on the arbiter to determine whether additional resources are required in order to grant the request. The arbiter may park a high priority requestor on idle resources, thus allowing requests for those resources by the high priority requestor to be granted on the first clock cycle of a request. Other requests may not be granted until at least a second clock cycle.
    Type: Application
    Filed: July 9, 2015
    Publication date: January 12, 2017
    Inventor: John Deane Coddington
  • Publication number: 20170010987
    Abstract: The embodiments of the present disclosure identify a target chip from among multiple chips coupled to a shared bus and customize an optimization parameter for the particular chip. Stated differently, in a communication system where only one chip (or a subset of chips) on a shared bus is the intended target, the system can customize an optimization parameter for the specific location of the target chip on the bus. As new data is received that is intended for a different chip—i.e., the target chip changes—the system can dynamically change the parameter based on the location of the new target chip on the bus.
    Type: Application
    Filed: July 8, 2015
    Publication date: January 12, 2017
    Inventors: Layne A. BERGE, Benjamin A. FOX, Wesley D. MARTIN, George R. ZETTLES, IV
  • Publication number: 20170010988
    Abstract: An activation method of a universal serial bus (USB) compatible flash device is disclosed, wherein the USB compatible flash device includes a controller and a pair of signal pins, and the controller includes a memory and a microprocessor. The activation method includes when the USB compatible flash device is coupled to a host, the pair of signal pins receiving a pair of predetermined signals, and transmitting the pair of predetermined signals to the microprocessor, wherein the pair of signal pins are different from a power line pin and a ground pin of the USB compatible flash device; when the microprocessor receives the pair of predetermined signals through the pair of signal pins, the microprocessor determining that a force event occurs; and after the microprocessor determines that the force event occurs, the microprocessor activating the USB compatible flash device according to an original activation program stored in the memory.
    Type: Application
    Filed: October 23, 2015
    Publication date: January 12, 2017
    Inventor: Hsuan-Ching Chao
  • Publication number: 20170010989
    Abstract: A control circuit of a memory device feeds a first clock received from a transmission control circuit of a host device back to a reception control circuit of the host device as a second clock. The reception control circuit controls data reception from the memory device in synchronization with the fed-back second clock.
    Type: Application
    Filed: July 8, 2016
    Publication date: January 12, 2017
    Applicant: MegaChips Corporation
    Inventor: Takahiko SUGAHARA
  • Publication number: 20170010990
    Abstract: An embedded storage device for use with a computer device is provided. The embedded storage device includes a microprocessor, a master storage unit, a slave storage unit, and a relay bus. The microprocessor provides a command signal and creates a data transmission link to the computer device. The master storage unit has at least a master data pin and a master control pin. The master control pin receives a command signal from the microprocessor. The slave storage unit has at least a slave data pin. The relay bus is coupled to the master storage unit and the slave storage unit to transmit the command signal from the master storage unit to the slave storage unit.
    Type: Application
    Filed: September 21, 2016
    Publication date: January 12, 2017
    Inventor: Lian Chun LEE
  • Publication number: 20170010991
    Abstract: In one embodiment, the present invention includes a processor that has an on-die storage such as a static random access memory to store an architectural state of one or more threads that are swapped out of architectural state storage of the processor on entry to a system management mode (SMM). In this way communication of this state information to a system management memory can be avoided, reducing latency associated with entry into SMM. Embodiments may also enable the processor to update a status of executing agents that are either in a long instruction flow or in a system management interrupt (SMI) blocked state, in order to provide an indication to agents inside the SMM. Other embodiments are described and claimed.
    Type: Application
    Filed: September 20, 2016
    Publication date: January 12, 2017
    Inventors: Mahesh Natu, Thanunathan Rangarajan, Gautam Doshi, Shamanna M. Datta, Baskaran Ganesan, Mohan J. Kumar, Rajesh S. Parthasarathy, Frank Binns, Rajesh Nagaraja Murthy, Robert C. Swanson
  • Publication number: 20170010992
    Abstract: Disclosed herein is a technique for maintaining a responsive user interface for a user while preserving battery life of a user device by dynamically determining the interrupt rate/interrupt time at the user device. Based on priority tier information associated with the I/O requests along with the directionality and size of the I/O requests, a determination can be made regarding how the interrupt rate/interrupt time can be adjusted to achieve acceptable user interface (UI) responsiveness and maximum power savings.
    Type: Application
    Filed: July 10, 2015
    Publication date: January 12, 2017
    Inventors: Christopher J. SARCONE, Manoj K. RADHAKRISHNAN, Etai ZALTSMAN
  • Publication number: 20170010993
    Abstract: In one example in accordance with the present disclosure, a computing system is provided. The computing system includes a first bus controller to control X bus lanes, a second bus controller to control Y bus lanes, a 2-to-1 X lane multiplexer, and a Y lane system component, where Y>X>0. X lanes from the first bus controller are coupled to the 2-to-1 X lane multiplexer. X lanes from the second bus controller are coupled to the 2-to-1 X lane multiplexer, and Y-X lanes from the second bus controller are coupled directly to the Y lane system component. In addition, X lanes from the 2-to-1 X lane multiplexer are coupled to the Y lane system component.
    Type: Application
    Filed: February 28, 2014
    Publication date: January 12, 2017
    Inventors: Roger A. PEARSON, Raphael GAY
  • Publication number: 20170010994
    Abstract: A universal input/output circuit for building automation is provided that may avoid issues related to capacitor soakage, thereby giving more accurate measurements of electrical resistance. To mitigate capacitor soakage, the voltage between the input/output terminals is held constant. A programmable source drives a current through a resistor that connects to the input/output terminals. The circuit then measures a value of electrical resistance. The measurement yields a voltage signal which is transferred from the input of an analog-to-digital converter to the input of a digital-to-analog converter. A unity gain amplifier applies the output voltage of the digital-to-analog converter (D/A) to one of the terminals. The circuit is configured such that the voltage signal at the output of the amplifier matches or substantially matches the voltage obtained from the resistance measurement.
    Type: Application
    Filed: July 1, 2016
    Publication date: January 12, 2017
    Applicant: Siemens Schweiz AG
    Inventor: Walter Stoll
  • Publication number: 20170010995
    Abstract: Obtaining data about a peripheral device deployed in a computing environment. A method includes transmitting a primary data stream across a shared communication channel between the peripheral device and a host hosting the peripheral device. The method further includes transmitting on the shared communication channel, a secondary state information stream of consecutively occurring messages with peripheral device state information.
    Type: Application
    Filed: July 10, 2015
    Publication date: January 12, 2017
    Inventors: Christopher James Robinson, Laura Marie Caulfield, Brian Charles Coyne, Mukesh Cooblal, Mark Alan Santaniello
  • Publication number: 20170010996
    Abstract: A computer I/O port system includes a CPU, a plurality of switches and a plurality of I/O ports connecting with each other by PCI Express buses. A first switch receives a pair of PCI Express signals from the CPU, converts them into a plurality of pairs of PCI Express signals and sends one pair thereof to a first I/O port and another pair thereof to a second switch. The second switch converts the other pair of PCI Express signals into a plurality of pairs of PCI Express signals and sends one pair thereof to a second I/O port and another pair to a third switch, and so on until a last switch converts the pair of PCI Express signals it receives into a plurality of pairs of PCI Express signals and sends one pair thereof to a last I/O port. The I/O ports are USB ports.
    Type: Application
    Filed: August 14, 2015
    Publication date: January 12, 2017
    Inventors: SONG MA, MENG-LIANG YANG
  • Publication number: 20170010997
    Abstract: A USB hub device includes an upstream port and a downstream port. A USB control circuit of the USB hub device includes an upstream interface; a downstream interface; a first switch circuit for coupling with the upstream port; a second switch circuit for coupling with the downstream port; a control signal transmission interface coupled with the first switch circuit and the second switch circuit; and a control unit, coupled with the control signal transmission interface, configured to operably control the first switch circuit and the second switch circuit through the control signal transmission interface, so that the first switch circuit selectively couples one of the upstream interface and the second switch circuit with the upstream port, while the second switch circuit selectively couples one of the downstream interface and the first switch circuit with the downstream port.
    Type: Application
    Filed: June 21, 2016
    Publication date: January 12, 2017
    Applicant: Realtek Semiconductor Corp.
    Inventors: Neng-Hsien LIN, Luo-Bin WANG, Chong LIU, Jian-Jhong ZENG
  • Publication number: 20170010998
    Abstract: A gateway apparatus (103) receives a request frame transmitted from a system controller (101) to an indoor unit 1 (111), and determines whether a sensor specified in the request frame is a sensor 1-A (121) which is connected with the indoor unit 1 (111) or a sensor 1-B (123) which is not connected with the indoor unit 1 (111). When the sensor specified in the request frame is the sensor 1-B (123), the gateway apparatus (103) acquires a measurement value from the sensor 1-B (123), and responds to the system controller (101) with the acquired measurement value of the sensor 1-B (123).
    Type: Application
    Filed: February 5, 2014
    Publication date: January 12, 2017
    Applicant: Mitsubishi Electric Corporation
    Inventors: Masanori HASHIMOTO, Yasuomi ANDO
  • Publication number: 20170010999
    Abstract: The embodiments of the present disclosure identify a target chip from among multiple chips coupled to a shared bus and customize an optimization parameter for the particular chip. Stated differently, in a communication system where only one chip (or a subset of chips) on a shared bus is the intended target, the system can customize an optimization parameter for the specific location of the target chip on the bus. As new data is received that is intended for a different chip—i.e., the target chip changes—the system can dynamically change the parameter based on the location of the new target chip on the bus.
    Type: Application
    Filed: August 24, 2015
    Publication date: January 12, 2017
    Inventors: Layne A. BERGE, Benjamin A. FOX, Wesley D. MARTIN, George R. ZETTLES, IV
  • Publication number: 20170011000
    Abstract: The disclosure relates to a participant station for a bus system and to a method for increasing the data rate of a bus system. The participant station comprises a device for receiving a message from at least one other participant station of the bus system via the bus system. In the bus system, an exclusive collision-free access of a participant station to a bus line of the bus system is ensured at least temporarily. The participant station also comprises a testing device for testing whether or not the received message is specified for the participant station and an error processing device for processing errors of the received message only when the test carried out by the testing device indicates that the received message is specified for the participant station.
    Type: Application
    Filed: January 22, 2015
    Publication date: January 12, 2017
    Inventors: Ralf Machauer, Simon Weissenmayer
  • Publication number: 20170011001
    Abstract: A USB control circuit of a USB hub device includes: an upstream MAC-layer circuit; a downstream MAC-layer circuit; a first USB PHY-layer circuit; a second USB PHY-layer circuit; a first switch circuit for communicating data with an upstream port through the first USB PHY-layer circuit; a second switch circuit for communicating data with a downstream port through the second USB PHY-layer circuit; a control signal transmission interface; a signal repeater circuit; and a control unit configured to operably control the first switch circuit and the second switch circuit through the control signal transmission interface, so that the first switch circuit selectively couples the upstream MAC-layer circuit or the signal repeater circuit with the first USB PHY-layer circuit, while the second switch circuit selectively couples the downstream MAC-layer circuit or the signal repeater circuit with the second USB PHY-layer circuit.
    Type: Application
    Filed: June 23, 2016
    Publication date: January 12, 2017
    Applicant: Realtek Semiconductor Corp.
    Inventors: Chong LIU, Luo-Bin WANG, Jian-Jhong ZENG, Neng-Hsien LIN
  • Publication number: 20170011002
    Abstract: A peripheral component interconnect express (PCIe) card may include a base card, a mezzanine card and mezz connectors. The base card may be coupled to a host device, and host a first group of solid state drives (SSDs). The mezzanine card may be stacked over the base card, and host a second group of SSDs. The mezz connectors may couple the base card with the mezzanine card, each of the mezz connectors corresponding to each of the second group of SSDs. The base card may include an edge connector suitable for coupling with the host device, a PCIe switch suitable for coupling the first and second groups of SSDs with the host device through the edge connector, and a first group of connectors suitable for coupling the first group of SSDs with the PCIe switch. The mezzanine card may include a second group of connectors suitable for coupling the second group of SSDs.
    Type: Application
    Filed: July 11, 2016
    Publication date: January 12, 2017
    Inventor: Seong Won SHIN
  • Publication number: 20170011003
    Abstract: A method, system and computer-usable medium are disclosed for performing a network traffic combination operation. With the network traffic combination operation, a plurality of input queues are defined by an operating system for an adapter based upon workload type (e.g., as determined by a transport layer). Additionally, the operating system defines each input queue to match a virtual memory architecture of the transport layer (e.g., one input queue is defined as 31 bit and other input queue is defined as 64 bit). When data is received off the wire as inbound data from a physical NIC, the network adapter associates the inbound data with the appropriate memory type. Thus, data copies are eliminated and memory consumption and associated storage management operations are reduced for the smaller bit architecture communications while allowing the operating system to continue executing in a larger bit architecture configuration.
    Type: Application
    Filed: August 10, 2015
    Publication date: January 12, 2017
    Inventors: Patrick G. Brown, Michael J. Fox, Jeffrey D. Haggar, Jerry W. Stevens
  • Publication number: 20170011004
    Abstract: A method and apparatus for generating harmonics using polynomial non-linear functions. Polynomial functions are used to produce harmonics of an input signal up to a predetermined order that match a preferred set of characteristics. The preferred characteristics include approximating a sine function to generate odd harmonics, and using a function with zero slope at −1, 0, and +1 to generate even harmonics. The polynomial coefficients may be chosen such that most of the coefficients for the odd polynomial function are scaled by the same constant as the coefficients for the even polynomial function, so that the calculation is shared between the two polynomials. In one embodiment, a digital signal processor having a pipelined ALU and a MAC is used to calculate the desired polynomial non-linear functions.
    Type: Application
    Filed: July 6, 2016
    Publication date: January 12, 2017
    Applicant: Tempo Semiconductor, Inc.
    Inventor: Darrell Eugene Tinker
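    Illustrative sketch (not part of the application): the textbook route to polynomial harmonic generation is the Chebyshev identity T_n(cos t) = cos(n t). The small program below checks it numerically for the 2nd and 3rd harmonics; the application's particular coefficient-sharing scheme is not reproduced here.

      #include <stdio.h>
      #include <math.h>

      #ifndef M_PI
      #define M_PI 3.14159265358979323846
      #endif

      static double cheb2(double x) { return 2.0 * x * x - 1.0; }          /* even harmonic */
      static double cheb3(double x) { return 4.0 * x * x * x - 3.0 * x; }  /* odd harmonic */

      int main(void)   /* compile with -lm */
      {
          for (int i = 0; i < 8; i++) {
              double t = 2.0 * M_PI * i / 8.0;
              double x = cos(t);
              printf("T2(cos t)=% .3f  cos 2t=% .3f   T3(cos t)=% .3f  cos 3t=% .3f\n",
                     cheb2(x), cos(2.0 * t), cheb3(x), cos(3.0 * t));
          }
          return 0;
      }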
  • Publication number: 20170011005
    Abstract: A pipelined decimation-in-frequency FFT butterfly method, and an apparatus to perform this method comprising: a data memory with at least one read port and one write port; an add/subtract unit receiving data from the memory; a multiply/accumulate unit receiving data from the add/subtract unit; a source of coefficients, from logic gates or a coefficient memory, to supply FFT twiddle factors to the multiply/accumulate unit; a shifter receiving data from at least one of the add/subtract unit and the multiply/accumulate unit, the shifter supplying data to the write port of the data memory; wherein the apparatus performs these calculations in four cycles of the add/subtract unit and in four cycles of the multiply/accumulate unit, using complex arithmetic.
    Type: Application
    Filed: March 3, 2016
    Publication date: January 12, 2017
    Applicant: Tempo Semiconductor, Inc.
    Inventor: Darrell Eugene Tinker
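    Illustrative sketch (not part of the application): a single radix-2 decimation-in-frequency butterfly in complex arithmetic, where the add/subtract result feeds the twiddle-factor multiply, matching the data flow described in the abstract. The four-cycle pipelining of the hardware units is not modeled.

      #include <complex.h>

      /* Radix-2 DIF butterfly:  a' = a + b,  b' = (a - b) * w,
       * where w is the FFT twiddle factor. */
      void dif_butterfly(double complex *a, double complex *b, double complex w)
      {
          double complex sum  = *a + *b;
          double complex diff = *a - *b;
          *a = sum;
          *b = diff * w;
      }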
  • Publication number: 20170011006
    Abstract: A method and apparatus for processing data are provided. The processor includes an input buffer, a data extractor, a multiplier, and an adder. The input buffer receives data and stores the data. The data extractor extracts kernel data corresponding to a kernel in the data from the input buffer. The multiplier multiplies the extracted kernel data by a convolution coefficient. The adder calculates a sum of multiplication results from the multiplier.
    Type: Application
    Filed: January 7, 2016
    Publication date: January 12, 2017
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Moradi SABER, Jun Haeng LEE, Eric Hyunsurk RYU, Keun Joo PARK
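    Illustrative sketch (not part of the application): the extract-multiply-sum flow of the abstract, written as a plain convolution at one output position. Kernel size and names are assumptions.

      #define K 3   /* kernel size (illustrative) */

      /* Extract the KxK window ("kernel data") at (row, col), multiply it
       * element-wise by the convolution coefficients, and sum the products. */
      double convolve_at(const double *input, int width, int row, int col,
                         const double coeff[K][K])
      {
          double acc = 0.0;
          for (int i = 0; i < K; i++)
              for (int j = 0; j < K; j++)
                  acc += input[(row + i) * width + (col + j)] * coeff[i][j];
          return acc;
      }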
  • Publication number: 20170011007
    Abstract: A land battle process evaluation method and system thereof are provided. The land battle process evaluation method includes a flow network for presenting the land battle process problem. The flow network includes a plurality of land nodes and a plurality of connection paths for connecting the plurality of land nodes. A source node and a sink node may be disposed in the plurality of land nodes. A plurality of attack paths are formed by combinations of the plurality of connection paths from the source node to the sink node. The actual process of the land battle is simulated by the flow network and the probabilities of the various attack plans are obtained, such that a practical command is provided by reference to the evaluation result.
    Type: Application
    Filed: October 19, 2015
    Publication date: January 12, 2017
    Inventors: Wei-Chang YEH, Chih-Ming LAI
  • Publication number: 20170011008
    Abstract: A method providing an analytical technique that introduces label information into an anomaly detection model. Effective utilization of label information is based on introducing a degree of similarity between samples, assuming, for example, that there is similarity between normally labeled samples and no similarity between normally labeled and abnormally labeled samples. Each sensor value is generated as the linear sum of a latent variable and a coefficient vector specific to each sensor. The magnitude of the observation noise, however, is formulated to vary according to the label information for the sensor values, and is set so that normally labeled ≤ unlabeled ≤ anomalously labeled. A graph Laplacian is created based on the degree of similarity between samples, and the optimal linear transformation matrix is determined according to a gradient method. The optimal linear transformation matrix is used to calculate an anomaly score for each sensor in the test samples.
    Type: Application
    Filed: September 22, 2016
    Publication date: January 12, 2017
    Applicant: International Business Machines Corporation
    Inventors: Tsuyoshi Ide, Tetsuro Morimura, Bin Tong
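    Read literally, the observation model sketched in the abstract above can be written as follows (a hedged reconstruction; the symbols are mine, not from the application): each sensor value $x_i$ is $x_i = w_i^{\top} z + \varepsilon_i$, with latent variable $z$, sensor-specific coefficient vector $w_i$, and noise $\varepsilon_i \sim \mathcal{N}(0, \sigma_{\ell(i)}^2)$ whose variance depends on the label $\ell(i)$ and satisfies $\sigma_{\text{normal}} \le \sigma_{\text{unlabeled}} \le \sigma_{\text{anomalous}}$. The graph Laplacian built from the sample similarities $S_{mn}$ is the standard $L = D - S$ with $D_{mm} = \sum_n S_{mn}$.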