Including Coprocessor Patents (Class 712/34)
  • Patent number: 10521238
    Abstract: The present application discloses a computing device that can provide a low-power, highly capable computing platform for computational imaging. The computing device can include one or more processing units, for example one or more vector processors and one or more hardware accelerators, an intelligent memory fabric, a peripheral device, and a power management module. The computing device can communicate with external devices, such as one or more image sensors, an accelerometer, a gyroscope, or any other suitable sensor devices.
    Type: Grant
    Filed: February 20, 2018
    Date of Patent: December 31, 2019
    Assignee: Movidius Limited
    Inventors: Brendan Barry, Richard Richmond, Fergal Connor, David Moloney
  • Patent number: 10430562
    Abstract: An information handling system includes a device, a controller, and a license manager subsystem. The controller is configured to determine whether the device has a license assigned and to extract a unique identification for the device in response to a request for information about the device. The license manager subsystem is configured to send the request for information about the device to the controller, to send the unique identification for the device to a license server as a request for the license for the device, to receive the license from the license server, and to assign the license to the device when the license is received.
    Type: Grant
    Filed: March 22, 2017
    Date of Patent: October 1, 2019
    Assignee: Dell Products, LP
    Inventors: Michael Brundridge, Sruthi Mothukupally, Darrell Rosser, Gang Liu, Jason C. Dale, Marshal F. Savage
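    Illustrative sketch (hypothetical Controller and license-server stand-ins, not Dell's implementation): the controller exposes the device's unique identification, and the license manager exchanges it with the license server and assigns the returned license.

      # Minimal sketch of the controller / license-manager interaction described
      # above. Controller and the license_server callable are hypothetical.
      class Controller:
          def __init__(self, devices):
              self.devices = devices            # device name -> unique ID
              self.licenses = {}                # device name -> assigned license

          def has_license(self, device):
              return device in self.licenses

          def unique_id(self, device):
              return self.devices[device]

          def assign_license(self, device, license_key):
              self.licenses[device] = license_key

      class LicenseManager:
          def __init__(self, controller, license_server):
              self.controller = controller
              self.license_server = license_server   # callable: unique ID -> license

          def ensure_licensed(self, device):
              if self.controller.has_license(device):
                  return self.controller.licenses[device]
              uid = self.controller.unique_id(device)        # extract unique ID
              license_key = self.license_server(uid)         # request license
              if license_key is not None:                    # assign when received
                  self.controller.assign_license(device, license_key)
              return license_key

      ctrl = Controller({"nic0": "UID-1234"})
      mgr = LicenseManager(ctrl, lambda uid: "LIC-" + uid)
      print(mgr.ensure_licensed("nic0"))                     # LIC-UID-1234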
  • Patent number: 10413156
    Abstract: An endoscopic device including: an image sensor configured to acquire image data; and a first processor comprising hardware, wherein the first processor is configured to: execute a first processing to process the image data, wherein the first processing involves a first processing load; acquire an instruction to perform a second processing to process the image data, wherein the second processing involves a second processing load different from the first processing load; and in response to acquiring the instruction, control a communication interface to output the image data to a second processor comprising hardware, wherein the second processor is provided separately from the endoscopic device, and wherein the second processor is configured to execute the second processing based on the image data.
    Type: Grant
    Filed: August 25, 2017
    Date of Patent: September 17, 2019
    Assignee: OLYMPUS CORPORATION
    Inventor: Yukiyoshi Yagi
  • Patent number: 10346195
    Abstract: A processor is described having logic circuitry of a general purpose CPU core to save multiple copies of context of a thread of the general purpose CPU core to prepare multiple micro-threads of a multi-threaded accelerator for execution to accelerate operations for the thread through parallel execution of the micro-threads.
    Type: Grant
    Filed: December 29, 2012
    Date of Patent: July 9, 2019
    Assignee: Intel Corporation
    Inventors: Oren Ben-Kiki, Ilan Pardo, Eliezer Weissmann, Robert Valentine, Yuval Yosef
  • Patent number: 10331997
    Abstract: A first input is processed via a first configuration of a neural network to produce a first output. The first configuration defines attributes of the neural network, such as connections between neural elements of the neural network. If the neural network requires a context switch to process a second input, a second configuration is applied to the neural network to change the attributes, and the second input is processed via the second configuration of the neural network to produce a second output.
    Type: Grant
    Filed: October 23, 2014
    Date of Patent: June 25, 2019
    Assignee: Seagate Technology LLC
    Inventors: Richard Esten Bohn, Peng Li, David Tetzlaff
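    Illustrative sketch of the context switch described above: the same network is reused and only its configuration (here, a set of connection weights) is swapped between inputs. The class and the two configurations are assumptions, not Seagate's design.

      # Applying a different configuration changes the network's attributes before
      # the next input is processed.
      class ConfigurableNetwork:
          def __init__(self):
              self.weights = None

          def apply_configuration(self, config):
              self.weights = config              # change connection attributes

          def process(self, inputs):
              # single linear layer standing in for the neural elements
              return [sum(w * x for w, x in zip(row, inputs)) for row in self.weights]

      net = ConfigurableNetwork()
      config_a = [[1.0, 0.0], [0.0, 1.0]]        # first configuration
      config_b = [[0.5, 0.5], [1.0, -1.0]]       # second configuration

      net.apply_configuration(config_a)
      first_output = net.process([2.0, 3.0])
      net.apply_configuration(config_b)          # context switch
      second_output = net.process([2.0, 3.0])
      print(first_output, second_output)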
  • Patent number: 10296392
    Abstract: A data processing system is described herein that includes two or more software-driven host components that collectively provide a software plane. The data processing system further includes two or more hardware acceleration components that collectively provide a hardware acceleration plane. The hardware acceleration plane implements one or more services, including at least one multi-component service. The multi-component service has plural parts, and is implemented on a collection of two or more hardware acceleration components, where each hardware acceleration component in the collection implements a corresponding part of the multi-component service. Each hardware acceleration component in the collection is configured to interact with other hardware acceleration components in the collection without involvement from any host component. A function parsing component is also described herein that determines a manner of parsing a function into the plural parts of the multi-component service.
    Type: Grant
    Filed: May 20, 2015
    Date of Patent: May 21, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephen F. Heil, Adrian M. Caulfield, Douglas C. Burger, Andrew R. Putnam, Eric S. Chung
  • Patent number: 10261911
    Abstract: Apparatuses and methods for performing computational workflow management are provided. An example apparatus may include processing circuitry. The processing circuitry may be configured to receive a computation resource reservation request for cache from a client to perform a computation, and decompose the computation into a workflow of tasks, generate a task label for each task result and the associated task, and compare a selected task label with previous task labels to determine if the selected task label matches one of the previous task labels. The processing circuitry may be further configured to, in response to determining that the selected task label matches one of the previous task labels, perform the computation using a task result that is associated with the matched one of the previous task labels that is currently stored in the cache for the task result of the selected task label.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: April 16, 2019
    Assignee: The Johns Hopkins University
    Inventors: Brian E. Ahr, Jonathan Z. Gehman, Khadir A. Griffith, Gary L. Jackson, II, William J. La Cholter, Anthony J. Castellani
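    Illustrative sketch of the label-based reuse described above, assuming a hash of the task name and its inputs serves as the task label (the patent does not prescribe this labeling).

      # A task label is generated per task; when it matches a previous label, the
      # cached result is used instead of recomputing the task.
      import hashlib

      cache = {}                                   # task label -> task result

      def task_label(name, inputs):
          return hashlib.sha256(repr((name, inputs)).encode()).hexdigest()

      def run_task(name, inputs, fn):
          label = task_label(name, inputs)
          if label in cache:                       # label matches a previous task
              return cache[label]                  # reuse the stored result
          result = fn(*inputs)                     # otherwise compute the task
          cache[label] = result
          return result

      # Decomposed workflow: the second call is served from the cache.
      print(run_task("square", (4,), lambda x: x * x))
      print(run_task("square", (4,), lambda x: x * x))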
  • Patent number: 10216693
    Abstract: A dataflow computer processor is teamed with a general computer processor so that program portions of an application program particularly suited to dataflow execution may be transferred to the dataflow processor during portions of the execution of the application program by the general computer processor. During this time the general computer processor may be placed in partial shutdown for energy conservation.
    Type: Grant
    Filed: July 30, 2015
    Date of Patent: February 26, 2019
    Assignee: Wisconsin Alumni Research Foundation
    Inventors: Anthony Nowatzki, Vinay Gangadhar, Karthikeyan Sankaralingam
  • Patent number: 10209890
    Abstract: A computing system includes a host processor, an access processor having a command port, a near memory accelerator, and a memory unit. The system is adapted to run a software program on the host processor and to offload an acceleration task of the software program to the near memory accelerator. The system is further adapted to provide, via the command port, a first communication path for direct communication between the software program and the near memory accelerator, and to provide, via the command port and the access processor, a second communication path for indirect communication between the software program and the near memory accelerator. A related computer implemented method and a related computer program product are also disclosed.
    Type: Grant
    Filed: March 28, 2017
    Date of Patent: February 19, 2019
    Assignee: International Business Machines Corporation
    Inventors: Angelo Haller, Harald Huels, Jan Van Lunteren, Joerg-Stephan Vogt
  • Patent number: 10209992
    Abstract: A method and system for branch prediction are provided herein. The method includes executing a program, wherein the program comprises multiple procedures, and setting bits in a taken branch history register to indicate whether a branch is taken or not taken during execution of instructions in the program. The method further includes the steps of calling a procedure in the program and overwriting, responsive to calling the procedure, the contents of the taken branch history register with a start address for the procedure.
    Type: Grant
    Filed: October 31, 2014
    Date of Patent: February 19, 2019
    Assignee: Avago Technologies International Sales Pte. Limited
    Inventors: Sophie Wilson, Geoffrey Barrett
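    Illustrative sketch of the taken branch history register described above; the 16-bit width and the Python class are assumptions, not the patented hardware.

      # Bits are shifted in per branch; on a procedure call the register contents
      # are overwritten with the procedure's start address.
      WIDTH = 16
      MASK = (1 << WIDTH) - 1

      class TakenBranchHistory:
          def __init__(self):
              self.value = 0

          def record_branch(self, taken):
              self.value = ((self.value << 1) | int(taken)) & MASK

          def on_call(self, start_address):
              self.value = start_address & MASK    # history now derived from address

      h = TakenBranchHistory()
      for taken in (True, False, True):
          h.record_branch(taken)
      h.on_call(0x4A20)                            # call procedure at 0x4A20
      print(hex(h.value))                          # 0x4a20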
  • Patent number: 10203878
    Abstract: A computing system includes a host processor, an access processor having a command port, a near memory accelerator, and a memory unit. The system is adapted to run a software program on the host processor and to offload an acceleration task of the software program to the near memory accelerator. The system is further adapted to provide, via the command port, a first communication path for direct communication between the software program and the near memory accelerator, and to provide, via the command port and the access processor, a second communication path for indirect communication between the software program and the near memory accelerator. A related computer implemented method and a related computer program product are also disclosed.
    Type: Grant
    Filed: December 30, 2017
    Date of Patent: February 12, 2019
    Assignee: International Business Machines Corporation
    Inventors: Angelo Haller, Harald Huels, Jan Van Lunteren, Joerg-Stephan Vogt
  • Patent number: 10120834
    Abstract: A signal processing device including: one or more vector processors configured to perform vector processing to a signal using a parameter, one or more scalar processors configured to perform scalar processing for generating the parameter, a first circuit coupled to the one or more vector processors and the one or more scalar processors and configured to transfer the parameter from the one or more scalar processors to the one or more vector processors, and a second circuit coupled to the one or more vector processors and another circuit that inputs the signal to the second circuit, and configured to transfer the signal among the one or more vector processors and the other circuit.
    Type: Grant
    Filed: May 30, 2014
    Date of Patent: November 6, 2018
    Assignee: FUJITSU LIMITED
    Inventor: Noboru Kobayashi
  • Patent number: 10120683
    Abstract: Supporting even instruction tag (‘ITAG’) requirements in a multi-slice processor with null internal operations (IOPs) includes: receiving an IOP with an even ITAG requirement; determining that the IOP is to be assigned an odd ITAG; and inserting a null IOP into an instruction lane ahead of the IOP, wherein the null IOP is assigned the odd ITAG, and the IOP is assigned an even ITAG.
    Type: Grant
    Filed: April 27, 2016
    Date of Patent: November 6, 2018
    Assignee: International Business Machines Corporation
    Inventors: Steven R. Carlough, Kurt A. Feiste, Paul M. Kennedy, Phillip G. Williams
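    Illustrative sketch of the null-IOP insertion described above: when an IOP requires an even ITAG but the next ITAG in sequence is odd, a null IOP consumes the odd tag first. The dictionary format is an assumption for illustration.

      def assign_itags(iops):
          lane, next_itag = [], 0
          for iop in iops:
              if iop.get("needs_even_itag") and next_itag % 2 == 1:
                  lane.append({"op": "null", "itag": next_itag})   # null IOP ahead
                  next_itag += 1
              lane.append({"op": iop["op"], "itag": next_itag})
              next_itag += 1
          return lane

      program = [{"op": "add"}, {"op": "fma", "needs_even_itag": True}]
      for entry in assign_itags(program):
          print(entry)       # the fma gets ITAG 2 after a null IOP takes ITAG 1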
  • Patent number: 9948571
    Abstract: Today's cloud software, especially cloud management software, faces a complex, distributed, cross platform environment with extremely diversified software components. Cloud Connection Pool (CCP) is a technique to obtain a connection in such an environment and is more complex than a traditional connection pool. CCP allows requesting components to establish connections to target components. CCP uses cloud mapping data that associates cloud components with each other and stores pool data that identifies connection pools for components (or “managing components”) that manage target components. In response to a request for a connection from a requesting component, the CCP determines a managing component that is associated with the requested target component and identifies (or creates) a connection pool that is associated with the managing component. The CCP then retrieves a connection from the connection pool and returns the connection to the requesting component.
    Type: Grant
    Filed: December 13, 2013
    Date of Patent: April 17, 2018
    Assignee: Oracle International Corporation
    Inventors: Hong Yuan, Tarun Jaiswal
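    Illustrative sketch of the CCP lookup path described above: requested target component, to its managing component, to that component's connection pool. Class and method names are assumptions, not Oracle's API.

      class CloudConnectionPool:
          def __init__(self, cloud_mapping, connect):
              self.cloud_mapping = cloud_mapping   # target -> managing component
              self.pools = {}                      # managing component -> connections
              self.connect = connect               # factory: managing component -> connection

          def get_connection(self, target):
              manager = self.cloud_mapping[target]        # determine managing component
              pool = self.pools.setdefault(manager, [])   # identify or create its pool
              if pool:
                  return pool.pop()                       # retrieve pooled connection
              return self.connect(manager)                # or open a new one

          def release(self, target, conn):
              self.pools[self.cloud_mapping[target]].append(conn)

      ccp = CloudConnectionPool({"vm-42": "hypervisor-1"},
                                lambda mgr: "conn-to-" + mgr)
      c = ccp.get_connection("vm-42")
      ccp.release("vm-42", c)
      print(ccp.get_connection("vm-42"))           # the pooled connection is reused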
  • Patent number: 9910675
    Abstract: The present application discloses a computing device that can provide a low-power, highly capable computing platform for computational imaging. The computing device can include one or more processing units, for example one or more vector processors and one or more hardware accelerators, an intelligent memory fabric, a peripheral device, and a power management module. The computing device can communicate with external devices, such as one or more image sensors, an accelerometer, a gyroscope, or any other suitable sensor devices.
    Type: Grant
    Filed: August 12, 2014
    Date of Patent: March 6, 2018
    Assignee: LINEAR ALGEBRA TECHNOLOGIES LIMITED
    Inventors: Brendan Barry, Richard Richmond, Fergal Connor, David Moloney
  • Patent number: 9886330
    Abstract: A data-processing system (DTS) includes a central hardware unit (CPU) and an additional hardware unit (HW), the central hardware unit (CPU) being adapted to execute a task by a processing thread (TM), and to trigger offloading of execution of a first part (P1a, P1b, P2) of the task to the additional hardware unit (HW); and wherein the additional hardware unit is adapted to call on functionalities of the central hardware unit (CPU), triggered by the first part, and the central hardware unit (CPU) executes a second part (P2) of the task forming a sub-part of the first part by a service processing thread (TS).
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: February 6, 2018
    Assignee: BULL
    Inventors: Sylvain Jeaugey, Zoltan Menyhart, Frederic Temporelli
  • Patent number: 9858188
    Abstract: Statistical data is used to enable or disable snooping on a bus of a processor. A command is received via a first bus or a second bus communicably coupling processor cores and caches of chiplets on the processor. Cache logic on a chiplet determines whether or not a local cache on the chiplet can satisfy a request for data specified in the command. In response to determining that the local cache can satisfy the request for data, the cache logic updates statistical data maintained on the chiplet. The statistical data indicates a probability that the local cache can satisfy a future request for data. Based at least in part on the statistical data, the cache logic determines whether to enable or disable snooping on the second bus by the local cache.
    Type: Grant
    Filed: June 8, 2015
    Date of Patent: January 2, 2018
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hien M. Le, Hugh Shen, Derek E. Williams, Phillip G. Williams
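    Illustrative sketch of the statistics-driven decision described above: count how often the local cache satisfied past requests and disable snooping on the second bus when the estimated hit probability falls below a threshold (the threshold and window values are assumptions).

      class SnoopFilter:
          def __init__(self, threshold=0.1, window=1000):
              self.hits = 0
              self.total = 0
              self.threshold = threshold
              self.window = window

          def observe(self, local_cache_satisfied):
              self.hits += int(local_cache_satisfied)
              self.total += 1
              if self.total >= self.window:        # age the statistics periodically
                  self.hits //= 2
                  self.total //= 2

          def snooping_enabled(self):
              if self.total == 0:
                  return True                      # no data yet: keep snooping
              return self.hits / self.total >= self.threshold

      f = SnoopFilter()
      for _ in range(200):
          f.observe(False)                         # local cache never has the line
      print(f.snooping_enabled())                  # False: disable snooping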
  • Patent number: 9817790
    Abstract: A multi-processor having a plurality of data processing units and memory units has a bus system that selectively interconnects the processing units and the memory units.
    Type: Grant
    Filed: February 24, 2016
    Date of Patent: November 14, 2017
    Assignee: PACT XPP TECHNOLOGIES AG
    Inventor: Martin Vorbach
  • Patent number: 9792120
    Abstract: Embodiments relate to prefetching data on a chip having a scout core and a parent core coupled to the scout core. The method includes determining that a program executed by the parent core requires content stored in a location remote from the parent core. The method includes sending a fetch table address determined by the parent core to the scout core. The method includes accessing a fetch table that is indicated by the fetch table address by the scout core. The fetch table indicates how many of pieces of content are to be fetched by the scout core and a location of the pieces of content. The method includes based on the fetch table indicating, fetching the pieces of content by the scout core. The method includes returning the fetched pieces of content to the parent core.
    Type: Grant
    Filed: March 5, 2013
    Date of Patent: October 17, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Brian R. Prasky, Fadi Y. Busaba, Steven R. Carlough, Christopher A. Krygowski, Chung-lung K. Shum
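    Illustrative sketch of the fetch-table handshake described above: the scout core reads the table at the address supplied by the parent core, fetches the indicated pieces of content, and returns them. The in-memory data structures are invented for illustration.

      memory = {
          0x100: {"count": 2, "locations": [0x800, 0x900]},   # fetch table
          0x800: "piece A",
          0x900: "piece B",
      }

      def scout_prefetch(fetch_table_address):
          table = memory[fetch_table_address]      # access the fetch table
          pieces = [memory[addr] for addr in table["locations"][:table["count"]]]
          return pieces                            # returned to the parent core

      print(scout_prefetch(0x100))                 # ['piece A', 'piece B']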
  • Patent number: 9665539
    Abstract: Solving computational problems may include generating a logic circuit representation of the computational problem, encoding the logic circuit representation as a discrete optimization problem, and solving the discrete optimization problem using a quantum processor. Output(s) of the logic circuit representation may be clamped such that the solving involves effectively executing the logic circuit representation in reverse to determine input(s) that corresponds to the clamped output(s). The representation may be of a Boolean logic circuit. The discrete optimization problem may be composed of a set of miniature optimization problems, where each miniature optimization problem encodes a respective logic gate from the logic circuit representation. A quantum processor may include multiple sets of qubits, each set coupled to respective annealing signal lines such that dynamic evolution of each set of qubits is controlled independently from the dynamic evolutions of the other sets of qubits.
    Type: Grant
    Filed: January 30, 2017
    Date of Patent: May 30, 2017
    Assignee: D-Wave Systems Inc.
    Inventors: William G. Macready, Geordie Rose, Thomas F.W. Mahon, Peter Love, Marshall Drew-Brook
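    Illustrative sketch of one "miniature optimization problem": a standard penalty function for an AND gate evaluates to zero exactly when y = x1 AND x2, so clamping the output and minimizing over the inputs effectively runs the gate in reverse. The brute-force search below only illustrates the encoding idea, not the quantum annealing hardware.

      from itertools import product

      def and_penalty(x1, x2, y):
          # 0 for consistent assignments of y = x1 AND x2, >= 1 otherwise
          return x1 * x2 - 2 * (x1 + x2) * y + 3 * y

      def inputs_for_clamped_output(y):
          best = min(product((0, 1), repeat=2),
                     key=lambda xs: and_penalty(xs[0], xs[1], y))
          return best, and_penalty(best[0], best[1], y)

      print(inputs_for_clamped_output(1))          # ((1, 1), 0): only 1 AND 1 -> 1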
  • Patent number: 9619404
    Abstract: Methods, apparatus and computer program products implement embodiments of the present invention that include receiving, by a processor in a storage system, metadata describing a first cache configured as a master cache having non-destaged data, and defining, using the received metadata, a second cache configured as a backup cache for the master cache. Subsequent to defining the second cache, the non-destaged data is retrieved from the first cache, and the non-destaged data is stored to the second cache.
    Type: Grant
    Filed: April 16, 2013
    Date of Patent: April 11, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: David D. Chambliss, Ehood Garmiza, Leah Shalev
  • Patent number: 9495308
    Abstract: A method is disclosed that includes writing data to predetermined physical addresses of a system memory, the data including metadata that identifies a processing type; configuring a processor module to include the predetermined physical addresses, the processor module being physically connected to the memory bus by a memory module connection; and processing the write data according to the processing type with an offload processor mounted on the processor module.
    Type: Grant
    Filed: May 22, 2013
    Date of Patent: November 15, 2016
    Assignee: Xockets, Inc.
    Inventor: Parin Bhadrik Dalal
  • Patent number: 9495302
    Abstract: A processing sub-system is configured to execute a program using a set of virtual memory addresses to reference memory locations for storage of variables of the program. A programmable logic sub-system is configured to implement a set of I/O circuits specified in a configuration data stream, each of the I/O circuits having a respective ID and configured to access one of the variables. A memory management circuit is configured to map the virtual memory addresses to physical memory addresses of a memory and map IDs to the physical address used to store the corresponding variables. A TLB is configured to receive a memory access request, from the I/O circuits, each request indicating an ID and provide, to the memory, a memory access request indicating the physical memory address that is mapped to the ID.
    Type: Grant
    Filed: August 18, 2014
    Date of Patent: November 15, 2016
    Assignee: XILINX, INC.
    Inventor: Sagheer Ahmad
  • Patent number: 9489992
    Abstract: A semiconductor device includes a pipe latch suitable for sequentially latching data in response to a pipe input control signal and sequentially outputting data in response to a pipe output control signal, a pipe latch control unit suitable for generating the pipe input/output control signals in response to a command signal and latency information, and resetting the pipe input/output control signals in response to a pipe reset signal, and an error detection unit suitable for receiving the pipe input control signal and the pipe output control signal, detecting a latency error, and generating the pipe reset signal.
    Type: Grant
    Filed: February 8, 2016
    Date of Patent: November 8, 2016
    Assignee: SK Hynix Inc.
    Inventor: Kie-Bong Ku
  • Patent number: 9454389
    Abstract: An operating system provides instructions for execution by plural hardware threads of a multithreaded core of a processor, the plural hardware threads appearing as separate logical processors to the operating system. An abstraction layer converts respective identifiers of the plural hardware threads to a core identifier representing the core. The abstraction layer presents the core identifier to a user application to hide the plural hardware threads from the user application, and to present the core as a single-threaded core to the user application.
    Type: Grant
    Filed: February 25, 2015
    Date of Patent: September 27, 2016
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Scott J. Norton, Hyun Kim
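    Illustrative sketch of the identifier conversion described above, assuming two hardware threads per core: several logical-processor identifiers collapse to a single core identifier presented to the application.

      THREADS_PER_CORE = 2                         # assumed SMT layout

      def core_id(hw_thread_id):
          return hw_thread_id // THREADS_PER_CORE

      def cores_visible_to_application(hw_thread_ids):
          return sorted({core_id(t) for t in hw_thread_ids})

      print(cores_visible_to_application([0, 1, 2, 3]))   # [0, 1]: two cores, not four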
  • Patent number: 9411638
    Abstract: A method, system and computer-usable medium are disclosed for startup page fault management that improves application startup performance by assigning startup tasks to hardware thread 0 across plural processing cores in a simultaneous multithreading environment, providing more rapid processing of processor-bound page faults. I/O-bound page faults are flagged and associated with predetermined cache locations to improve data and text first-reference page-in I/O response.
    Type: Grant
    Filed: December 19, 2013
    Date of Patent: August 9, 2016
    Assignee: International Business Machines Corporation
    Inventors: Vishal C. Aslot, Adekunle Bello, Gunisha Madan
  • Patent number: 9372797
    Abstract: Statistical data is used to enable or disable snooping on a bus of a processor. A command is received via a first bus or a second bus communicably coupling processor cores and caches of chiplets on the processor. Cache logic on a chiplet determines whether or not a local cache on the chiplet can satisfy a request for data specified in the command. In response to determining that the local cache can satisfy the request for data, the cache logic updates statistical data maintained on the chiplet. The statistical data indicates a probability that the local cache can satisfy a future request for data. Based at least in part on the statistical data, the cache logic determines whether to enable or disable snooping on the second bus by the local cache.
    Type: Grant
    Filed: February 10, 2014
    Date of Patent: June 21, 2016
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hien M. Le, Hugh Shen, Derek E. Williams, Phillip G. Williams
  • Patent number: 9367535
    Abstract: Formulas in dashboards can be executed at a client executing web technologies such as HTML5 and JavaScript. The formulas specified by a spreadsheet file are transformed into a pre-defined notation format and then recursively evaluated. Related apparatus, systems, techniques and articles are also described.
    Type: Grant
    Filed: November 29, 2012
    Date of Patent: June 14, 2016
    Assignee: BUSINESS OBJECTS SOFTWARE, LTD.
    Inventors: Jason Bedard, Viren Kumar
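    Illustrative sketch of the transform-then-recursively-evaluate idea described above, assuming a prefix, list-based notation as the pre-defined format (the patent does not prescribe this notation).

      import operator

      OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

      def evaluate(node, cells):
          if isinstance(node, list):               # [operator, argument, ...]
              args = [evaluate(a, cells) for a in node[1:]]
              return sum(args) if node[0] == "SUM" else OPS[node[0]](*args)
          if isinstance(node, str):                # cell reference
              return cells[node]
          return node                              # numeric literal

      # "SUM(A1, B1 * 2)" after transformation into the pre-defined notation:
      formula = ["SUM", "A1", ["*", "B1", 2]]
      print(evaluate(formula, {"A1": 10, "B1": 4}))   # 18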
  • Patent number: 9361116
    Abstract: An apparatus and method are described for providing low-latency invocation of accelerators. For example, a processor according to one embodiment comprises: a command register for storing command data identifying a command to be executed; a result register to store a result of the command or data indicating a reason why the command could not be executed; execution logic to execute a plurality of instructions including an accelerator invocation instruction to invoke one or more accelerator commands; and one or more accelerators to read the command data from the command register and responsively attempt to execute the command identified by the command data.
    Type: Grant
    Filed: December 28, 2012
    Date of Patent: June 7, 2016
    Assignee: INTEL CORPORATION
    Inventors: Oren Ben-Kiki, Ilan Pardo, Robert Valentine, Eliezer Weissmann, Dror Markovich, Yuval Yosef
  • Patent number: 9348638
    Abstract: A system can include a host processor connected to memory via a system memory bus; and at least one offload processor module, including at least one offload processor mounted on the offload processor module, and configured to execute operations on data received over the system memory bus, and to output context data to memory, and read context data from the memory, and hardware scheduling logic mounted on the module and configured to control operations of the at least one offload processor.
    Type: Grant
    Filed: June 8, 2013
    Date of Patent: May 24, 2016
    Assignee: Xockets, Inc.
    Inventors: Parin Bhadrik Dalal, Stephen Paul Belair
  • Patent number: 9342304
    Abstract: Embodiments of a system and a method in which a processor may execute instructions that cause the processor to receive an input vector and a control vector are disclosed. The executed instructions may also cause the processor to perform a fixed-value addition operation dependent upon the input vector and the control vector.
    Type: Grant
    Filed: September 27, 2012
    Date of Patent: May 17, 2016
    Assignee: Apple Inc.
    Inventor: Jeffry E. Gonion
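    Illustrative sketch of one plausible reading of the abstract above: the fixed value is added only to the elements selected by the control vector. This is not a statement of Apple's actual instruction semantics.

      def fixed_value_add(input_vector, control_vector, fixed_value=1):
          # add the constant where the control bit is set; pass through otherwise
          return [x + fixed_value if c else x
                  for x, c in zip(input_vector, control_vector)]

      print(fixed_value_add([10, 20, 30, 40], [1, 0, 1, 0]))   # [11, 20, 31, 40]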
  • Patent number: 9310863
    Abstract: The present invention provides a multi-purpose power controller and application specific standard product (ASSP) with improved block unification, reduced size and power, boot strapping, and power management. A multi-purpose field programmable non-volatile system power controller and ASSP initializing block may be embedded in a processor, such as a central processing unit (CPU), graphics processing unit (GPU), accelerated processing unit (APU), or other chipset. This controller and initializing block may be a configurable hardware block that maintains specialization. This block may be implemented as a mid-size complex programmable logic device (CPLD) or as a simple cascaded programmable logic array block, equivalent to a few hundred logic gates, for example.
    Type: Grant
    Filed: September 12, 2012
    Date of Patent: April 12, 2016
    Assignee: ATI Technologies ULC
    Inventors: Behrooz Karimian-Kakolaki, Darlington C. Opara
  • Patent number: 9304774
    Abstract: Apparatus and methods provide early access of instructions. A fetch queue is coupled to an instruction cache and configured to store a mix of processor instructions for a first processor and coprocessor instructions for a second processor. A coprocessor instruction selector is coupled to the fetch queue and configured to copy coprocessor instructions from the fetch queue. A queue is coupled to the coprocessor instruction selector and from which coprocessor instructions are accessed for execution before the coprocessor instruction is issued to the first processor. Execution of the copied coprocessor instruction is started in the coprocessor before the coprocessor instruction is issued to a processor. The execution of the copied coprocessor instruction is completed based on information received from the processor after the coprocessor instruction has been issued to the processor.
    Type: Grant
    Filed: February 1, 2012
    Date of Patent: April 5, 2016
    Assignee: QUALCOMM Incorporated
    Inventors: Kenneth Alan Dockser, Yusuf Cagatay Tekmen
  • Patent number: 9287855
    Abstract: A semiconductor device includes a pipe latch suitable for sequentially latching data in response to a pipe input control signal and sequentially outputting data in response to a pipe output control signal, a pipe latch control unit suitable for generating the pipe input/output control signals in response to a command signal and latency information, and resetting the pipe input/output control signals in response to a pipe reset signal, and an error detection unit suitable for receiving the pipe input control signal and the pipe output control signal, detecting a latency error, and generating the pipe reset signal.
    Type: Grant
    Filed: December 15, 2013
    Date of Patent: March 15, 2016
    Assignee: SK Hynix Inc.
    Inventor: Kie-Bong Ku
  • Patent number: 9286084
    Abstract: Adaptive hardware reconfiguration of configurable co-processor cores for hardware optimization of functionality blocks based on use case prediction, and related methods, circuits, and computer-readable media are disclosed. In one embodiment, an indication of one or more applications for possible execution is received. Execution probabilities for respective ones of the one or more applications are received. One or more mappings of the one or more applications to one or more functionality blocks is accessed, and a net benefit of hardware reconfiguration of one or more configurable co-processor cores of a multicore central processing unit for the one or more functionality blocks is calculated based on the execution probabilities and the mappings. An optimal hardware reconfiguration is determined based on a current hardware configuration and the calculated net benefit. The configurable co-processor cores are reconfigured based on the optimal hardware reconfiguration.
    Type: Grant
    Filed: December 30, 2013
    Date of Patent: March 15, 2016
    Assignee: QUALCOMM Incorporated
    Inventors: Kishore Yalamanchili, Yogeshwar Narayanan Nagaraj, Rashmi Keshava Iyengar, Dilip Krishnaswamy, Rodolfo Giacomo Beraha
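    Illustrative sketch of the net-benefit calculation described above: each candidate configuration's benefit is weighted by application execution probabilities and reduced by a reconfiguration cost. All names and numbers are assumptions.

      app_probability = {"camera": 0.6, "modem": 0.3}          # execution probabilities
      app_to_blocks = {"camera": {"isp"}, "modem": {"fft"}}    # application mappings
      block_benefit = {"isp": 5.0, "fft": 8.0}
      reconfig_cost = 2.0
      current_blocks = {"fft"}

      def net_benefit(candidate_blocks):
          expected = sum(app_probability[app] * sum(block_benefit[b] for b in blocks)
                         for app, blocks in app_to_blocks.items()
                         if blocks <= candidate_blocks)        # app fully served
          cost = reconfig_cost if candidate_blocks != current_blocks else 0.0
          return expected - cost

      candidates = [{"isp"}, {"fft"}, {"isp", "fft"}]
      best = max(candidates, key=net_benefit)
      print(best, net_benefit(best))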
  • Patent number: 9250954
    Abstract: A system can include at least one offload processor having a data cache, the offload processor including a slave interface configured to receive write data and provide read data over a memory bus; an offload processor module including context memory and a bus controller connected to the slave interface; and logic coupled to the offload processor and context memory and configured to detect predetermined write operations over the memory bus; wherein the offload processor is configured to execute operations on data received over the memory bus, and to output context data to the context memory, and read context data from the context memory.
    Type: Grant
    Filed: June 8, 2013
    Date of Patent: February 2, 2016
    Assignee: Xockets, Inc.
    Inventors: Parin Bhadrik Dalal, Stephen Paul Belair
  • Patent number: 9223983
    Abstract: Technologies for improving platform initialization on a computing device include beginning initialization of a platform of the computing device using a basic input/output system (BIOS) of the computing device. A security co-processor driver module adds a security co-processor command to a command list when a security co-processor command is received from the BIOS module. The computing device establishes a periodic interrupt of the initialization of the platform to query the security co-processor regarding the availability of a response to a previously submitted security co-processor command, forward any responses received by the security co-processor driver module to the BIOS module, and submit the next security co-processor command in the command list to the security co-processor.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: December 29, 2015
    Assignee: Intel Corporation
    Inventors: Guo Dong, Jiewen Yao, Vincent J. Zimmer, Michael A. Rothman
  • Patent number: 9163963
    Abstract: A method is disclosed including receiving with a controller at a destination node signal samples and associated sampling time indications. The signal samples and the associated sampling time indications are received from a source node via a mesh network. The signal samples are delivered with sampling time indications generated at the source node to form a series of signals corresponding to one or more characteristic(s) related to electricity supplied to one or more electrical devices from a power source. The method also includes applying a time domain convolution procedure to the received signal in the time domain that is uniformly sampled. The weighting of sample values in the time domain convolution procedure is determined at least partially based on information indicative of the statistical behavior of a corresponding realized sample process.
    Type: Grant
    Filed: September 30, 2013
    Date of Patent: October 20, 2015
    Assignee: UTILIDATA, INC.
    Inventor: David Gordon Bell
  • Patent number: 9135180
    Abstract: Embodiments relate to a method, system, and computer program product for prefetching data on a chip. The chip has at least one scout core, multiple parent cores that cooperate together to execute various tasks, and a shared cache that is common between the scout core and the multiple parent cores. An aspect of the embodiments includes monitoring the multiple parent cores by the at least one scout core through the shared cache for a shared cache access occurring in a base parent core. The method includes saving a fetch address by the at least one scout core based on the shared cache access occurring. The fetch address indicates a location of a specific line of cache requested by the base parent core.
    Type: Grant
    Filed: March 5, 2013
    Date of Patent: September 15, 2015
    Assignee: International Business Machines Corporation
    Inventors: Fadi Y. Busaba, Steven R. Carlough, Christopher A. Krygowski, Brian R. Prasky, Chung-lung K. Shum
  • Patent number: 9128851
    Abstract: Embodiments relate to a method and computer program product for prefetching data on a chip. The chip has at least one scout core, multiple parent cores that cooperate together to execute various tasks, and a shared cache that is common between the scout core and the multiple parent cores. An aspect of the embodiments includes monitoring the multiple parent cores by the at least one scout core through the shared cache for a shared cache access occurring in a base parent core. The method includes saving a fetch address by the at least one scout core based on the shared cache access occurring. The fetch address indicates a location of a specific line of cache requested by the base parent core.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: September 8, 2015
    Assignee: International Business Machines Corporation
    Inventors: Fadi Y. Busaba, Steven R. Carlough, Christopher A. Krygowski, Brian R. Prasky, Chung-lung K. Shum
  • Patent number: 9128852
    Abstract: Embodiments of the invention relate to prefetching data on a chip having at least one scout core, at least one parent core, and a shared cache that is common between the at least one scout core and the at least one parent core. A prefetch code is executed by the scout core for monitoring the parent core. The prefetch code executes independently from the parent core. The scout core determines that at least one specified data pattern has occurred in the parent core based on monitoring the parent core. A prefetch request is sent from the scout core to the shared cache. The prefetch request is sent based on the at least one specified pattern being detected by the scout core. A data set indicated by the prefetch request is sent to the parent core by the shared cache.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: September 8, 2015
    Assignee: International Business Machines Corporation
    Inventors: Brian R. Prasky, Fadi Y. Busaba, Steven R. Carlough, Christopher A. Krygowski, Chung-lung K. Shum
  • Patent number: 9116816
    Abstract: Embodiments of the invention relate to prefetching data on a chip having at least one scout core, at least one parent core, and a shared cache that is common between the at least one scout core and the at least one parent core. A prefetch code is executed by the scout core for monitoring the parent core. The prefetch code executes independently from the parent core. The scout core determines that at least one specified data pattern has occurred in the parent core based on monitoring the parent core. A prefetch request is sent from the scout core to the shared cache. The prefetch request is sent based on the at least one specified pattern being detected by the scout core. A data set indicated by the prefetch request is sent to the parent core by the shared cache.
    Type: Grant
    Filed: March 5, 2013
    Date of Patent: August 25, 2015
    Assignee: International Business Machines Corporation
    Inventors: Brian R. Prasky, Fadi Y. Busaba, Steven R. Carlough, Christopher A. Krygowski, Chung-lung K. Shum
  • Patent number: 9099194
    Abstract: A memory component includes a memory core comprising dynamic random access memory (DRAM) storage cells and a first circuit to receive external commands. The external commands include a read command that specifies transmitting data accessed from the memory core. The memory component also includes a second circuit to transmit data onto an external bus in response to a read command and pattern register circuitry operable during calibration to provide at least a first data pattern and a second data pattern. During the calibration, a selected one of the first data pattern and the second data pattern is transmitted by the second circuit onto the external bus in response to a read command received during the calibration. Further, at least one of the first and second data patterns is written to the pattern register circuitry in response to a write command received during the calibration.
    Type: Grant
    Filed: August 14, 2013
    Date of Patent: August 4, 2015
    Assignee: RAMBUS INC.
    Inventors: Craig E. Hampel, Richard E. Perego, Stefanos Sidiropoulos, Ely K. Tsern, Frederick A. Ware
  • Patent number: 9032415
    Abstract: A method for activating processor cores within a computer system is disclosed. Initially, a value representing a number of processor cores to be enabled within the computer system is received. The computer system includes multiple processors, and each of the processors includes multiple processor cores. Next, a scale variable value representing a specific type of tasks to be optimized during an execution of the tasks within the computer system is received. From a pool of available processor cores within the computer system, a subset of processor cores can be selected for activation. The subset of processor cores is activated in order to achieve system optimization during an execution of the tasks.
    Type: Grant
    Filed: September 26, 2013
    Date of Patent: May 12, 2015
    Assignee: International Business Machines Corporation
    Inventors: Rijoy B. Lonappan, Shashikumar Mandya Krishnappa, Sethupathy R. Sivakumar, Venkatesh N. Sripathi Rao
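    Illustrative sketch of selecting a subset of cores for activation from the available pool; the two policies keyed off the scale variable (pack onto few processors for cache sharing versus spread across processors for memory bandwidth) are assumed interpretations of the "type of tasks to be optimized".

      def select_cores(pool, count, scale):
          # pool: list of (processor_id, core_id); scale: "cache" or "bandwidth"
          if scale == "cache":
              ordered = sorted(pool)                            # pack by processor
          else:
              ordered = sorted(pool, key=lambda pc: (pc[1], pc[0]))   # round-robin
          return ordered[:count]

      pool = [(p, c) for p in range(2) for c in range(4)]
      print(select_cores(pool, 4, "cache"))        # four cores on processor 0
      print(select_cores(pool, 4, "bandwidth"))    # two cores on each processor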
  • Patent number: 9027029
    Abstract: A technique for activating processor cores within a computer system is disclosed. Initially, a value representing a number of processor cores to be enabled within the computer system is received. The computer system includes multiple processors, and each of the processors includes multiple processor cores. Next, a scale variable value representing a specific type of tasks to be optimized during an execution of the tasks within the computer system is received. From a pool of available processor cores within the computer system, a subset of processor cores can be selected for activation. The subset of processor cores is activated in order to achieve system optimization during an execution of the tasks.
    Type: Grant
    Filed: March 28, 2013
    Date of Patent: May 5, 2015
    Assignee: International Business Machines Corporation
    Inventors: Rijoy B. Lonappan, Shashikumar Mandya Krishnappa, Sethupathy R. Sivakumar, Venkatesh N. Sripathi Rao
  • Patent number: 9003166
    Abstract: System and method for generating hardware accelerators and processor offloads. System for hardware acceleration. System and method for implementing an asynchronous offload. Method of automatically creating a hardware accelerator. Computerized method for automatically creating a test harness for a hardware accelerator from a software program. System and method for interconnecting hardware accelerators and processors. System and method for interconnecting a processor and a hardware accelerator. Computer implemented method of generating a hardware circuit logic block design for a hardware accelerator automatically from software. Computer program and computer program product stored on tangible media implementing the methods and procedures of the invention.
    Type: Grant
    Filed: January 25, 2012
    Date of Patent: April 7, 2015
    Assignee: Synopsys, Inc.
    Inventors: Navendu Sinha, William Charles Jordan, Bryon Irwin Moyer, Stephen John Joseph Fricke, Roberto Attias, Akash Renukadas Deshpande, Vineet Gupta, Shobhit Sonakiya
  • Patent number: 8976396
    Abstract: A print image processing system includes plural logical page interpretation units, a caching interpretation unit, and a print image data generation unit. The plural logical page interpretation units interpret different logical pages in print data in parallel to obtain interpretation results, and output the interpretation results. The caching interpretation unit interprets an element to be cached which is included in each of logical pages in the print data to obtain interpretation results, and stores the interpretation results in a cache unit. The print image data generation unit generates print image data of the logical pages using the interpretation results of the logical pages output from the logical page interpretation units and the interpretation results of the elements to be cached stored in the cache unit. The print image data generation unit supplies the generated print image data to a printer.
    Type: Grant
    Filed: September 5, 2013
    Date of Patent: March 10, 2015
    Assignee: Fuji Xerox Co., Ltd.
    Inventor: Michio Hayakawa
  • Patent number: 8972995
    Abstract: A method, apparatus, and system in which an integrated circuit comprises an initiator Intellectual Property (IP) core, a target IP core, an interconnect, and a tag and thread logic. The target IP core may include a memory coupled to the initiator IP core. Additionally, the interconnect can allow the integrated circuit to communicate transactions between one or more initiator Intellectual Property (IP) cores and one or more target IP cores coupled to the interconnect. A tag and thread logic can be configured to concurrently perform per-thread and per-tag memory access scheduling within a thread and across multiple threads such that the tag and thread logic manages tags and threads to allow for per-tag and per-thread scheduling of memory access requests from the initiator IP core out of order from an initial issue order of the memory access requests from the initiator IP core.
    Type: Grant
    Filed: August 6, 2010
    Date of Patent: March 3, 2015
    Assignee: Sonics, Inc.
    Inventors: Krishnan Srinivasan, Ruben Khazhakyan, Harutyan Aslanyan, Drew E. Wingard, Chien-Chun Chou
  • Patent number: 8972699
    Abstract: A multicore interface with dynamic task management capability and a task loading and offloading method thereof are provided. The method disposes a communication interface between a micro processor unit (MPU) and a digital signal processor (DSP) and dynamically manages tasks assigned by the MPU to the DSP. First, an idle processing unit of the DSP is searched, and then one of a plurality of threads of the task is assigned to the processing unit. Finally, the processing unit is activated to execute the thread. Accordingly, the communication efficiency of the multicore processor can be effectively increased while the hardware cost can be saved.
    Type: Grant
    Filed: April 22, 2008
    Date of Patent: March 3, 2015
    Assignee: Industrial Technology Research Institute
    Inventors: Tai-Ji Lin, Tien-Wei Hsieh, Yuan-Hua Chu, Shih-Hao Ou, Xiang-Sheng Deng, Chih-Wei Liu
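    Illustrative sketch of the dispatch loop described above: find an idle DSP processing unit, assign one thread of the task to it, and activate it. The data structures are stand-ins, not the patented interface.

      def dispatch(task_threads, units):
          # units: unit_id -> {"busy": bool, "thread": current thread or None}
          assignments = []
          for thread in task_threads:
              idle = next((u for u, s in units.items() if not s["busy"]), None)
              if idle is None:
                  break                                        # no idle unit: wait
              units[idle] = {"busy": True, "thread": thread}   # assign and activate
              assignments.append((thread, idle))
          return assignments

      units = {0: {"busy": False, "thread": None}, 1: {"busy": True, "thread": "t9"}}
      print(dispatch(["fir", "fft"], units))       # only 'fir' starts; 'fft' waits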
  • Publication number: 20150046680
    Abstract: A method for dynamically reconfiguring one or more cores of a multi-core microprocessor comprising a plurality of cores and sideband communication wires, extrinsic to a system bus connected to a chipset, which facilitate non-system-bus inter-core communications. At least some of the cores are operable to be reconfigurably designated with or without master credentials for purposes of structuring sideband-based inter-core communications. The method includes determining an initial configuration of cores of the microprocessor, which configuration designates at least one core, but not all of the cores, as a master core, and reconfiguring the cores according to a modified configuration, which modified configuration removes a master designation from a core initially so designated, and assigns a master designation to a core not initially so designated. Each core is configured to conditionally drive a sideband communication wire to which it is connected based upon its designation, or lack thereof, as a master core.
    Type: Application
    Filed: October 24, 2014
    Publication date: February 12, 2015
    Inventors: G. GLENN HENRY, STEPHAN GASKINS