Interprogram Communication Using Shared Memory Patents (Class 719/312)
  • Patent number: 8516497
    Abstract: Method and system for process sharing in a medical imaging system are disclosed. A first application system capable of processing data within a second application system is provided. A process sharing system residing outside of the second application system is configured for enabling process sharing of the first application system within the second application system. The first application system comprises a front-end unit that is made operable within the second application system by a process launcher, upon occurrence of an event, to facilitate processing of data accessible from the second application system through communication with the process sharing system via a pre-defined interface. The process launcher is generated by the process sharing system and deployed on the second application system.
    Type: Grant
    Filed: September 12, 2008
    Date of Patent: August 20, 2013
    Assignee: Edda Technology, Inc.
    Inventors: Feng Ma, Qian Jianzhong, Guo-Qing Wei, Cheng-Chung Liang, Xiaolan Zeng, Li Fan, Hong Chen
  • Patent number: 8505027
    Abstract: The present invention is directed to a system and method for selectively sharing data between different implementations of the same software program in a network environment. The program implementations are otherwise independent, with each executing in its own private memory space on a single computer or on multiple computers in the network. The present invention enables a first implementation of a program to borrow or utilize data collected, derived, or otherwise utilized by a second implementation of the same program.
    Type: Grant
    Filed: December 22, 2005
    Date of Patent: August 6, 2013
    Assignee: Oracle OTC Subsidiary LLC
    Inventors: Douglas K. Warner, J. Neal Richter, Stephen D. Durbin
  • Publication number: 20130191848
    Abstract: A system for distributed function execution includes a host in operable communication with an accelerator. The system is configured to perform a method including processing an application by the host and distributing at least a portion of the application to the accelerator for execution. The method also includes instructing the accelerator to create a buffer on the accelerator, instructing the accelerator to execute the portion of the application, wherein the accelerator writes data to the buffer, and instructing the accelerator to transmit the data in the buffer to the host before the application requests the data in the buffer. The accelerator aggregates the data in the buffer before transmitting the data to the host based upon one or more runtime conditions in the host.
    Type: Application
    Filed: January 25, 2012
    Publication date: July 25, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: David G. Chapman, Rajaram B. Krishnamurthy, Carl J. Parris, Donald W. Schmidt, Benjamin P. Segal
  • Publication number: 20130191849
    Abstract: A method includes processing an application by a host including one or more processors and distributing at least a portion of the application to an accelerator for execution. The method includes instructing the accelerator to create a buffer on the accelerator and instructing the accelerator to execute the portion of the application, wherein the accelerator writes data to the buffer. The method also includes instructing the accelerator to transmit the data in the buffer to the host before the application requests the data in the buffer. The accelerator aggregates the data in the buffer before transmitting the data to the host based upon one or more runtime conditions in the host.
    Type: Application
    Filed: October 31, 2012
    Publication date: July 25, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
  • Patent number: 8495654
    Abstract: Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.
    Type: Grant
    Filed: November 7, 2011
    Date of Patent: July 23, 2013
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, Douglas R. Miller, Joseph D. Ratterman, Brian E. Smith
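The eager-send scheme in the abstract above can be sketched in miniature (all names, the slot size, and the flat-bytearray "shared memory" are illustrative, not taken from the patent): a sender deposits a message into a receiver's preallocated buffer without checking whether the receiver has initialized, and the receiver later retrieves it via its buffer offset.

```python
# Sketch of the eager-send scheme: a fixed shared region is carved into
# one message buffer per (possibly uninitialized) process, so a sender
# can store a message for a process that has not yet started up.
# All names and sizes are illustrative.

SLOT_SIZE = 64

class SharedRegion:
    def __init__(self, num_procs):
        # One fixed-size message buffer per process rank.
        self.mem = bytearray(num_procs * SLOT_SIZE)

    def slot(self, rank):
        # The "pointer" to a process's message buffer: its byte offset.
        return rank * SLOT_SIZE

    def send(self, dest_rank, payload):
        # Store the message without checking whether dest has initialized.
        off = self.slot(dest_rank)
        self.mem[off] = len(payload)
        self.mem[off + 1 : off + 1 + len(payload)] = payload

    def recv(self, rank):
        # On initialization, a process retrieves its buffer offset and
        # reads any message deposited there earlier.
        off = self.slot(rank)
        n = self.mem[off]
        return bytes(self.mem[off + 1 : off + 1 + n])

region = SharedRegion(num_procs=4)
region.send(2, b"hello")     # process 2 has not yet "initialized"
print(region.recv(2))        # process 2 initializes and reads the message
```

The key property mirrored here is that `send` never synchronizes with the receiver; availability of the preallocated buffer makes the handshake unnecessary.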
  • Patent number: 8490110
    Abstract: Data processing on a network on chip (‘NOC’) that includes integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, with each IP block adapted to a router through a memory communications controller and a network interface controller, where each memory communications controller controls communications between an IP block and memory, and each network interface controller controls inter-IP block communications through routers, with each IP block also adapted to the network by a low latency, high bandwidth application messaging interconnect comprising an inbox and an outbox.
    Type: Grant
    Filed: February 15, 2008
    Date of Patent: July 16, 2013
    Assignee: International Business Machines Corporation
    Inventors: Russell D. Hoover, Jon K. Kriegel, Eric O. Mejdrich, Robert A. Shearer
  • Patent number: 8490111
    Abstract: The invention provides hardware logic based techniques for a set of processing tasks of a software program to efficiently communicate with each other while running in parallel on an array of processing cores of a multi-core data processing system dynamically shared among a group of software programs. These inter-task communication techniques comprise, by one or more tasks of the set, writing their inter-task communication information to a memory segment of other tasks of the set at the system memories, as well as reading inter-task communication information from their own segments at the system memories. The invention facilitates efficient inter-task communication on a multi-core fabric, without any of the communicating tasks needing to know whether, and at which core in the fabric, any other task is executing at any given time. The invention thus enables flexibly and efficiently running any task of any program at any core of the fabric.
    Type: Grant
    Filed: October 10, 2011
    Date of Patent: July 16, 2013
    Assignee: Throughputer, Inc.
    Inventor: Mark Henrik Sandstrom
  • Publication number: 20130179898
    Abstract: A system comprises a first storage resource, a second storage resource, a hosted application, a proxy engine, and a proxy interface. The first storage resource stores first data and uses a first program interface for communicating the first data. The second storage resource stores second data and uses a second program interface for communicating the second data. The hosted application uses application data, the first data and/or the second data including the application data. The proxy engine directs application data requests by the hosted application to the first storage resource or to the second storage resource. The proxy interface uses the first program interface to communicate with the first storage resource and the second program interface to communicate with the second storage resource to respond to the application data requests.
    Type: Application
    Filed: February 28, 2013
    Publication date: July 11, 2013
    Applicant: Zettar, Inc.
  • Patent number: 8479218
    Abstract: Various embodiments of a system and method for automatically arranging or positioning objects in a block diagram of a graphical program are described. A graphical programming development environment or other software application may be operable to automatically analyze a block diagram of a graphical program, e.g., in order to determine objects present in the block diagram, as well as their initial positions within the block diagram. The graphical programming development environment may then automatically re-position various ones of the objects in the block diagram. In various embodiments, the objects may be re-positioned so as to better organize the block diagram or enable a user to more easily view or understand the block diagram.
    Type: Grant
    Filed: July 9, 2007
    Date of Patent: July 2, 2013
    Assignee: National Instruments Corporation
    Inventors: Anand Kodaganur, Arjun J. Singri, Ashwin Prasad, Karthik S. Murthy, Craig Smith, Bharath Dev
  • Patent number: 8479202
    Abstract: A method and system for self-managing an application program in a computing environment is provided. One implementation involves spawning a primary application for execution in the computing environment; the primary application monitoring the status of the primary application and the computing environment resources while executing; and, upon detecting a first status threshold, the primary application spawning a secondary application in the computing environment, wherein the secondary application comprises a lower functionality version of the primary application, and the primary application terminating.
    Type: Grant
    Filed: February 6, 2009
    Date of Patent: July 2, 2013
    Assignee: International Business Machines Corporation
    Inventors: Natalie S. Hogan, Andrew J. E. Menadue, Thomas van der Veen
  • Publication number: 20130167155
    Abstract: A server supporting the implementation of virtual machines includes a local memory used for caching, such as a solid state device drive. During I/O intensive processes, such as a boot storm, a “content aware” cache filter component of the server first accesses a cache structure in a content cache device to determine whether data blocks have been stored in the cache structure prior to requesting the data blocks from a networked disk array via a standard I/O stack of the hypervisor.
    Type: Application
    Filed: November 13, 2012
    Publication date: June 27, 2013
    Applicant: VMWARE, INC.
  • Patent number: 8473906
    Abstract: The present invention relates generally to computer programming, and more particularly to, systems and methods for parallel distributed programming. Generally, a parallel distributed program is configured to operate across multiple processors and multiple memories. In one aspect of the invention, a parallel distributed program includes a distributed shared variable located across the multiple memories and distributed programs capable of operating across multiple processors.
    Type: Grant
    Filed: April 1, 2010
    Date of Patent: June 25, 2013
    Assignee: The Regents of the University of California
    Inventors: Lei Pan, Lubomir R. Bic, Michael B. Dillencourt
  • Patent number: 8473966
    Abstract: An inter-processor communication approach is applicable to a message passing pattern called iterative exchange. In such patterns, two processors exchange messages, then perform a computation, and then this process is repeated. If two sets of send and receive buffers are used, then it is possible to guarantee that a receive buffer on the receiver's side is always available to receive the message. A message passing system controls which buffers are used for sending and receiving. These buffers are registered beforehand, thereby avoiding repeated registration at the time messages are sent. The sender is initially informed of all the possible receive buffers that the receiver will use, and the sender then uses these receive buffers alternately. Examples of this approach can avoid the use of multiple-step rendezvous protocols, memory copies, and memory registrations when a message needs to be sent.
    Type: Grant
    Filed: October 1, 2007
    Date of Patent: June 25, 2013
    Assignee: D.E. Shaw Research, LLC
    Inventor: Edmond Chow
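The alternating-buffer idea above can be sketched as a toy in-process model (all names illustrative; the patent concerns pre-registered receive buffers for message passing between processors): because the sender switches target buffers by iteration parity, a free receive buffer is always available and no rendezvous handshake is needed.

```python
# Toy model of double-buffered iterative exchange: the receiver
# registers two receive buffers once, up front; the sender alternates
# between them by iteration parity, so the buffer being written is
# never the one the receiver is still reading.

class Peer:
    def __init__(self):
        self.recv_bufs = [None, None]   # registered once, reused forever

class Channel:
    def __init__(self, receiver):
        self.receiver = receiver
        self.iteration = 0

    def send(self, payload):
        # Pick the buffer for this iteration; no rendezvous is needed
        # because the target buffer is guaranteed to be free.
        idx = self.iteration % 2
        self.receiver.recv_bufs[idx] = payload
        self.iteration += 1
        return idx

ch = Channel(Peer())
print(ch.send(b"step-0"))   # → 0
print(ch.send(b"step-1"))   # → 1
print(ch.send(b"step-2"))   # → 0 (buffer 0 reused)
```

Registering both buffers up front is what lets a real implementation skip per-message memory registration and memory copies.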
  • Patent number: 8468541
    Abstract: A file transfer manager for managing file transfers using the sendfile operation. The sendfile operation is optimized to minimize the system resources necessary to complete the file transfer. The sendfile operation decreases the resources required during idle times by sharing a thread with other idle sendfile operations. The sendfile operation is then assigned a worker thread when further data is ready to be transferred.
    Type: Grant
    Filed: August 28, 2007
    Date of Patent: June 18, 2013
    Assignee: Red Hat, Inc.
    Inventor: Mladen Turk
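The zero-copy primitive the manager builds on is the kernel's sendfile(2), exposed in Python as `os.sendfile`. A minimal, Linux-oriented sketch (the idle-thread sharing and worker-thread assignment described in the abstract are not shown):

```python
# Minimal sendfile(2) usage: the kernel moves file bytes straight to a
# socket descriptor without an intermediate userspace copy.
import os
import socket
import tempfile

payload = b"hello from sendfile"
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(payload)
    path = f.name

a, b = socket.socketpair()          # stand-in for a client connection
with open(path, "rb") as src:
    # sendfile(out_fd, in_fd, offset, count) -> bytes actually sent
    sent = os.sendfile(a.fileno(), src.fileno(), 0, len(payload))
data = b.recv(64)

a.close(); b.close(); os.unlink(path)
print(sent, data)
```

`os.sendfile` availability and out-fd restrictions vary by platform; the call shown targets Linux.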
  • Patent number: 8468543
    Abstract: A computer system includes a DRM client system in which a plurality of DRM clients are installed, comprising: a virtual OS managing unit that separates a kernel of an actual operating system installed in the DRM client system to generate and manage a virtual operating system; a branch process information managing unit that manages branch process information according to the type of document that a user attempts to read; and an application program branching unit that analyzes the branch process information and executes a DRM client agent for managing the DRM client in an actual OS region or a virtual OS region, according to the type of document that the user attempts to read, to allow the user to read the document.
    Type: Grant
    Filed: January 25, 2008
    Date of Patent: June 18, 2013
    Assignee: Fasoo.Com.Co.Ltd.
    Inventors: Young Sik Ryu, Kyoung Ho Jeon
  • Patent number: 8458724
    Abstract: An automatic mutual exclusion computer programming system is disclosed which allows a programmer to produce concurrent programming code that is synchronized by default without the need to write any synchronization code. The programmer creates asynchronous methods which are not permitted to make changes to shared memory that they cannot reverse, and which can execute concurrently with other asynchronous methods. Changes to shared memory are committed if no other thread has accessed shared memory while the asynchronous method executed. Changes are reversed and the asynchronous method is re-executed if another thread has made changes to shared memory. The resulting program executes in a serialized order. A blocking system method is disclosed which causes the asynchronous method to re-execute until the blocking method's predicate results in an appropriate value. A yield system call is disclosed which divides asynchronous methods into atomic fragments.
    Type: Grant
    Filed: June 15, 2007
    Date of Patent: June 4, 2013
    Assignee: Microsoft Corporation
    Inventors: Andrew David Birrell, Michael Acheson Isard
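The commit-or-re-execute behavior can be sketched with a version counter (a simplified, lock-based simulation with illustrative names; the patented system operates at the language and runtime level rather than as a library):

```python
# Sketch of optimistic execution: a method runs against a private
# snapshot of shared memory; its changes commit only if no other
# thread committed in the meantime; otherwise they are discarded and
# the method is re-executed. All names are illustrative.
import threading

class SharedStore:
    def __init__(self):
        self.data = {}
        self.version = 0
        self.lock = threading.Lock()

def run_atomic(store, method):
    while True:
        with store.lock:
            start_version = store.version
            snapshot = dict(store.data)     # private working copy
        method(snapshot)                    # mutates only the copy
        with store.lock:
            if store.version == start_version:
                store.data = snapshot       # commit atomically
                store.version += 1
                return
        # Another thread committed first: changes are discarded
        # and the method re-executes, as in the abstract.

store = SharedStore()
run_atomic(store, lambda d: d.__setitem__("x", 1))
run_atomic(store, lambda d: d.__setitem__("x", d["x"] + 41))
print(store.data["x"])   # → 42
```

The serialized order the abstract mentions falls out of the commit check: interleaved methods behave as if they ran one after another.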
  • Patent number: 8453159
    Abstract: A system and method for monitoring information events partitions sets of information and processing steps into one or more workspaces. The workspaces include sharable, portable specifications for implementing event monitoring by a plurality of users or computer systems. Workspaces may be bound to computing resources to establish controls between the computing resources and the workspaces.
    Type: Grant
    Filed: May 30, 2008
    Date of Patent: May 28, 2013
    Assignee: Informatica Corporation
    Inventors: Michael S. Appelbaum, Jeffrey H. Garvett, Christopher Bradley, Karl Ginter, Michael Bartman
  • Patent number: 8453161
    Abstract: This disclosure describes a method and system that may enable fast, hardware-assisted, producer-consumer style communication of values between threads. The method, in one aspect, uses a dedicated hardware buffer as an intermediary storage for transferring values from registers in one thread to registers in another thread. The method may provide a generic, programmable solution that can transfer any subset of register values between threads in any given order, where the source and target registers may or may not be correlated. The method also may allow for determinate access times, since it completely bypasses the memory hierarchy. Also, the method is designed to be lightweight, focusing on communication, and keeping synchronization facilities orthogonal to the communication mechanism. It may be used by a helper thread that performs data prefetching for an application thread, for example, to initialize the upward-exposed reads in the address computation slice of the helper thread code.
    Type: Grant
    Filed: May 25, 2010
    Date of Patent: May 28, 2013
    Assignee: International Business Machines Corporation
    Inventors: Michael K. Gschwind, John K. O'Brien, Valentina Salapura, Zehra N. Sura
  • Patent number: 8453160
    Abstract: Methods and systems are provided for authorizing a command of an integrated modular environment in which a plurality of partitions controls actions of a plurality of effectors. A first identifier, a second identifier, and a third identifier are determined. The first identifier identifies a first partition of the plurality of partitions from which the command originated. The second identifier identifies a first effector of the plurality of effectors for which the command is intended. The third identifier identifies a second partition of the plurality of partitions that is responsible for controlling the first effector. The first identifier and the third identifier are compared to determine whether the first partition is the same as the second partition for authorization of the command.
    Type: Grant
    Filed: March 11, 2010
    Date of Patent: May 28, 2013
    Assignee: Honeywell International Inc.
    Inventors: Dean E. Sunderland, Terry J. Ahrendt, Tim Moore
  • Patent number: 8447580
    Abstract: Methods and systems for modeling a multiprocessor system in a graphical modeling environment are disclosed. The multiprocessor system may include multiple processing units that carry out one or more processes, such as programs and sets of instructions. Each of the processing units may be represented as a node at the top level of the model for the multiprocessor system. The nodes representing the processing units of the multiprocessor system may be interconnected to each other via a communication channel. The nodes may include at least one read element for reading data from the communication channel into the nodes, and at least one write element for writing data from the nodes into the communication channel. Each processing unit can communicate with the other processing units via the communication channel using the read and write elements.
    Type: Grant
    Filed: May 31, 2005
    Date of Patent: May 21, 2013
    Assignee: The MathWorks, Inc.
    Inventor: John Ciolfi
  • Patent number: 8448182
    Abstract: The present invention enables messaging service users to programmatically and administratively control the system's behavior in the event of an external resource failure. A destination pause/resume feature enables the user to “pause” and “resume” the production and consumption of messages that are newly produced or produced as a result of “in flight work” completion, on a given destination or on all the destinations hosted by a single messaging service server. This description is not intended to be a complete description of, or limit the scope of, the invention. Other features, aspects, and objects of the invention can be obtained from a review of the specification, the figures, and the claims.
    Type: Grant
    Filed: November 22, 2005
    Date of Patent: May 21, 2013
    Assignee: Oracle International Corporation
    Inventor: Kathiravan Sengodan
  • Publication number: 20130125135
    Abstract: Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.
    Type: Application
    Filed: December 10, 2012
    Publication date: May 16, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
  • Patent number: 8443375
    Abstract: A method for passing data from a first processing thread to a second processing thread, wherein the first processing thread produces data to be processed by the second processing thread. The data from the first processing thread may be inserted into objects that in turn are inserted into a queue of objects to be processed by the second thread. The queue may be a circular array, wherein the array includes a pointer to a head and a pointer to a tail, wherein only the first processing thread modifies the tail pointer and only the second processing thread modifies the head pointer.
    Type: Grant
    Filed: December 14, 2009
    Date of Patent: May 14, 2013
    Assignee: Verisign, Inc.
    Inventors: Roberto Rodrigues, Suresh Bhogavilli
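The queue described above is a classic single-producer/single-consumer ring buffer; a minimal sketch (illustrative and in-process, without the memory-ordering details a native implementation needs):

```python
# Sketch of the single-producer/single-consumer circular queue: only
# the producer advances the tail and only the consumer advances the
# head, so the two threads never contend on the same pointer.

class SpscQueue:
    def __init__(self, capacity):
        self.buf = [None] * (capacity + 1)  # one slot kept empty
        self.head = 0   # modified only by the consumer
        self.tail = 0   # modified only by the producer

    def push(self, item):
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:
            return False            # queue full
        self.buf[self.tail] = item
        self.tail = nxt             # publish after writing the slot
        return True

    def pop(self):
        if self.head == self.tail:
            return None             # queue empty
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item

q = SpscQueue(2)
q.push("a"); q.push("b")
print(q.push("c"))                  # → False (full)
print(q.pop(), q.pop(), q.pop())    # → a b None
```

Keeping one slot empty distinguishes the full state from the empty state without a shared counter, which is what allows each pointer to have a single writer.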
  • Patent number: 8443376
    Abstract: Techniques for configuring a hypervisor scheduler to make use of cache topology of processors and physical memory distances between NUMA nodes when making scheduling decisions. In the same or other embodiments the hypervisor scheduler can be configured to optimize the scheduling of latency sensitive workloads. In the same or other embodiments a hypervisor can be configured to expose a virtual cache topology to a guest operating system running in a virtual machine.
    Type: Grant
    Filed: June 1, 2010
    Date of Patent: May 14, 2013
    Assignee: Microsoft Corporation
    Inventors: Aditya Bhandari, Dmitry Meshchaninov, Shuvabrata Ganguly
  • Publication number: 20130117761
    Abstract: Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.
    Type: Application
    Filed: November 7, 2011
    Publication date: May 9, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Charles J. Archer, Michael A. Blocksome, Douglas R. Miller, Joseph D. Ratterman, Brian E. Smith
  • Patent number: 8438341
    Abstract: A method for unidirectional communication between tasks includes providing a first task having access to an amount of virtual memory, blocking a communication channel portion of said first task's virtual memory, such that the first task cannot access said portion, providing a second task, having access to an amount of virtual memory equivalent to the first task's virtual memory, wherein a communication channel portion of the second task's virtual memory corresponding to the blocked portion of the first task's virtual memory is marked as writable, transferring the communication channel memory of the second task to the first task, and unblocking the communication channel memory of the first task.
    Type: Grant
    Filed: June 16, 2010
    Date of Patent: May 7, 2013
    Assignee: International Business Machines Corporation
    Inventors: Ulrich A. Finkler, Steven N. Hirsch, Harold E. Reindel
  • Patent number: 8434093
    Abstract: A method of virtualizing an application to execute on a plurality of operating systems without installation. The method includes creating an input configuration file, or template, for each operating system. The templates each include a collection of configurations that were made by the application during installation on a computing device executing the operating system. The templates are combined into a single application template having a layer including the collection of configurations for each operating system. The collection of configurations includes files and registry entries, and also identifies and configures environmental variables, systems, and the like. Files in the collection of configurations, and references to those files, may be replaced with references to files stored on installation media. The application template is used to build an executable of the virtualized application.
    Type: Grant
    Filed: August 7, 2008
    Date of Patent: April 30, 2013
    Assignee: Code Systems Corporation
    Inventors: Stefan I. Larimore, C. Michael Murphey, Kenji C. Obata
  • Patent number: 8416235
    Abstract: A software application and an operating system may run on a computer, which includes a graphics card and a video display, where the graphics card is operable to render images to the video display, and the operating system includes a universal application programming interface (API) which supports hardware layers on graphics cards. The operating system may be operable to receive draw events via the universal API; determine what hardware layers are available on the graphics card, and what their parameters are; and respond to draw requests from the software application by rendering the draw requests selectively to any of the available hardware layers on the graphics card.
    Type: Grant
    Filed: December 14, 2011
    Date of Patent: April 9, 2013
    Assignee: QNX Software Systems Limited
    Inventors: Darrin Fry, Angela Lin, David Donohoe
  • Patent number: 8407717
    Abstract: The present invention relates to a parallel processing method for a dual operating system, comprising building a main operating system and a sub operating system on an operating system kernel; executing a first application program in the main operating system and executing a second application program in the sub operating system; the operating system kernel transmitting an instruction or command received from a piece of hardware to the first application program; the first application program converting the instruction or command into program codes to be executed by the second application program; the first application program transmitting the program codes to the second application program; the second application program executing the program codes and saving the executed result in a memory or a file system; the first application program reading the executed result of the second application program from the memory or the file system; and the first application program transmitting the read result to the operat
    Type: Grant
    Filed: March 11, 2010
    Date of Patent: March 26, 2013
    Assignees: Insyde Software Corporation, Acer Incorporated
    Inventor: Wen Chih Ho
  • Patent number: 8407723
    Abstract: A computing system and method in which a specification of user-defined business logic is provided as JAVA program instructions (or in another programming language that does not natively provide for specification of full transactionality) to accomplish a fully transactional application, including executed managed objects. The managed objects are persisted in a shared memory of the computing system, such that the scope of the objects is global to the fully transactional application. Furthermore, a catalog of the managed objects is maintained. A query interface is provided for querying the managed objects: it receives a query from an application, processes the catalog, and provides a result indication of at least one of the managed objects back to the querying application. Thus, for example, the application may process the managed objects that are indicated in the query result.
    Type: Grant
    Filed: October 8, 2009
    Date of Patent: March 26, 2013
    Assignee: Tibco Software, Inc.
    Inventors: Otto Lind, Jonathon C. Pile, Ramiro Sarmiento, Daniel J. Sifter, David Stone, Xiguang Zang, Mark Phillips
  • Patent number: 8407716
    Abstract: Example apparatus and methods to access information associated with a process control system are disclosed. A disclosed example method involves receiving a first user-defined parameter name to reference a first datum value in a first data source. A first one of a plurality of data source interfaces is enabled to access the first datum value in the first data source. The example method also involves enabling referencing the first datum value in the first data source based on the first user-defined parameter name. In addition, data source interface software is then generated to access the first datum value in the first data source in response to receiving a first data access request including the first user-defined parameter name.
    Type: Grant
    Filed: May 31, 2007
    Date of Patent: March 26, 2013
    Assignee: Fisher-Rosemount Systems, Inc.
    Inventors: Mark J. Nixon, Terry Blevins, John Michael Lucas, Ken Beoughter
  • Patent number: 8397241
    Abstract: Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between CPU and GPU. This includes sharing objects that may have virtual functions. Embodiments thus ensure the right virtual function gets invoked on the CPU or the GPU if a virtual function is called by either the CPU or GPU.
    Type: Grant
    Filed: December 30, 2008
    Date of Patent: March 12, 2013
    Assignee: Intel Corporation
    Inventors: Zhou Xiaocheng, Shoumeng Yan, Ying Gao, Hu Chen, Peinan Zhang, Mohan Rajagopalan, Avi Mendelson, Bratin Saha
  • Publication number: 20130061240
    Abstract: A computer system may comprise a computer platform and input-output devices. The computer platform may include a plurality of heterogeneous processors comprising a central processing unit (CPU) and a graphics processing unit (GPU), for example. The GPU may be coupled to a GPU compiler and a GPU linker/loader, and the CPU may be coupled to a CPU compiler and a CPU linker/loader. The user may create a shared object in an object oriented language, and the shared object may include virtual functions. The shared object may be fine grain partitioned between the heterogeneous processors. The GPU compiler may allocate the shared object to the CPU and may create a first and a second enabling path to allow the GPU to invoke virtual functions of the shared object. Thus, the shared object that may include virtual functions may be shared seamlessly between the CPU and the GPU.
    Type: Application
    Filed: October 30, 2009
    Publication date: March 7, 2013
    Inventors: Shoumeng Yan, Xiaocheng Zhou, Ying Gao, Mohan Rajagopalan, Rajiv Deodhar, David Putzolu, Clark Nelson, Milind Girkar, Robert Geva, Tiger Chen, Sai Luo, Stephen Junkins, Bratin Saha, Ravi Narayanaswamy, Patrick Xi
  • Publication number: 20130061241
    Abstract: Managing shared data objects to share data between computer processes, including a method for executing a plurality of independent processes on an application server, the processes including a first process and a second process. A shared memory utilized by the plurality of independent processes is provided. A single copy of the data and metadata are stored in the shared memory. The metadata includes an address of the data. The first process initiates the storing of the data in the shared memory. An address of the metadata is transferred from the first process to the second process to notify the second process about the data. The second process determines the address of the shared memory by reading the metadata. The data in the shared memory is accessed by the second process.
    Type: Application
    Filed: November 5, 2012
    Publication date: March 7, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
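The single-copy-plus-metadata pattern in the abstract above can be sketched in a few lines of Python: the first process stores the data once and writes a small metadata record holding the data's address (offset and length); only the metadata location is handed to the second process, which reads the metadata and then the shared data. This is a minimal single-process illustration using `multiprocessing.shared_memory`; the layout and function names are assumptions, not the patent's implementation.

```python
import struct
from multiprocessing import shared_memory

# Hypothetical layout: metadata at a known offset holds (data_offset, data_length);
# the payload lives elsewhere in the same segment.
META_FMT = "II"                       # two unsigned ints: offset, length
META_SIZE = struct.calcsize(META_FMT)

def producer_store(shm, payload, meta_off=0):
    """First process: store the single copy of the data, then the metadata."""
    data_off = meta_off + META_SIZE
    shm.buf[data_off:data_off + len(payload)] = payload
    shm.buf[meta_off:meta_off + META_SIZE] = struct.pack(META_FMT, data_off, len(payload))
    return meta_off                   # only this address is handed to process two

def consumer_load(shm, meta_off):
    """Second process: read the metadata, then the shared copy of the data."""
    data_off, length = struct.unpack(META_FMT, bytes(shm.buf[meta_off:meta_off + META_SIZE]))
    return bytes(shm.buf[data_off:data_off + length])

shm = shared_memory.SharedMemory(create=True, size=4096)
meta = producer_store(shm, b"hello shared world")
result = consumer_load(shm, meta)
print(result)                         # b'hello shared world'
shm.close()
shm.unlink()
```

In the patented design the two roles would run as independent processes on an application server; here both run in one process purely to keep the sketch self-contained.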
  • Patent number: 8387067
    Abstract: A message tracking and verifying system for verifying the correctness of messages being passed may comprise a tracking module for tracking a request message and a verifying module for verifying a response message. The tracking module may be configured to store a calculated source address and a calculated response address range. The verifying module may be configured to obtain an actual source address from the response message and an actual response address range for the response message. The correctness of the response message is determined based on the comparison of the calculated source address with the actual source address and the comparison of the calculated response address range with the actual response address range.
    Type: Grant
    Filed: March 14, 2008
    Date of Patent: February 26, 2013
    Assignee: LSI Corporation
    Inventor: Babu H. Prakash
  • Patent number: 8386719
    Abstract: Provided are a method and apparatus for controlling a shared memory, and a method of accessing the shared memory. The apparatus includes a processing unit configured to process an application program, a user program unit configured to execute a program written by a user based on the application program of the processing unit, a shared memory unit connected to each of the processing unit and the user program unit through a system bus and configured to store data interchanged between the processing unit and the user program unit, and a control unit configured to relay a control signal indicating whether the system bus, by which the data is interchanged between the processing unit and the user program unit, is occupied, and control connection of each of the processing unit and the user program unit with the system bus in response to the control signal.
    Type: Grant
    Filed: August 11, 2009
    Date of Patent: February 26, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Byung Bog Lee, Myung Nam Bae, Byeong Cheol Choi, In Hwan Lee, Nae Soo Kim
  • Publication number: 20130047167
    Abstract: An efficient mechanism for terminating applications of a data processing system is described herein. In one embodiment, in response to a request for exiting from an operating environment of a data processing system, an operating system examines an operating state associated with an application running within the operating environment, where the operating state is stored at a predetermined memory location shared between the operating system and the application. The operating system immediately terminates the application if the operating state associated with the application indicates that the application is safe for a sudden termination. Otherwise, the operating system defers terminating the application if the operating state associated with the application indicates that the application is unsafe for the sudden termination. Other methods and apparatuses are also described.
    Type: Application
    Filed: October 19, 2012
    Publication date: February 21, 2013
    Applicant: Apple Inc.
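The sudden-termination idea above reduces to a one-word operating state at a memory location shared by the operating system and the application. A toy sketch, simulated with `multiprocessing.Value` (all names are illustrative, not Apple's API):

```python
from multiprocessing import Value

# Single shared word at a location known to both "OS" and "application".
SAFE, UNSAFE = 1, 0

def app_begin_unsaved_work(state):
    state.value = UNSAFE            # sudden termination would lose data now

def app_flush_complete(state):
    state.value = SAFE              # everything persisted; kill any time

def os_try_terminate(state):
    # The OS reads the shared state and terminates immediately only when safe.
    return "terminated" if state.value == SAFE else "deferred"

state = Value("i", SAFE)
app_begin_unsaved_work(state)
first = os_try_terminate(state)     # 'deferred'
app_flush_complete(state)
second = os_try_terminate(state)    # 'terminated'
print(first, second)
```

Because the flag lives in shared memory, the OS never has to ask the application anything at exit time; it just reads the word and decides.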
  • Patent number: 8375398
    Abstract: An electronic system comprises a memory, a parser module, and a device driver. A plurality of applications and a document are stored in a user space of the memory, the document storing configuration parameters. The parser module parses the document to retrieve the parameters in response to invocation from at least one application. The device driver creates a data structure for the parameters in the kernel space of the memory, thereby allowing a plurality of programs to execute different functions of the system by commonly utilizing the parameters through the device driver.
    Type: Grant
    Filed: July 22, 2009
    Date of Patent: February 12, 2013
    Assignees: Ambit Microsystems (Shanghai) Ltd., Hon Hai Precision Industry Co., Ltd.
    Inventor: Yao-Hong Du
  • Publication number: 20130036427
    Abstract: Embodiments of the invention relate to message queuing. In one embodiment, a request from an application for retrieving a message from a queue is received. The queue is stored across multiple nodes of a distributed storage system. A preference with respect to message order and message duplication associated with the queue is identified. A message sequence index associated with the queue is sampled based on the preference that has been identified. The message is selected in response to the sampling. The message that has been selected is made unavailable to other applications for a given interval of time, while maintaining the message in the queue. The message is sent to the application.
    Type: Application
    Filed: August 3, 2011
    Publication date: February 7, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Han CHEN, Minkyong KIM, Hui LEI, Fan YE
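The lease-while-keeping behavior described above (a retrieved message becomes invisible to other consumers for an interval, yet stays in the queue until acknowledged) can be sketched with a toy class. The class and method names are illustrative, and the sketch omits the distributed storage and the preference-driven sampling of the sequence index, scanning in order instead:

```python
import time

class LeasedQueue:
    """Toy queue with a visibility timeout: a retrieved message stays in
    the queue but is hidden from other consumers until it is deleted or
    the lease expires."""

    def __init__(self, lease=30.0):
        self.lease = lease
        self.messages = {}              # id -> (payload, visible_at)
        self.next_id = 0

    def put(self, payload):
        self.messages[self.next_id] = (payload, 0.0)
        self.next_id += 1

    def get(self, now=None):
        now = time.monotonic() if now is None else now
        for mid, (payload, visible_at) in sorted(self.messages.items()):
            if visible_at <= now:
                # Hide for the lease interval while keeping it in the queue.
                self.messages[mid] = (payload, now + self.lease)
                return mid, payload
        return None                     # nothing currently visible

    def delete(self, mid):
        self.messages.pop(mid, None)    # acknowledge: remove for good

q = LeasedQueue(lease=30.0)
q.put("job-1")
first = q.get(now=0.0)      # leased to this consumer
second = q.get(now=1.0)     # None: hidden during the lease
q.delete(first[0])          # acknowledge after successful processing
print(first, second)
```

If the consumer crashes instead of deleting, the lease expires and the message becomes visible again, which is the crash-safety property the keep-until-acknowledged design buys.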
  • Patent number: 8370448
    Abstract: A method is described that involves, at a worker node, receiving a notification of a request from a queue. The notification contains a handle for a shared memory connection. The queue is implemented with a second shared memory connection. The method involves receiving the request from the shared memory through the connection. The method also involves generating a response to the request at the worker node and sending the response over the shared memory connection.
    Type: Grant
    Filed: December 28, 2004
    Date of Patent: February 5, 2013
    Assignee: SAP AG
    Inventor: Galin Galchev
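The flow above, where the queue carries only a notification containing a handle while the request and response bodies travel through a shared memory connection, can be sketched in Python. This is a single-process illustration using `multiprocessing.shared_memory` with the segment name standing in for the handle; all field names are assumptions:

```python
import queue
from multiprocessing import shared_memory

notify_q = queue.Queue()

# Requester: write the request into shared memory, enqueue only the handle.
shm = shared_memory.SharedMemory(create=True, size=256)
req = b"GET /status"
shm.buf[:len(req)] = req
notify_q.put({"handle": shm.name, "length": len(req)})

# Worker node: attach via the handle from the notification, read the
# request, and send the response back over the same connection.
note = notify_q.get()
conn = shared_memory.SharedMemory(name=note["handle"])
assert bytes(conn.buf[:note["length"]]) == b"GET /status"
resp = b"200 OK"
conn.buf[:len(resp)] = resp
conn.close()

result = bytes(shm.buf[:len(resp)])   # requester sees the response in place
print(result)                         # b'200 OK'
shm.close()
shm.unlink()
```

The point of the design is that the (small) queue never copies request or response payloads; it only moves a name and a length.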
  • Patent number: 8370855
    Abstract: A mechanism is provided for managing a process-to-process intra-cluster communication request. A call from a first application is received in a first operating system in a first data processing system. The first operating system passes the call from the first operating system to a first host fabric interface controller in the first data processing system without processing the call. The first host fabric interface controller processes the call without intervention by the first operating system to determine a second data processing system in the plurality of data processing systems with which the call is associated. The first host fabric interface controller initiates an intra-cluster connection to a second host fabric interface controller in the second data processing system. The first host fabric interface controller then transfers the call to the second host fabric interface controller in the second data processing system via the intra-cluster connection.
    Type: Grant
    Filed: December 23, 2008
    Date of Patent: February 5, 2013
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Piyush Chaudhary
  • Patent number: 8359603
    Abstract: Described are techniques for intermodule communication between a first code module and a second code module each executing in user space. A shared memory portion includes storage for one or more commands and is accessible to the first and the second code modules. A first first-in-first-out (FIFO) structure is used for sending a location in the shared memory portion from the first code module to the second code module. A second FIFO structure is used for sending a location in the shared memory portion from the second code module to the first code module. The first code module stores command data for a command at a first location in the shared memory portion. A command is issued from the first code module to the second code module by sending the first location from the first code module to the second code module using the first FIFO structure.
    Type: Grant
    Filed: March 28, 2008
    Date of Patent: January 22, 2013
    Assignee: EMC Corporation
    Inventors: Peter J. McCann, Christopher M. Gould
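The two-FIFO protocol above can be sketched compactly: command data lives in a shared region of fixed-size slots, one FIFO carries slot locations from module A to module B, and the other carries freed locations back. A minimal single-process sketch, with `queue.Queue` standing in for the FIFOs and a `bytearray` for the shared memory portion (slot size and command format are assumptions):

```python
import queue

SLOT = 64
shared = bytearray(SLOT * 8)          # the shared-memory command region
send_fifo = queue.Queue()             # module A -> B: command locations
done_fifo = queue.Queue()             # module B -> A: freed locations

def issue_command(slot_idx, cmd):
    off = slot_idx * SLOT
    shared[off:off + len(cmd)] = cmd  # command data stays in shared memory
    send_fifo.put(off)                # only its location crosses the FIFO

def service_one():
    off = send_fifo.get()
    cmd = bytes(shared[off:off + SLOT]).rstrip(b"\x00")
    done_fifo.put(off)                # return the slot to the issuer
    return cmd

issue_command(0, b"READ block=7")
served = service_one()
freed = done_fifo.get()
print(served, freed)                  # b'READ block=7' 0
```

As in the patent, the FIFOs transfer only locations, so command payloads are written and read exactly once, in place.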
  • Publication number: 20130014125
    Abstract: A method for managing multimodal interactions can include the step of registering a multitude of modality components with a modality component server, wherein each modality component handles an interface modality for an application. The modality component can be connected to a device. A user interaction can be conveyed from the device to the modality component for processing. Results from the user interaction can be placed in a shared memory area of the modality component server.
    Type: Application
    Filed: September 14, 2012
    Publication date: January 10, 2013
    Applicant: Nuance Communications, Inc.
    Inventors: Akram Boughannam, Gerald McCobb
  • Patent number: 8347311
    Abstract: An operation method of a mobile application model is provided. An application model is composed that separates applications into individual views and executes the individual views as independent processes. Only the code for the running view is loaded into memory, and the corresponding application is executed in the composed application model when switching to the running view for execution of that application.
    Type: Grant
    Filed: January 4, 2010
    Date of Patent: January 1, 2013
    Assignee: Samsung Electronics Co., Ltd
    Inventors: Seok-Jae Jeong, Jin-Hee Choi
  • Publication number: 20120331480
    Abstract: In embodiments of a programming interface for data communications, a request queue and a completion queue can be allocated from a user-mode virtual memory buffer that corresponds to an application. The request queue and the completion queue can be pinned to physical memory and then mapped to kernel-mode system addresses so that they can be accessed by a kernel-mode execution thread. A request can be received from an application for the kernel to handle data in the request queue, and a system call issued to the kernel for the kernel-mode execution thread to handle the request. The kernel-mode execution thread can then handle additional requests from the application without additional system calls being issued.
    Type: Application
    Filed: June 23, 2011
    Publication date: December 27, 2012
    Applicant: Microsoft Corporation
    Inventors: Osman N. Ertugay, Keith E. Horton, Joseph Nievelt
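The amortization described above, where one system call wakes a kernel-mode execution thread that then drains every queued request, can be simulated with two queues and a thread. A sketch under the assumption that a Python thread stands in for the kernel-mode execution thread (the real design additionally pins and remaps the queues, which has no user-space analogue here):

```python
import queue
import threading

request_q = queue.Queue()     # user-mode request queue
completion_q = queue.Queue()  # user-mode completion queue

def kernel_worker():
    # Simulated kernel-mode execution thread: one wake-up, many requests.
    while True:
        req = request_q.get()
        if req is None:                      # shutdown sentinel
            break
        completion_q.put(("done", req))

t = threading.Thread(target=kernel_worker)
t.start()                                    # the one "system call"
for i in range(3):
    request_q.put(i)                         # no per-request system call
request_q.put(None)
t.join()
results = [completion_q.get() for _ in range(3)]
print(results)
```

Once the worker is running, requests cost only a queue operation, which is the saving the pinned, kernel-mapped queues provide in the patented interface.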
  • Patent number: 8341644
    Abstract: A system for relocating a workload partition (WPAR) from a departure logical partition (LPAR) to an arrival LPAR determines an amount of a resource allocated to the relocating WPAR on the departure LPAR and allocates to the relocating WPAR on the arrival LPAR an amount of the resource substantially equal to the amount of the resource allocated to the relocating WPAR on the departure LPAR.
    Type: Grant
    Filed: May 25, 2010
    Date of Patent: December 25, 2012
    Assignee: International Business Machines Corporation
    Inventors: Monica Jean Lemay, Purushotama Padmanabha, Yogesh G. Patgar, Shashidhar Soppin
  • Patent number: 8341643
    Abstract: Shared memory and sockets are used to protect shared resources where multiple operating systems execute concurrently on the same hardware. Rather than using spinlocks for serializing access, when a thread is unable to acquire a shared resource because that resource is already held by another thread, the thread creates a socket with which it will wait to be notified that the shared resource has been released. The sockets may be network sockets or in-memory sockets that are accessible across the multiple operating systems; if sockets are not available, communication technology that provides analogous services between operating systems may be used instead. Optionally, fault tolerance is provided to address socket failures, in which case one or more threads may fall back (at least temporarily) to using spinlocks. A locking service may execute on each operating system to provide a programming interface through which threads can invoke lock operations.
    Type: Grant
    Filed: March 29, 2010
    Date of Patent: December 25, 2012
    Assignee: International Business Machines Corporation
    Inventors: Michael Fulton, Angela Lin, Andrew R. Low, Prashanth K. Nageshappa
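The wait-on-a-socket idea above replaces busy spinning: a thread that cannot acquire a resource creates a socket and blocks on it, and the holder writes one byte on release to wake it. A minimal sketch using `socket.socketpair` within one process; the class and its locking-service role are illustrative, not IBM's implementation:

```python
import socket
import threading

class SocketLock:
    """Lock whose blocked threads park on a socket instead of spinning."""

    def __init__(self):
        self._held = False
        self._guard = threading.Lock()   # protects the two fields below
        self._waiters = []               # write-ends of blocked threads, FIFO

    def acquire(self):
        with self._guard:
            if not self._held:
                self._held = True
                return
            r, w = socket.socketpair()   # this thread's notification channel
            self._waiters.append(w)
        r.recv(1)                        # block here until the holder releases
        r.close()                        # ownership was transferred to us

    def release(self):
        with self._guard:
            if self._waiters:
                w = self._waiters.pop(0)
                w.send(b"\x01")          # wake exactly one waiter
                w.close()                # lock passes on; _held stays True
            else:
                self._held = False

lk = SocketLock()
lk.acquire()
worker_ran = []
t = threading.Thread(target=lambda: (lk.acquire(), worker_ran.append(True), lk.release()))
t.start()
lk.release()                             # one byte on the socket, no spinning
t.join(timeout=5)
print(worker_ran)
```

In the patented scheme the sockets may be network sockets reachable across the concurrently executing operating systems, which is what lets this work where a spinlock in one OS image cannot.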
  • Publication number: 20120324476
    Abstract: A method of pasting data from a source application to a destination application, where the source and destination applications are not the same; the method comprising the steps of: identifying a data type for the data and an appropriate input handler for the data type; converting the data using the appropriate input handler to a standard format based on the data type; in an output module, determining the context of the data in the standard format to identify an appropriate output handler; obtaining a suggested paste operation from a suggestion engine based on the type and context of the data; and instructing a paste operation on the basis of the suggested paste operation.
    Type: Application
    Filed: July 15, 2011
    Publication date: December 20, 2012
    Inventors: Pierre-Jean Reissman, Tadhg Pearson, Jerome Mikaelian, Elona Eski, Guillaume Fournols
  • Patent number: 8336055
    Abstract: A method for determining status of system resources in a computer system includes loading a first operating system into a first memory, wherein the first operating system discovers system resources and reserves a number of the system resources for use of an augmenting operating system, loading the augmenting operating system into a second memory reserved for the augmenting operating system by the first operating system, accessing the first memory from the augmenting operating system and obtaining data, running a process on the augmenting operating system to perform a computation using the data obtained from the first memory, and outputting the results of the computation using the system resources reserved for the augmenting operating system.
    Type: Grant
    Filed: March 12, 2008
    Date of Patent: December 18, 2012
    Assignee: International Business Machines Corporation
    Inventors: Michel Henri Théodore Hack, Stephen John Heisig, Joshua Wilson Knight, III, Gong Su
  • Patent number: RE44210
    Abstract: Super-user privileges are virtualized by designating a virtual super-user for each of a plurality of virtual processes and intercepting system calls for which actual super-user privileges are required, which are nevertheless desirable for a virtual super-user to perform in the context of his or her own virtual process. In one embodiment, a computer operating system includes multiple virtual processes, such as virtual private servers. Each virtual process can be associated with one or more virtual super-users. When an actual process makes a system call that requires actual super-user privileges, the call is intercepted by a system call wrapper.
    Type: Grant
    Filed: May 15, 2009
    Date of Patent: May 7, 2013
    Assignee: Digital Asset Enterprises, L.L.C.
    Inventors: Xun Wilson Huang, Cristian Estan, Jr., Srinivasan Keshav
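The interception scheme above can be illustrated with a toy wrapper: a privileged call is allowed to proceed only when the caller is the virtual super-user of the virtual process that owns the touched resource. Every name below (the resource class, the `chown` stand-in, the uid table) is hypothetical, chosen only to show the wrapper shape, not the patent's mechanism:

```python
class Resource:
    def __init__(self, owner_vp):
        self.owner_vp = owner_vp     # which virtual process owns this resource
        self.owner = None

def real_chown(resource, new_owner):
    # Stand-in for a call that would normally demand actual super-user rights.
    resource.owner = new_owner
    return 0

def make_wrapper(real_call, virtual_root_of):
    """Intercept a privileged call; permit it only when the caller is the
    virtual super-user of the resource's own virtual process."""
    def wrapped(caller_uid, resource, *args):
        if virtual_root_of(resource.owner_vp) == caller_uid:
            return real_call(resource, *args)
        raise PermissionError("not virtual root of this virtual process")
    return wrapped

vroot = {"vps-1": 1001, "vps-2": 2002}   # virtual process -> virtual root uid
chown = make_wrapper(real_chown, vroot.get)

r = Resource(owner_vp="vps-1")
ok = chown(1001, r, "alice")             # virtual root of vps-1: allowed
try:
    chown(2002, r, "bob")                # virtual root of a *different* VP
    crossed = True
except PermissionError:
    crossed = False
print(ok, crossed)
```

The essential property is that "root" becomes a per-virtual-process notion: the same uid that succeeds inside its own virtual private server is an ordinary, unprivileged caller everywhere else.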