Patents Examined by Kimbleann C. Verdi
  • Patent number: 10565100
    Abstract: A hardware-based memory management apparatus and method are provided. The apparatus includes a memory allocation module, a memory reclamation module, and a memory compaction module, implemented in hardware to accelerate the memory manager of an operating system. The method manages memory using the memory allocation, memory reclamation, and memory compaction modules (see the sketch below).
    Type: Grant
    Filed: April 8, 2015
    Date of Patent: February 18, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seyoun Lim, Kwonsik Kim, Kibeom Kim, Hyojeong Lee, Myungsun Kim
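    Sketch: The apparatus is a hardware accelerator, but the division of labour among the three modules can be illustrated in software. Below is a minimal Python model; the MemoryManager class, its method names, and the first-fit policy are illustrative assumptions, not taken from the patent.
      # Software stand-in for the three modules: allocation, reclamation, compaction.
      class MemoryManager:
          def __init__(self, size):
              self.size = size
              self.blocks = {}            # handle -> (offset, length)
              self.next_handle = 0

          def allocate(self, length):
              """Allocation module: first-fit search for a free gap."""
              offset = 0
              for start, blk_len in sorted(self.blocks.values()):
                  if start - offset >= length:
                      break
                  offset = start + blk_len
              if offset + length > self.size:
                  return None             # fragmented or full; compaction may help
              handle, self.next_handle = self.next_handle, self.next_handle + 1
              self.blocks[handle] = (offset, length)
              return handle

          def reclaim(self, handle):
              """Reclamation module: return a block to the free pool."""
              self.blocks.pop(handle, None)

          def compact(self):
              """Compaction module: slide live blocks together to remove gaps."""
              offset = 0
              for handle, (_, length) in sorted(self.blocks.items(), key=lambda kv: kv[1][0]):
                  self.blocks[handle] = (offset, length)
                  offset += length

      mm = MemoryManager(1024)
      a, b = mm.allocate(400), mm.allocate(400)
      mm.reclaim(a)
      mm.compact()                              # coalesces free space into one 624-byte gap
      print(mm.allocate(600) is not None)       # True only after compaction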
  • Patent number: 10459840
    Abstract: Exemplary embodiments provide for compressing, storing, retrieving, and decompressing paged code from mass storage devices. By evaluating the size of a compressed virtual page relative to the storage page (read unit) of the mass storage device into which it is to be stored, decisions can be made that facilitate later read-out and decompression of the compressed virtual pages. Based on that evaluation, a virtual page can be stored uncompressed, compressed but undivided, or compressed and subdivided into a plurality of parts (see the sketch below).
    Type: Grant
    Filed: October 12, 2011
    Date of Patent: October 29, 2019
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (publ)
    Inventors: Vijaya Kumar Kilari, Saugata Das Purkayastha
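    Sketch: The core decision is comparing a page's compressed size against the device's read unit. A minimal Python illustration of that policy follows; zlib stands in for the compressor, and the page sizes, thresholds, and the plan_storage name are assumptions, not values from the patent.
      import zlib

      STORAGE_PAGE = 2048     # read unit of the mass storage device (assumed)
      VIRTUAL_PAGE = 4096     # size of an uncompressed virtual page (assumed)

      def plan_storage(page: bytes):
          compressed = zlib.compress(page)
          if len(compressed) >= VIRTUAL_PAGE:
              # Compression did not help; keep the page as-is.
              return ("uncompressed", [page])
          if len(compressed) <= STORAGE_PAGE:
              # Fits one read unit: store compressed and undivided,
              # so it can later be fetched with a single read.
              return ("compressed-undivided", [compressed])
          # Larger than one read unit: subdivide into read-unit-sized parts
          # so each part can be read out and decompressed in sequence.
          parts = [compressed[i:i + STORAGE_PAGE]
                   for i in range(0, len(compressed), STORAGE_PAGE)]
          return ("compressed-subdivided", parts)

      kind, units = plan_storage(b"\x00" * VIRTUAL_PAGE)
      print(kind, len(units))     # a highly compressible page needs one read unit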
  • Patent number: 10452292
    Abstract: In scale-out storage, where multiple physical storage systems are provided collectively as a single virtual storage system, a logical path is established between the host computer and the virtual storage system so that input/output performance is not degraded. During allocation of a volume to the virtual storage system, if no logical control unit (logical CU) establishing a logical path to a volume has yet been allocated, a logical CU and a volume are created on a storage system having either a small number of allocated logical CUs or a small amount of used storage capacity. If, on the other hand, a storage system already has the logical CU allocated to it, the volume is created on that storage system (see the sketch below).
    Type: Grant
    Filed: March 7, 2014
    Date of Patent: October 22, 2019
    Assignee: Hitachi, Ltd.
    Inventors: Naoko Ikegaya, Nobuhiro Maki, Akira Yamamoto
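    Sketch: The placement rule reads as two cases: reuse a member system that already hosts the logical CU, otherwise create both the CU and the volume on the least-loaded member. A minimal Python illustration under assumed data structures; the class, field, and function names are hypothetical.
      class PhysicalStorage:
          def __init__(self, name, capacity_used=0):
              self.name = name
              self.capacity_used = capacity_used
              self.logical_cus = {}       # logical CU id -> list of volume names

      def allocate_volume(systems, cu_id, volume):
          # Case 1: some member already hosts this logical CU, so place the new
          # volume there and reuse the existing logical path.
          for s in systems:
              if cu_id in s.logical_cus:
                  s.logical_cus[cu_id].append(volume)
                  return s
          # Case 2: create both the logical CU and the volume on the member with
          # the fewest allocated CUs, breaking ties by used capacity.
          target = min(systems, key=lambda s: (len(s.logical_cus), s.capacity_used))
          target.logical_cus[cu_id] = [volume]
          return target

      arrays = [PhysicalStorage("array-A"), PhysicalStorage("array-B")]
      print(allocate_volume(arrays, cu_id=0, volume="vol1").name)   # array-A
      print(allocate_volume(arrays, cu_id=0, volume="vol2").name)   # array-A (CU reuse)
      print(allocate_volume(arrays, cu_id=1, volume="vol3").name)   # array-B (fewest CUs)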
  • Patent number: 10318360
    Abstract: A first feature (e.g., a chart or table) includes a reference to a dynamic pointer. Independently, the pointer is defined to point to a second feature (e.g., a query). The first feature is automatically updated to reflect the current value of the second feature. The reference to the pointer and the pointer definition are recorded in a central registry, and changes to the pointer or the second feature automatically cause the first feature to be updated to reflect the change. A mapping between features can be generated using the registry and can identify interrelationships to a developer. Further, changes in the registry can be tracked, so that a developer can view changes pertaining to a particular time period and/or feature of interest, e.g., one corresponding to an operational problem (see the sketch below).
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: June 11, 2019
    Assignee: SPLUNK INC.
    Inventor: Itay A. Neeman
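    Sketch: The mechanism is a central registry that records both pointer definitions and the features that reference them, and pushes updates when a pointer is rebound. A minimal Python sketch under assumed names; PointerRegistry, reference, and define are illustrative, not Splunk's API.
      class PointerRegistry:
          def __init__(self):
              self.definitions = {}     # pointer name -> target feature
              self.references = {}      # pointer name -> callbacks of referring features

          def reference(self, pointer, on_change):
              """Record that a feature (e.g. a chart) depends on a pointer."""
              self.references.setdefault(pointer, []).append(on_change)
              if pointer in self.definitions:
                  on_change(self.definitions[pointer])

          def define(self, pointer, target):
              """(Re)bind a pointer and push the new target to all referrers."""
              self.definitions[pointer] = target
              for on_change in self.references.get(pointer, []):
                  on_change(target)

      registry = PointerRegistry()
      chart = {"source": None}
      registry.reference("top_errors", lambda target: chart.update(source=target))
      registry.define("top_errors", "search error | stats count by host")
      print(chart["source"])      # the chart now reflects the pointer's current target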
  • Patent number: 10255104
    Abstract: Embodiments described herein include a system, a computer-readable medium, and a computer-implemented method for processing a system call (SYSCALL) request. The SYSCALL request from an invisible processing device is stored in a queueing mechanism that is accessible to a visible processing device, where the visible processing device is visible to an operating system and the invisible processing device is not. The SYSCALL request is processed using the visible processing device, and the invisible processing device is notified via a notification mechanism that the SYSCALL request was processed (see the sketch below).
    Type: Grant
    Filed: March 29, 2013
    Date of Patent: April 9, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Benjamin Thomas Sander, Michael Clair Houston, Keith Lowery, Newton Cheung
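    Sketch: The flow can be illustrated with an ordinary queue and an event standing in for the hardware queueing and notification mechanisms. The roles of the two devices (e.g. a GPU as the OS-invisible device and a CPU thread as the visible one) and all names below are assumptions for illustration.
      import queue, threading

      syscall_queue = queue.Queue()     # queueing mechanism visible to both devices

      def invisible_device():
          done = threading.Event()      # stand-in for the notification mechanism
          result = {}
          syscall_queue.put(("open", "/tmp/data", result, done))   # enqueue SYSCALL request
          done.wait()                   # wait until the visible device signals completion
          print("invisible device got:", result["value"])

      def visible_device():
          op, arg, result, done = syscall_queue.get()
          # Only the visible device is known to the OS, so it performs the request.
          result["value"] = f"handle-for-{arg}"    # stand-in for the real system call
          done.set()                    # notify the invisible device

      t = threading.Thread(target=invisible_device)
      t.start()
      visible_device()
      t.join()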
  • Patent number: 10235221
    Abstract: The disclosed embodiments relate to a system that facilitates developing applications in a component-based software development environment. The system provides an execution environment comprising instances of application components and a registry that maps names to those instances. Upon receiving a call to register a mapping between a name and an instance of an application component, the system updates the registry to include an entry for the mapping. Moreover, upon receiving a call to be notified about registry changes for a name, the system updates the registry so that a notification is sent to the caller whenever a registry change occurs for that name (see the sketch below).
    Type: Grant
    Filed: July 5, 2018
    Date of Patent: March 19, 2019
    Assignee: SPLUNK INC.
    Inventor: Itay A. Neeman
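    Sketch: A minimal Python model of the described registry: registering a name-to-instance mapping updates the registry, and callers who asked to be notified about that name receive a callback on each change. Class and method names are illustrative, not the patented API.
      class ComponentRegistry:
          def __init__(self):
              self._instances = {}      # name -> application component instance
              self._watchers = {}       # name -> callbacks awaiting registry changes

          def register(self, name, instance):
              self._instances[name] = instance
              for callback in self._watchers.get(name, []):
                  callback(name, instance)        # notify on registry change

          def on_change(self, name, callback):
              self._watchers.setdefault(name, []).append(callback)

          def lookup(self, name):
              return self._instances.get(name)

      registry = ComponentRegistry()
      registry.on_change("search-bar", lambda n, inst: print("rebound", n, "->", inst))
      registry.register("search-bar", "SearchBar#1")
      registry.register("search-bar", "SearchBar#2")   # the watcher fires again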
  • Patent number: 10235220
    Abstract: A system, method, and computer program product are provided for improving the resource utilization of multithreaded applications. Rather than requiring threads to block while waiting for data from a channel, or requiring context switching to minimize blocking, the techniques disclosed herein provide an event-driven approach in which kernels are launched only when needed to perform operations on channel data and then terminate in order to free resources. These operations are handled efficiently in hardware, yet are flexible enough to be implemented in all manner of programming models (see the sketch below).
    Type: Grant
    Filed: September 7, 2012
    Date of Patent: March 19, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Lee W. Howes, Benedict R. Gaster, Michael Clair Houston, Michael Mantor
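    Sketch: The idea of launching a kernel per arrival, instead of parking a blocked consumer thread, can be mimicked with a thread pool. The pool below stands in for the hardware dispatch the abstract refers to, and all names are illustrative.
      from concurrent.futures import ThreadPoolExecutor
      from collections import deque

      pool = ThreadPoolExecutor(max_workers=4)   # stand-in for hardware kernel dispatch

      class Channel:
          def __init__(self, kernel):
              self.kernel = kernel
              self.items = deque()

          def send(self, item):
              self.items.append(item)
              pool.submit(self._run)     # launch a kernel only because data arrived

          def _run(self):
              while True:
                  try:
                      item = self.items.popleft()
                  except IndexError:
                      return             # channel drained: the kernel terminates,
                                         # freeing its execution resources
                  self.kernel(item)

      chan = Channel(kernel=lambda x: print("processed", x))
      chan.send(1)
      chan.send(2)
      pool.shutdown(wait=True)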
  • Patent number: 10120577
    Abstract: The present application provides an improved approach to performance-tier de-duplication in a virtualization environment. A content cache is implemented on high-performance tiers of storage to maintain a working set for the user virtual machines accessing the system, and it associates fingerprints with the data stored in it. During write requests from the user virtual machines, fingerprints are calculated for the data to be written, but no de-duplication is performed during the write. During read requests, fingerprints corresponding to the data to be read are retrieved and matched against the fingerprints associated with the data in the content cache. Thus, while multiple pieces of data having the same fingerprint may be written to the lower-performance tiers of storage, only one piece of data with that fingerprint is stored in the content cache for fulfilling read requests (see the sketch below).
    Type: Grant
    Filed: May 13, 2016
    Date of Patent: November 6, 2018
    Inventors: Kannan Muthukkaruppan, Karthik Ranganathan
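    Sketch: The key point is that de-duplication happens only on the read path: writes store data (plus a fingerprint) to the lower tier unchanged, while the high-performance content cache is keyed by fingerprint, so identical blocks occupy it once. A minimal Python illustration with assumed names; SHA-1 stands in for whatever fingerprint the real system uses.
      import hashlib

      class ContentCache:
          def __init__(self):
              self.backing = {}     # block address -> (fingerprint, data): lower tier
              self.cache = {}       # fingerprint -> data: high-performance tier

          def write(self, addr, data):
              fp = hashlib.sha1(data).hexdigest()
              self.backing[addr] = (fp, data)    # no de-duplication on the write path

          def read(self, addr):
              fp, data = self.backing[addr]
              if fp not in self.cache:           # de-duplication happens here, on read
                  self.cache[fp] = data
              return self.cache[fp]

      store = ContentCache()
      store.write(0, b"golden image block")
      store.write(1, b"golden image block")      # same content at a different address
      store.read(0); store.read(1)
      print(len(store.cache))                    # 1: cached once despite two copies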
  • Patent number: 10061626
    Abstract: The disclosed embodiments relate to a system that facilitates developing applications in a component-based software development environment. This system provides an execution environment comprising instances of application components and a registry that maps names to instances of application components. Upon receiving a call to register a mapping between a name and an instance of an application component, the system updates the registry to include an entry for the mapping. Moreover, upon receiving a call to be notified about registry changes for a name, the system updates the registry to send a notification to a caller when a registry change occurs for the name.
    Type: Grant
    Filed: June 16, 2014
    Date of Patent: August 28, 2018
    Assignee: Splunk Inc.
    Inventor: Itay A. Neeman
  • Patent number: 8429675
    Abstract: Two or more virtual machines may be co-located on the same physical machine, and those virtual machines may communicate with each other. To establish efficient communication, memory mapping information for the respective virtual machines can be exchanged between them. An instance of a virtualized network interface can be established, and a direct communications channel can be mapped between the respective virtualized network interfaces. Data packet routing information can be updated, such that data packets transferred between two or more co-located virtual machines can be sent over the virtualized network interface communications channel (see the sketch below).
    Type: Grant
    Filed: June 13, 2008
    Date of Patent: April 23, 2013
    Assignee: NetApp, Inc.
    Inventors: Prashanth Radhakrishnan, Kiran Nenmeli Srinivasan
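    Sketch: The routing change can be pictured as each virtual NIC keeping a table of co-located peers: traffic to a mapped peer goes over a shared in-memory channel, while everything else still leaves through the physical path. The Python below is an illustration under assumed names, not NetApp's implementation.
      from collections import deque

      class VirtualNIC:
          def __init__(self, vm_name):
              self.vm_name = vm_name
              self.direct_channels = {}     # peer VM name -> shared in-memory queue
              self.physical_tx = deque()    # packets that must leave the host

          def map_colocated_peer(self, peer):
              """Exchange mapping information: both ends share one queue."""
              shared = deque()
              self.direct_channels[peer.vm_name] = shared
              peer.direct_channels[self.vm_name] = shared

          def send(self, dest_vm, packet):
              channel = self.direct_channels.get(dest_vm)
              if channel is not None:
                  channel.append(packet)    # co-located: bypass the physical network
              else:
                  self.physical_tx.append((dest_vm, packet))

      a, b = VirtualNIC("vm-a"), VirtualNIC("vm-b")
      a.map_colocated_peer(b)
      a.send("vm-b", b"hello")
      a.send("vm-c", b"hello")              # not co-located: goes out on the wire
      print(len(b.direct_channels["vm-a"]), len(a.physical_tx))   # 1 1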
  • Patent number: 8347312
    Abstract: The invention relates to a device comprising a processor, the processor comprising an execution unit for executing multiple threads, each thread comprising a sequence of instructions, and a plurality of sets of thread registers, each set arranged to store information relating to a respective one of the plurality of threads. The processor also comprises circuitry for establishing channels between thread register sets; the circuitry comprises a plurality of channel terminals and is operable to establish a channel between one thread register set and another via one channel terminal and another channel terminal. Each channel terminal comprises at least one buffer operable to buffer data transferred over a channel so established, and a channel terminal identifier register operable to store an identifier of the other channel terminal via which that channel is established (see the sketch below).
    Type: Grant
    Filed: July 6, 2007
    Date of Patent: January 1, 2013
    Assignee: XMOS Limited
    Inventors: Michael David May, Peter Hedinger, Alastair Dixon
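    Sketch: The structure is easiest to see as data: each channel terminal owns a small buffer and a register holding the identifier of its peer terminal, and establishing a channel means writing each terminal's identifier into the other. The Python model below uses invented names and does not reflect actual XMOS register layouts.
      from collections import deque

      class ChannelTerminal:
          def __init__(self, terminal_id, depth=4):
              self.terminal_id = terminal_id
              self.peer_id_register = None        # identifier of the other terminal
              self.buffer = deque(maxlen=depth)   # buffers data sent over the channel

      terminals = {i: ChannelTerminal(i) for i in range(8)}

      def establish_channel(a, b):
          terminals[a].peer_id_register = b
          terminals[b].peer_id_register = a

      def channel_out(src, word):
          """A thread outputs a word; it lands in the peer terminal's buffer."""
          dest = terminals[src].peer_id_register
          terminals[dest].buffer.append(word)

      establish_channel(0, 5)
      channel_out(0, 0xCAFE)
      print(hex(terminals[5].buffer.popleft()))   # 0xcafe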
  • Patent number: 8151282
    Abstract: The present invention is directed towards systems and methods for decomposing a complex problem or task into one or more constituent components that operate in parallel over a plurality of computing devices communicating over a network. A system according to the present invention comprises one or more pipelets. A given pipelet comprises a read data interface operative to receive incoming data, one or more functions for processing the incoming data, and a write data interface operative to make the processed data available as output to be further processed. The system further comprises a controller operative to receive a pipeline specification that identifies the one or more pipelets as belonging to a pipeline, generate a dependency map that identifies an order in which to execute the pipelets, and execute the pipelets according to the dependency map to generate a result (see the sketch below).
    Type: Grant
    Filed: April 17, 2006
    Date of Patent: April 3, 2012
    Assignee: Yahoo! Inc.
    Inventor: John M. Carnahan
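    Sketch: A pipelet is a read interface, one or more functions, and a write interface, and the controller turns the declared dependencies into an execution order. A minimal Python illustration using the standard-library topological sorter; the class names and the reads_from field are assumptions, not the patented interfaces.
      from graphlib import TopologicalSorter   # Python 3.9+

      class Pipelet:
          def __init__(self, name, func, reads_from=()):
              self.name, self.func, self.reads_from = name, func, tuple(reads_from)

      def run_pipeline(pipelets, source):
          by_name = {p.name: p for p in pipelets}
          # Dependency map: each pipelet depends on the pipelets whose output it reads.
          order = TopologicalSorter({p.name: set(p.reads_from) for p in pipelets})
          outputs = {}
          for name in order.static_order():
              p = by_name[name]
              data = source if not p.reads_from else [outputs[d] for d in p.reads_from]
              outputs[name] = p.func(data)     # write interface: make output available
          return outputs

      result = run_pipeline(
          [Pipelet("tokenize", lambda docs: [d.split() for d in docs]),
           Pipelet("count", lambda ins: sum(len(t) for t in ins[0]), reads_from=["tokenize"])],
          source=["a b c", "d e"],
      )
      print(result["count"])    # 5 tokens across the two documents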