Patents Examined by Christian P. Chace
  • Patent number: 10043137
    Abstract: A congestion control mechanism is described that is specifically designed to enhance the operation of TCP communication sessions for the delivery of web content. The congestion control mechanism dynamically adjusts the size of the congestion window in a manner that maximizes the speed of content delivery for web page requests in a cellular network. The dynamic window size adjustments, including the initial congestion control window size, are adaptive, changing as cellular network conditions change, and in a manner that is not possible with conventional TCP congestion control mechanisms that were not explicitly designed to accelerate content in cellular networks. The congestion control mechanism also learns from previous experience with a particular end user device address and network, and applies its learning to set its initial values and subsequent behavior to more optimal levels for the particular end user device and network.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: August 7, 2018
    Assignee: NUU:BIT, INC.
    Inventors: Jacob W. Jorgensen, Thomas Garett Kavanagh, Jagadishchandra Sarnaik, Akhil Shashidhar, Sreenivasa R. Tellakula, Jonathan Bosanac
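A rough illustration of the per-device learning idea in the entry above: the sketch below is not the patented mechanism, only a hypothetical cache that remembers a good starting congestion window per (client address, network) pair and nudges it as observed conditions change. The class name, the EWMA update, and the default window are assumptions for illustration.

```python
# Hypothetical per-client adaptive initial congestion window, in the spirit of
# the abstract above; not the patented algorithm itself.

class CwndLearner:
    """Remembers a good initial congestion window per (client, network) pair."""

    def __init__(self, default_cwnd=10, alpha=0.3):
        self.default_cwnd = default_cwnd   # segments; RFC 6928-style default
        self.alpha = alpha                 # EWMA smoothing factor (assumed)
        self.history = {}                  # (client_ip, network_id) -> learned cwnd

    def initial_cwnd(self, client_ip, network_id):
        # Start from what was learned last time for this device/network, if anything.
        return self.history.get((client_ip, network_id), self.default_cwnd)

    def record_session(self, client_ip, network_id, best_observed_cwnd):
        # Blend the newly observed "good" window into the remembered value so the
        # starting point tracks changing cellular conditions.
        key = (client_ip, network_id)
        old = self.history.get(key, self.default_cwnd)
        self.history[key] = (1 - self.alpha) * old + self.alpha * best_observed_cwnd


learner = CwndLearner()
learner.record_session("203.0.113.7", "carrier-lte", best_observed_cwnd=24)
print(learner.initial_cwnd("203.0.113.7", "carrier-lte"))  # ~14.2, nudged upward
```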
  • Patent number: 9953270
    Abstract: Optimization of machine intelligence utilizes a systemic process through a plurality of computer architecture manipulation techniques that take unique advantage of efficiencies therein to minimize clock cycles and memory usage. The present invention is an application of machine intelligence which overcomes speed and memory issues in learning ensembles of decision trees in a single-machine environment. Such an application of machine intelligence includes inlining relevant statements by integrating function code into a caller's code, ensuring a contiguous buffering arrangement for necessary information to be compiled, and defining and enforcing type constraints on programming interfaces that access and manipulate machine learning data sets.
    Type: Grant
    Filed: May 7, 2014
    Date of Patent: April 24, 2018
    Assignee: WISE IO, INC.
    Inventor: Damian Ryan Eads
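The abstract's point about contiguous buffering and typed access to tree data can be pictured with a hypothetical flat-array layout, where nodes live in typed, contiguous buffers instead of linked objects. The node encoding below is an assumption, not the WISE IO implementation.

```python
# Hypothetical contiguous-buffer decision tree: nodes live in flat NumPy arrays
# rather than linked Python objects, so traversal stays cache-friendly.
import numpy as np

# Parallel arrays, one slot per node (assumed encoding, for illustration only):
# feature[i] == -1 marks a leaf whose prediction is stored in value[i].
feature   = np.array([0,  -1, 1,  -1, -1], dtype=np.int32)
threshold = np.array([2.5, 0., 1.0, 0., 0.], dtype=np.float64)
left      = np.array([1,  -1, 3,  -1, -1], dtype=np.int32)
right     = np.array([2,  -1, 4,  -1, -1], dtype=np.int32)
value     = np.array([0., 0.3, 0., 0.7, 0.9], dtype=np.float64)

def predict(x):
    """Walk the flat tree for one sample x (a 1-D float array)."""
    i = 0
    while feature[i] != -1:
        i = left[i] if x[feature[i]] <= threshold[i] else right[i]
    return value[i]

print(predict(np.array([1.0, 5.0])))  # 0.3
print(predict(np.array([3.0, 2.0])))  # 0.9
```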
  • Patent number: 9934464
    Abstract: A data processing system processes data sets (such as low-resolution transaction data) into high-resolution data sets by mapping generic information into attribute-based specific information that may be processed to identify frequent sets therein. When association rules are generated from such frequent sets, the complexity and/or quantity of such rules may be managed by removing redundancies from the rules, such as by removing rules providing only trivial associations, removing rules having only a part group as the consequent, modifying rules to remove redundant antecedent items and/or filtering subsumed rules from the generated rule set that do not provide sufficient lift to meet an adjustable specialization lift threshold requirement.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: April 3, 2018
    Assignee: VERSATA DEVELOPMENT GROUP, INC.
    Inventor: David Franke
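One of the pruning steps named in the abstract, filtering subsumed rules that do not add enough lift over a more general rule, can be sketched roughly as follows. The rule representation and the threshold value are assumptions made only to show the shape of the filter.

```python
# Rough sketch of "specialization lift" pruning: a more specific rule is kept
# only if its lift improves on every more general rule by a threshold factor.
# Rule format (assumed): (antecedent_items, consequent, lift).

def prune_specializations(rules, min_specialization_lift=1.1):
    kept = []
    for ante, cons, lift in rules:
        subsumed = False
        for g_ante, g_cons, g_lift in rules:
            is_more_general = g_cons == cons and g_ante < ante  # proper subset
            if is_more_general and lift < g_lift * min_specialization_lift:
                subsumed = True   # the extra antecedent items add too little lift
                break
        if not subsumed:
            kept.append((ante, cons, lift))
    return kept

rules = [
    (frozenset({"wheel"}),          "tire", 2.0),
    (frozenset({"wheel", "rim"}),   "tire", 2.1),   # barely better: pruned
    (frozenset({"wheel", "sport"}), "tire", 3.5),   # real improvement: kept
]
print(prune_specializations(rules))
```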
  • Patent number: 9934364
    Abstract: The present disclosure provides methods for applying artificial neural networks to flow cytometry data generated from biological samples to diagnose and characterize cancer in a subject.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: April 3, 2018
    Assignee: ANIXA DIAGNOSTICS CORPORATION
    Inventors: Amit Kumar, John Roop, Anthony J. Campisi
  • Patent number: 9846836
    Abstract: An “Interestingness Modeler” uses deep neural networks to learn deep semantic models (DSM) of “interestingness.” The DSM, consisting of two branches of deep neural networks or their convolutional versions, identifies and predicts target documents that would interest users reading source documents. The learned model observes, identifies, and detects naturally occurring signals of interestingness in click transitions between source and target documents derived from web browser logs. Interestingness is modeled with deep neural networks that map source-target document pairs to feature vectors in a latent space, trained on document transitions in view of a “context” and optional “focus” of source and target documents. Network parameters are learned to minimize distances between source documents and their corresponding “interesting” targets in that space.
    Type: Grant
    Filed: June 13, 2014
    Date of Patent: December 19, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jianfeng Gao, Li Deng, Michael Gamon, Xiaodong He, Patrick Pantel
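A toy two-branch ("two-tower") version of the idea in the entry above: source and target texts are mapped into one latent space and trained so that clicked (source, target) pairs land close together. The hashed bag-of-words features, the tiny layer sizes, and the margin loss are illustrative assumptions, not the patented DSM architecture.

```python
# Toy two-tower model: source and target documents are embedded into a shared
# latent space, and clicked pairs are pushed together. Assumed sizes/loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 1000   # hashed bag-of-words size (assumed)
DIM = 32       # latent dimension (assumed)

source_tower = nn.Sequential(nn.Linear(VOCAB, 64), nn.ReLU(), nn.Linear(64, DIM))
target_tower = nn.Sequential(nn.Linear(VOCAB, 64), nn.ReLU(), nn.Linear(64, DIM))

def featurize(text):
    """Hashed bag-of-words vector; stands in for the context/focus features."""
    v = torch.zeros(VOCAB)
    for tok in text.lower().split():
        v[hash(tok) % VOCAB] += 1.0
    return v

opt = torch.optim.Adam(
    list(source_tower.parameters()) + list(target_tower.parameters()), lr=1e-3)
src = featurize("article about comets and orbits").unsqueeze(0)
pos = featurize("halley's comet explained").unsqueeze(0)     # clicked target
neg = featurize("quarterly earnings report").unsqueeze(0)    # random target

for _ in range(200):
    s, p, n = source_tower(src), target_tower(pos), target_tower(neg)
    # Clicked pair should be more similar than the random pair (margin loss).
    loss = F.relu(0.5 - F.cosine_similarity(s, p) + F.cosine_similarity(s, n)).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(float(F.cosine_similarity(source_tower(src), target_tower(pos))))
```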
  • Patent number: 9583130
    Abstract: According to the disclosure, a unique and novel archiving system that allows the digital shredding of archived data is disclosed. Embodiments of the archiving system include removable disk drives that store data, which may be erased such that the data is considered destroyed but that allows the removable disk drive to be reused. The archiving system can determine which data should be erased. Then, the data is digitally shredded such that the removed data cannot be retrieved or deciphered. In alternative embodiments, a protection may be placed on the data required to be kept because the data is associated with a legal suit. This “legal hold” prevents the data from being digitally shredded. As such, the archiving system can provide a system that can dispose of data on a file-by-file or granular level without physically destroying the media upon which the data is stored.
    Type: Grant
    Filed: August 27, 2008
    Date of Patent: February 28, 2017
    Assignee: Imation Corp.
    Inventors: Matthew D. Bondurant, S. Christopher Alaimo
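The file-level disposal policy the abstract describes, shred unless a legal hold applies, can be illustrated with a hypothetical overwrite-then-delete routine. A production shredder would need much stronger guarantees (and awareness of flash wear leveling); this is only a sketch of the policy.

```python
# Hypothetical file-level "digital shred": skip anything under legal hold,
# otherwise overwrite the contents before unlinking so the bytes are not
# recoverable from the reused removable disk. Illustrative only.
import os

legal_holds = set()          # paths pinned by litigation hold (assumed structure)

def place_legal_hold(path):
    legal_holds.add(os.path.abspath(path))

def shred(path, passes=3):
    path = os.path.abspath(path)
    if path in legal_holds:
        return False         # data must be retained; refuse to destroy it
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # overwrite with random bytes
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)
    return True

# Example:
#   with open("old_archive.dat", "wb") as f: f.write(b"retired data")
#   shred("old_archive.dat")   # -> True, file overwritten and removed
```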
  • Patent number: 9575761
    Abstract: A semiconductor device includes a memory for storing a plurality of instructions therein, an instruction queue which temporarily stores the instructions fetched from the memory therein, a central processing unit which executes the instruction supplied from the instruction queue, an instruction cache which stores therein the instructions executed in the past by the central processing unit, and a control circuit which controls fetching of each instruction. When the central processing unit executes a branch instruction, and an instruction of a branch destination is present in the instruction cache and an instruction following the instruction of the branch destination is stored in the instruction queue, the control circuit causes the instruction queue to fetch the instruction of the branch destination from the instruction cache and causes the instruction queue not to fetch the instruction following the instruction of the branch destination.
    Type: Grant
    Filed: March 24, 2014
    Date of Patent: February 21, 2017
    Assignee: Renesas Electronics Corporation
    Inventor: Isao Kotera
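The fetch decision in the abstract reduces to a small piece of control logic. The sketch below models it with plain data structures; it is a behavioral illustration (not RTL) of when the queue pulls the branch target from the instruction cache and deliberately skips refetching its successor.

```python
# Behavioral sketch of the fetch rule above: on a branch, fetch the destination
# instruction from the instruction cache only when it is cached AND the
# instruction after it is already sitting in the queue, in which case the
# follow-on instruction is not refetched. Addresses/widths are assumptions.

def fetch_on_branch(dest_addr, icache, iqueue):
    """icache: set of cached addresses; iqueue: list of queued addresses."""
    dest_cached = dest_addr in icache
    next_queued = (dest_addr + 4) in iqueue          # assumes 4-byte instructions
    if dest_cached and next_queued:
        iqueue.insert(0, dest_addr)                  # take destination from cache
        return "fetched destination from I-cache; kept queued successor"
    # Otherwise fall back to a normal fetch of the destination stream from memory.
    iqueue[:] = [dest_addr, dest_addr + 4]
    return "refetched destination stream from memory"

queue = [0x108, 0x10C]            # successor of the branch target already queued
print(fetch_on_branch(0x104, icache={0x104}, iqueue=queue))
print([hex(a) for a in queue])    # ['0x104', '0x108', '0x10c']
```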
  • Patent number: 9569117
    Abstract: According to one embodiment, a controller executes a first process such that writing is performed in an order of page numbers in the memory chip. The first process includes a second process to be executed in an order of group units. The second process includes a process of writing data to the lower pages of the memory chips belonging to the banks in one group, and subsequently writing data to the upper pages of the memory chips belonging to the banks in the group.
    Type: Grant
    Filed: July 31, 2014
    Date of Patent: February 14, 2017
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Yoshihisa Kojima, Katsuhiko Ueki
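The write ordering in the abstract, lower pages of every bank in a group first, then the upper pages of those banks, before moving to the next group, can be expressed as a small generator. The group/bank layout below is an assumption chosen only to make the ordering visible.

```python
# Illustrative write-order generator: within each group of banks, program the
# lower page of every bank first, then the upper page of every bank, before
# moving on to the next group. The bank/group layout is assumed.

def write_order(groups):
    """groups: list of lists of bank IDs, in group order."""
    for group in groups:
        for bank in group:
            yield (bank, "lower")
        for bank in group:
            yield (bank, "upper")

for step in write_order([[0, 1], [2, 3]]):
    print(step)
# (0, 'lower'), (1, 'lower'), (0, 'upper'), (1, 'upper'),
# (2, 'lower'), (3, 'lower'), (2, 'upper'), (3, 'upper')
```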
  • Patent number: 9569369
    Abstract: Techniques are provided for performing OID-to-VMA translations during runtime. Vector registers are used to implement a “software TLB” to perform OID-to-VMA translations. Runtime dereferencing is performed using one or more vector registers to compare each OID that needs to be dereferenced against a set of cached OIDs. When a cached OID matches the OID being dereferenced, the VMA of the cached OID is retrieved from cache. Buffer cache items may be pinned during the period in which the software TLB stores entries for the items. The cache of OID translation information may be single or multi-leveled, and may be partially or completely stored in registers within a processor. When stored in registers, the translation information may be spilled out of the register, and reloaded into the register, as the register is needed for other purposes.
    Type: Grant
    Filed: October 27, 2011
    Date of Patent: February 14, 2017
    Assignee: Oracle International Corporation
    Inventors: Eric Sedlar, Aman Naimat
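The "software TLB" lookup described above, compare the OID being dereferenced against a small set of cached OIDs in parallel and pick out the matching VMA, can be mimicked with NumPy's vectorized comparison. Real vector registers, pinning, and spill/reload are beyond this sketch; NumPy merely plays the role of the register here.

```python
# Rough stand-in for the vector-register compare: cached OIDs are compared
# against the OID being dereferenced in one vectorized operation, and a hit
# returns the cached VMA; a miss falls back to the full translation path.
import numpy as np

cached_oids = np.array([0x1111, 0x2222, 0x3333, 0x4444], dtype=np.uint64)
cached_vmas = np.array([0x7F00, 0x7F40, 0x7F80, 0x7FC0], dtype=np.uint64)

def dereference(oid, slow_translate):
    hits = cached_oids == np.uint64(oid)   # one parallel compare across the "register"
    if hits.any():
        return int(cached_vmas[hits][0])   # software-TLB hit
    return slow_translate(oid)             # miss: full OID-to-VMA mapping path

print(hex(dereference(0x3333, slow_translate=lambda o: -1)))   # 0x7f80
print(dereference(0x9999, slow_translate=lambda o: -1))        # -1 (miss)
```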
  • Patent number: 9569360
    Abstract: Technology is provided for partitioning a shared unified cache in a multi-processor computer system. The technology can receive a request to allocate a portion of a shared unified cache memory for storing only executable instructions, partition the cache memory into multiple partitions, and allocate one of the partitions for storing only executable instructions. The technology can further determine the size of the portion of the cache memory to be allocated for storing only executable instructions as a function of the size of the multi-processor's L1 instruction cache and the number of cores in the multi-processor.
    Type: Grant
    Filed: September 27, 2013
    Date of Patent: February 14, 2017
    Assignee: Facebook, Inc.
    Inventors: Narsing Vijayrao, Keith Adams
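The abstract says the instructions-only partition is sized as a function of the L1 instruction cache size and the core count without giving the function. The sketch below shows one plausible shape, roughly one L1 instruction cache's worth of shared cache per core with a cap, purely as an assumption for illustration.

```python
# Hypothetical sizing rule for the instructions-only partition of a shared
# cache: proportional to (L1 I-cache size x core count), capped at half of the
# shared cache. The actual function in the patent is not specified here.

def instruction_partition_bytes(l1_icache_bytes, num_cores, shared_cache_bytes):
    proposed = l1_icache_bytes * num_cores         # room to back every core's L1I
    return min(proposed, shared_cache_bytes // 2)  # never starve the data side

size = instruction_partition_bytes(
    l1_icache_bytes=32 * 1024,       # 32 KiB per-core L1I (example value)
    num_cores=16,
    shared_cache_bytes=20 * 1024 * 1024,
)
print(size // 1024, "KiB reserved for instructions")   # 512 KiB
```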
  • Patent number: 9569115
    Abstract: An application located in one or more first memory regions is executed. The application has a separate modified portion, which is located in one or more second memory regions. A request is obtained to access one of a first memory region or a second memory region, the request including an address of a first type. Based on obtaining the request, the address is translated to another address. The other address is of a second type and indicates the first memory region or the second memory region. The translating is based on an attribute associated with the address, in which the attribute is used to select information from a plurality of information concurrently available for selection. The plurality of information provide multiple addresses of the second type, one of which is the other address. The other address is used to access the first memory region or the second memory region.
    Type: Grant
    Filed: March 31, 2014
    Date of Patent: February 14, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Michael K. Gschwind
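The selection step in the abstract, an attribute attached to the incoming address picks one of several translations that are available at the same time, can be sketched with a pair of lookup tables. The attribute values, region names, and base addresses are invented for illustration.

```python
# Sketch of attribute-selected translation: the same first-type address can map
# either into the region holding the unmodified application or into the region
# holding its separately located modified portion, with an attribute on the
# request choosing which concurrently available translation applies. Assumed values.

translations = {
    "original": {0x1000: 0x40_0000},   # first memory region (unmodified application)
    "modified": {0x1000: 0x80_0000},   # second region holding the modified portion
}

def translate(addr, attribute):
    selected = translations[attribute]   # attribute selects among the
    return selected[addr]                # concurrently available translations

print(hex(translate(0x1000, "original")))   # 0x400000
print(hex(translate(0x1000, "modified")))   # 0x800000
```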
  • Patent number: 9552305
    Abstract: A method begins by a processing module identifying a first storage space zone that includes a plurality of deleted encoded data slices and a plurality of active encoded data slices. The method continues with the processing module determining to compact the first storage space zone based on a function of the plurality of deleted encoded data slices and the plurality of active encoded data slices. The method continues with the processing module retrieving the plurality of active encoded data slices from the first storage space zone, identifying a second storage space zone, storing the plurality of active encoded data slices in the second storage space zone, and erasing the plurality of deleted encoded data slices and the plurality of active encoded data slices from the first storage space zone when the first storage space zone is to be compacted.
    Type: Grant
    Filed: October 11, 2011
    Date of Patent: January 24, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ilya Volvovski, Jason K. Resch, Andrew Baptist, Greg Dhuse
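The compaction decision and the move/erase sequence in the abstract can be followed step by step in the sketch below. The threshold on the deleted-to-total ratio is an assumption standing in for the unspecified "function of" the two slice sets, and the zone structures are toys.

```python
# Illustrative zone compaction: decide based on how much of the zone holds
# deleted slices (assumed threshold), copy the active slices to a second zone,
# then erase everything in the first zone so its space can be reclaimed.

def maybe_compact(zone, zones, deleted_ratio_threshold=0.5):
    total = len(zone["active"]) + len(zone["deleted"])
    if total == 0 or len(zone["deleted"]) / total < deleted_ratio_threshold:
        return None                               # not worth compacting yet
    target = min((z for z in zones if z is not zone), key=lambda z: len(z["active"]))
    target["active"].extend(zone["active"])       # relocate active slices
    zone["active"].clear()                        # then erase both active and
    zone["deleted"].clear()                       # deleted slices from zone 1
    return target

zone1 = {"active": ["s3", "s7"], "deleted": ["s1", "s2", "s4"]}
zone2 = {"active": ["s9"], "deleted": []}
maybe_compact(zone1, [zone1, zone2])
print(zone1, zone2)   # zone1 emptied; zone2 now holds s9, s3, s7
```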
  • Patent number: 9542352
    Abstract: A memory circuit system and method are provided. An interface circuit is capable of communication with a plurality of memory circuits and a system. In use, the interface circuit is operable to interface the memory circuits and the system for reducing command scheduling constraints of the memory circuits.
    Type: Grant
    Filed: February 8, 2007
    Date of Patent: January 10, 2017
    Assignee: Google Inc.
    Inventors: Suresh Natarajan Rajan, Keith R. Schakel, Michael John Sebastian Smith, David T. Wang, Frederick Daniel Weber
  • Patent number: 9542353
    Abstract: A memory circuit system and method are provided. An interface circuit is capable of communication with a plurality of memory circuits and a system. In use, the interface circuit is operable to interface the memory circuits and the system for reducing command scheduling constraints of the memory circuits.
    Type: Grant
    Filed: October 30, 2007
    Date of Patent: January 10, 2017
    Assignee: Google Inc.
    Inventors: Suresh Natarajan Rajan, Keith R. Schakel, Michael John Sebastian Smith, David T. Wang, Frederick Daniel Weber
  • Patent number: 9529719
    Abstract: Apparatus and method embodiments for dynamically allocating cache space in a multi-threaded execution environment are disclosed. In some embodiments, a processor includes a cache shared by each of a plurality of processor cores and/or each of a plurality of threads executing on the processor. The processor further includes a cache allocation circuit configured to dynamically allocate space in the cache provided to each of the plurality of processor cores based on their respective usage patterns. The cache allocation unit may track cache usage by each of the processor cores/threads using subsets of usage bits and counters configured to update states of the usage bits. The cache allocation circuit may track the usage of cache space by the processor cores/threads and may allocate more space to those that exhibit more usage of the cache.
    Type: Grant
    Filed: August 5, 2012
    Date of Patent: December 27, 2016
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventor: William L. Walker
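The proportional policy described above, give more of the shared cache to cores or threads that show more usage, can be mocked up with simple counters. The way-count arithmetic below is an assumption for illustration, not the patented allocation circuit.

```python
# Toy proportional allocator: each core's usage counter (a stand-in for the
# usage bits tracked in hardware) earns it a share of the cache ways, with at
# least one way per core. Illustrates the policy only, not the circuit.

def allocate_ways(usage_counts, total_ways=16):
    total = sum(usage_counts) or 1
    ways = [max(1, round(total_ways * u / total)) for u in usage_counts]
    # Trim any rounding overshoot from the heaviest users first.
    while sum(ways) > total_ways:
        ways[ways.index(max(ways))] -= 1
    return ways

print(allocate_ways([900, 300, 50, 50]))   # [10, 4, 1, 1]: heavier users get more ways
```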
  • Patent number: 9524219
    Abstract: Durable atomic transactions for non-volatile media are described. A processor includes an interface to a non-volatile storage medium and a functional unit to perform instructions associated with an atomic transaction. The instructions are to update data at a set of addresses in the non-volatile storage medium atomically. The functional unit is operable to perform a first instruction to create the atomic transaction that declares a size of the data to be updated atomically. The functional unit is also operable to perform a second instruction to start execution of the atomic transaction. The functional unit is further operable to perform a third instruction to commit the atomic transaction to the set of addresses in the non-volatile storage medium, wherein the updated data is not visible to other functional units of the processing device until the atomic transaction is complete.
    Type: Grant
    Filed: September 27, 2013
    Date of Patent: December 20, 2016
    Assignee: Intel Corporation
    Inventors: Robert Bahnsen, Sridharan Sakthivelu, Vikram A. Saletore, Krishnaswamy Viswanathan, Matthew E. Tolentino, Kanivenahalli Govindaraju, Vincent J. Zimmer
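The three-instruction flow in the abstract (create a transaction declaring its size, start it, then commit) maps naturally onto a staging-log style API. The sketch below is a user-space analogy with a dict standing in for non-volatile addresses; it is not the Intel ISA extension.

```python
# User-space analogy of the create/start/commit flow: updates are staged in a
# log and only become visible at commit, so a crash mid-transaction leaves the
# "non-volatile" state untouched. An analogy, not the patented instructions.

class AtomicTxn:
    def __init__(self, store, declared_size):
        self.store = store                 # dict standing in for NV addresses
        self.declared_size = declared_size # size declared at create time
        self.log = {}                      # staged updates, invisible until commit
        self.started = False

    def start(self):
        self.started = True

    def write(self, addr, value):
        assert self.started and len(self.log) < self.declared_size
        self.log[addr] = value             # staged only; other readers can't see it

    def commit(self):
        self.store.update(self.log)        # all updates become visible together
        self.log.clear()

nv = {0x10: "old-a", 0x20: "old-b"}
txn = AtomicTxn(nv, declared_size=2)       # "create" with declared size
txn.start()                                # "start"
txn.write(0x10, "new-a")
txn.write(0x20, "new-b")
print(nv[0x10])                            # still "old-a": not yet visible
txn.commit()                               # "commit": both updates appear at once
print(nv[0x10], nv[0x20])                  # new-a new-b
```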
  • Patent number: 9514141
    Abstract: A memory device and method for content virtualization are disclosed. In one embodiment, a plurality of directories are created in the memory of the memory device, wherein each of the plurality of directories points to a same storage location of the digital content. In another embodiment, a first header for the digital content is stored in each of the different directories, wherein the first header comprises information about where to find the digital content in the memory. In yet another embodiment, the memory device comprises circuitry that receives an identification of a host device in communication with the memory device and reorganizes a directory structure of the memory in accordance with the identification of the host device, wherein the reorganization results in the digital content appearing to be located in a directory expected by the host device.
    Type: Grant
    Filed: December 28, 2007
    Date of Patent: December 6, 2016
    Assignee: SanDisk Technologies LLC
    Inventors: Fabrice E. Jogand-Coulomb, Robert Chin-Tse Chang
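The first embodiment above, several directory entries that all resolve to one stored copy of the content through a small header, can be modeled with plain dictionaries. The directory names, header fields, and block IDs are assumptions made only to show the shape of the structure.

```python
# Toy model of content virtualization: many directory entries share one stored
# copy of the content, each entry carrying a header that says where the real
# bytes live. Names and fields are invented for illustration.

storage = {"blk-000042": b"one physical copy of the song"}

def make_entry(block_id):
    return {"header": {"location": block_id}}     # header points at the content

directories = {
    "/MUSIC":          {"track01.mp3": make_entry("blk-000042")},
    "/PLAYER_A/AUDIO": {"track01.mp3": make_entry("blk-000042")},
    "/PLAYER_B/SONGS": {"track01.mp3": make_entry("blk-000042")},
}

def read(directory, name):
    entry = directories[directory][name]
    return storage[entry["header"]["location"]]   # every path resolves to one copy

print(read("/MUSIC", "track01.mp3") == read("/PLAYER_B/SONGS", "track01.mp3"))  # True
```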
  • Patent number: 9514142
    Abstract: A memory device and method for content virtualization are disclosed. In one embodiment, a plurality of directories are created in the memory of the memory device, wherein each of the plurality of directories points to a same storage location of the digital content. In another embodiment, a first header for the digital content is stored in each of the different directories, wherein the first header comprises information about where to find the digital content in the memory. In yet another embodiment, the memory device comprises circuitry that receives an identification of a host device in communication with the memory device and reorganizes a directory structure of the memory in accordance with the identification of the host device, wherein the reorganization results in the digital content appearing to be located in a directory expected by the host device.
    Type: Grant
    Filed: May 11, 2010
    Date of Patent: December 6, 2016
    Assignee: SanDisk Technologies LLC
    Inventors: Fabrice E. Jogand-Coulomb, Robert Chin-Tse Chang
  • Patent number: 9489310
    Abstract: A system, method, and computer-readable medium that facilitate efficient use of cache memory in a massively parallel processing system are provided. A residency time of a data block to be stored in cache memory or a disk drive is estimated. A metric is calculated for the data block as a function of the residency time. The metric may further be calculated as a function of the data block size. One or more data blocks stored in cache memory are evaluated by comparing a respective metric of the one or more data blocks with the metric of the data block to be stored. A determination is then made to either store the data block on the disk drive or flush the one or more data blocks from the cache memory and store the data block in the cache memory. In this manner, the cache memory may be more efficiently utilized by storing smaller data blocks with lesser residency times and flushing larger data blocks with significant residency times from the cache memory.
    Type: Grant
    Filed: November 8, 2013
    Date of Patent: November 8, 2016
    Assignee: Teradata US, Inc.
    Inventors: Douglas Brown, John Mark Morris
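The decision in the abstract, score each block by a metric built from its estimated residency time and size, then either bypass the cache or evict lower-scoring residents, can be sketched as below. The metric itself is an assumption, since the abstract only says it is "a function of" those quantities.

```python
# Illustrative admission/eviction decision: smaller blocks with shorter expected
# residency score better (assumed metric = residency_time * size; lower wins).
# The incoming block is cached only if it beats some current resident.

def metric(block):
    return block["residency_s"] * block["size_bytes"]   # assumed form of the metric

def admit(new_block, cached_blocks):
    worst = max(cached_blocks, key=metric, default=None)
    if worst is not None and metric(new_block) < metric(worst):
        cached_blocks.remove(worst)          # flush the large, long-resident block
        cached_blocks.append(new_block)      # and cache the cheaper one instead
        return True
    return False                             # otherwise send the new block to disk

cache = [{"id": "big", "residency_s": 600, "size_bytes": 8_000_000}]
print(admit({"id": "small", "residency_s": 30, "size_bytes": 64_000}, cache))  # True
print([b["id"] for b in cache])                                                # ['small']
```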
  • Patent number: 9448745
    Abstract: A method of writing host data to a storage device including a central processing unit (CPU), a self-organized fast release buffer (FRB), and a non-volatile memory, the storage device being in communication with a host, the method including receiving a command from the CPU to write the host data to a location in the non-volatile memory, the host data being associated with a first plurality of codewords (CWs), allocating space in a buffer memory of the FRB for storage of the first CWs, storing the first CWs into the allocated space in the buffer memory, extracting data from the stored first CWs, organizing the extracted data and the host data into a second plurality of CWs, transferring the second CWs to a plurality of physical addresses in the non-volatile memory, and sending the plurality of physical addresses to the CPU to update a logical-to-physical table.
    Type: Grant
    Filed: March 13, 2014
    Date of Patent: September 20, 2016
    Assignee: NXGN Data, Inc.
    Inventors: Joao Alcantara, Vladimir Alves
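The sequence of steps in the abstract can be followed end to end with ordinary lists standing in for the fast release buffer and the flash array. Every structure, codeword size, and helper name below is an assumption chosen only to keep the flow readable.

```python
# Walk-through of the write flow above with toy data structures: buffer the
# incoming codewords, pull the payload out, repack it into new codewords, write
# those to flash addresses, and report the addresses back so the CPU can update
# its logical-to-physical table. All sizes and formats are assumed.

CW_PAYLOAD = 4   # bytes of payload per codeword (assumed)

def to_codewords(data):
    return [data[i:i + CW_PAYLOAD] for i in range(0, len(data), CW_PAYLOAD)]

def write_host_data(host_data, flash, l2p, logical_page):
    buffer = to_codewords(host_data)                 # steps 1-2: allocate + store first CWs
    payload = b"".join(buffer)                       # step 3: extract data from stored CWs
    second_cws = to_codewords(payload)               # step 4: organize into second CWs
    physical_addrs = []
    for cw in second_cws:                            # step 5: transfer CWs to flash
        physical_addrs.append(len(flash))
        flash.append(cw)
    l2p[logical_page] = physical_addrs               # step 6: CPU updates the L2P table
    return physical_addrs

flash, l2p = [], {}
print(write_host_data(b"hello, nvm world", flash, l2p, logical_page=7))
print(l2p)   # {7: [0, 1, 2, 3]}
```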