Patent Applications Published on December 6, 2018
-
Publication number: 20180349282
Abstract: In an embedding caching system, embeddings generated from previous problems are re-used to improve performance on future problems. A data structure stores problems and their corresponding embeddings. When computing future embeddings, this data structure can be queried to determine whether an embedding has already been computed for a problem with the same structure. If it has, the embedding can be retrieved from the data structure, saving the time and computational expense of generating a new embedding. In one variation, the query is not based on exact matches. If a new problem is similar in structure to previous problems, those embeddings may be used to accelerate the generation of an embedding for the new problem, even if they cannot be used directly to embed the new problem.
Type: Application
Filed: July 20, 2018
Publication date: December 6, 2018
Inventors: James W. Brahm, David A. B. Hyde, Peter McMahon
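The exact-match variant of this caching scheme can be sketched in a few lines of Python. `canonical_form` and `compute_embedding` are hypothetical stand-ins for the patent's structural matching and embedding steps, not names from the application:

```python
# Minimal sketch of an embedding cache keyed by problem structure.
def canonical_form(problem):
    # Reduce a problem (here, a dict) to a hashable structural signature.
    return tuple(sorted(problem.items()))

def compute_embedding(problem):
    # Placeholder for an expensive embedding computation.
    return [len(str(v)) for _, v in sorted(problem.items())]

class EmbeddingCache:
    def __init__(self):
        self._store = {}

    def embed(self, problem):
        key = canonical_form(problem)
        if key in self._store:            # same structure seen before: reuse
            return self._store[key]
        emb = compute_embedding(problem)  # otherwise compute and remember
        self._store[key] = emb
        return emb
```

The near-match variation described in the abstract would replace the exact dictionary lookup with a similarity search over stored signatures, using the closest prior embedding as a starting point rather than returning it directly.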
-
Publication number: 20180349283
Abstract: A system is described for playing embedded video on the Web inside the virtual desktop. A video element, such as an HTML5 video element, in a webpage accessed through a browser in the virtual desktop can be detected and video content for the video element can be intercepted before it is decoded in the virtual desktop. The encoded video data can be transmitted to the client device. On the client device, a counterpart video rendering application can receive the transmitted video data, decode it, and render it in a window that is overlaid onto a corresponding area of the virtual desktop graphical user interface (GUI) in a client application. Headless video composition can be implemented for rendering the video on the client, giving the illusion of the video playing inside the virtual desktop, while it is actually playing on the client itself.
Type: Application
Filed: July 20, 2017
Publication date: December 6, 2018
Inventors: Lavesh Bhatia, Shixi Qiu
-
Publication number: 20180349284
Abstract: The lookup of accesses (including snoops) to cache tag ways is serialized to perform one (or fewer than all) tag way accesses per clock (or even slower). Thus, for an N-way set associative cache, instead of performing lookup/comparison on the N tag ways in parallel, the lookups are performed one tag way at a time. Way prediction is utilized to select an order to look in the N ways. This can include selecting which tag way will be looked in first. This helps to reduce the average number of cycles and lookups required.
Type: Application
Filed: May 30, 2017
Publication date: December 6, 2018
Inventors: Patrick P. LAI, Robert Allen SHEARER
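The serialized lookup can be modeled simply: probe one way per "cycle" in the order chosen by the predictor, and count how many probes a hit costs. Function and variable names here are illustrative, not from the patent:

```python
# Sketch of a serialized N-way tag lookup: ways are probed one per cycle
# in an order chosen by a (hypothetical) way predictor, rather than all
# N ways being compared in parallel.
def serialized_lookup(tag_ways, set_index, tag, predicted_order):
    """Return (way, cycles_used) on a hit, or (None, N) on a miss."""
    for cycle, way in enumerate(predicted_order, start=1):
        if tag_ways[way][set_index] == tag:
            return way, cycle          # hit after `cycle` sequential probes
    return None, len(predicted_order)  # miss: every way was probed
```

A good predictor puts the likely-matching way first, so most hits complete in one probe; a poor ordering degrades toward N probes, which is the cost the way prediction is meant to avoid.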
-
Publication number: 20180349285
Abstract: Apparatus and method for managing namespaces in a Non-Volatile Memory Express (NVMe) controller environment. A non-volatile memory (NVM) is arranged to store map units (MUs) as addressable data blocks in one or more namespaces. A forward map has a sequence of map unit address (MUA) entries that correlate each of the MUs with the physical locations in the NVM. The MUA entries are grouped into immediately adjacent, contiguous ranges for each of the namespaces. A base MUA array identifies the address, within the forward map, of the beginning MUA entry for each namespace. A new namespace may be added by appending a new range of the MUA entries to the forward map immediately following the last MUA entry, and by adding a new entry to the base MUA array to identify the address, within the forward map, of the beginning MUA entry for the new namespace.
Type: Application
Filed: May 31, 2017
Publication date: December 6, 2018
Inventors: Mark Ish, Steven S. Williams, Jeffrey Munsil
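The bookkeeping described above, a flat forward map of MUA entries plus a base-MUA array marking where each namespace's contiguous range begins, can be sketched as follows. Class and method names are illustrative, not from the patent:

```python
# Sketch of the forward map / base MUA array structure for namespaces.
class ForwardMap:
    def __init__(self):
        self.entries = []    # one MUA entry (physical location) per map unit
        self.base_mua = []   # base_mua[ns] = index of namespace ns's first entry

    def add_namespace(self, num_map_units):
        # Append the new range immediately after the last existing MUA entry,
        # and record its starting address in the base MUA array.
        self.base_mua.append(len(self.entries))
        self.entries.extend([None] * num_map_units)
        return len(self.base_mua) - 1    # id of the new namespace

    def mua_index(self, ns, mu):
        # Translate (namespace, map unit) to a forward-map position.
        return self.base_mua[ns] + mu
```

Because each namespace's range is contiguous, a lookup is just an offset from the namespace's base entry, which is what makes the base MUA array sufficient to locate any map unit.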
-
Publication number: 20180349286
Abstract: Techniques for managing page tables for an accelerated processing device are provided. The page tables for the accelerated processing device include a primary page table and secondary page tables. The page size selected for any particular secondary page table is dependent on characteristics of the memory allocations for which translations are stored in the secondary page table. Any particular memory allocation is associated with a particular "initial" page size. Translations for multiple allocations may be placed into a single secondary page table, and a particular page size is chosen for all such translations. The page size is the smallest of the natural page sizes for the allocations that are not using a translate further technique. The translate further technique is a technique wherein secondary page table entries do not themselves provide translations but instead point to an additional page table level referred to as the translate further page table level.
Type: Application
Filed: May 30, 2017
Publication date: December 6, 2018
Applicant: ATI Technologies ULC
Inventor: Dhirendra Partap Singh Rana
-
Publication number: 20180349287
Abstract: A persistent storage device, such as a solid state drive, repurposes translation table memory, such as RAM integrated in an SSD controller that stores an FTL table, to pre-fetch and cache data associated with selected logical addresses, such as LBAs that are historically referenced at a higher rate. Repurposing FTL table memory to serve as a cache for frequently used persistent information improves storage device response time.
Type: Application
Filed: June 2, 2017
Publication date: December 6, 2018
Applicant: Dell Products L.P.
Inventor: Lip Vui Kan
-
Publication number: 20180349288
Abstract: Embodiments of apparatuses, methods, and systems for input/output translation lookaside buffer (IOTLB) prefetching are described. In an embodiment, an apparatus includes a bridge, an input/output memory management unit (IOMMU), and an IOTLB prefetch unit. The bridge is between an input/output (I/O) side of a system and a memory side of the system. The I/O side is to include an interconnect on which a zero-length transaction is to be initiated by an I/O device. The zero-length transaction is to include an I/O-side memory address. The IOMMU includes address translation hardware and an IOTLB. The address translation hardware is to generate a translation of the I/O-side memory address to a memory-side memory address. The translation is to be stored in the IOTLB. The IOTLB prefetch unit includes prefetch control logic to cause the apparatus to, in response to determining that the memory-side address is inaccessible, emulate completion of the zero-length transaction.
Type: Application
Filed: May 30, 2017
Publication date: December 6, 2018
Inventors: Rupin H. Vakharwala, Eric A. Gouldey, Camron B. Rust, Brett Ireland, Rajesh M. Sankaran
-
Publication number: 20180349289
Abstract: System and method for managing migration of global variables on a processing system during live program updates, including creating a shared data segment in a physical memory of the processing system, binding a logical address space of a first global variable data segment for a first version of a program to a physical address of the shared data segment, and binding a logical address space for a second global variable data segment for an update version of the program to the physical address of the shared data segment. The first global variable data segment and the second global variable data segment exist concurrently and each map to common global variables stored in the shared data segment.
Type: Application
Filed: June 2, 2017
Publication date: December 6, 2018
Inventors: Kai-Ting Amy Wang, Peng Wu
-
Publication number: 20180349290
Abstract: An electronic device includes a first memory subsystem, a second memory subsystem and a direct memory access controller. In response to a first type of request from a processor, the direct memory access controller requests data from the first memory subsystem and provides the data to the second memory subsystem. In response to a second type of request from a processor, the direct memory access controller requests an uncompressed matrix from the first memory subsystem, compresses the uncompressed matrix to generate a compressed matrix, and provides the compressed matrix to the second memory subsystem. In response to a third type of request from a processor, the direct memory access controller requests a compressed matrix from the second memory subsystem, decompresses the compressed matrix to generate an uncompressed matrix, and provides the uncompressed matrix to the first memory subsystem.
Type: Application
Filed: May 31, 2017
Publication date: December 6, 2018
Inventors: Michael Andreas Staudenmaier, Leonardo Surico, Maik Brett
-
Publication number: 20180349291
Abstract: Systems, apparatuses, and methods for efficiently allocating data in a cache are described. In various embodiments, a processor decodes an indication in a software application identifying a temporal data set. The data set is flagged with a data set identifier (DSID) indicating temporal data to drop after consumption. When the data set is allocated in a cache, the data set is stored with a non-replaceable attribute to prevent a cache replacement policy from evicting the data set before it is dropped. A drop command with an indication of the DSID of the data set is later issued after the data set is read (consumed). A copy of the data set is not written back to the lower-level memory although the data set is removed from the cache. An interrupt is generated to notify firmware or other software of the completion of the drop command.
Type: Application
Filed: May 31, 2017
Publication date: December 6, 2018
Inventors: Wolfgang H. Klingauf, Kenneth C. Dyke, Karthik Ramani, Winnie W. Yeung, Anthony P. DeLaurier, Luc R. Semeria, David A. Gotwalt, Srinivasa Rangan Sridharan, Muditha Kanchana
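The drop-after-consumption behavior can be modeled with a toy cache: lines tagged with a DSID are pinned against eviction, and a drop command removes them with no write-back. All names and the structure here are illustrative assumptions, not the patent's implementation:

```python
# Toy model of DSID-tagged, non-replaceable cache lines with a drop command.
class Cache:
    def __init__(self):
        self.lines = {}           # addr -> (data, dsid, pinned)
        self.writebacks = []      # addrs written back to lower-level memory

    def allocate(self, addr, data, dsid=None):
        # Temporal data (with a DSID) is stored non-replaceable.
        pinned = dsid is not None
        self.lines[addr] = (data, dsid, pinned)

    def evict(self, addr):
        data, dsid, pinned = self.lines[addr]
        if pinned:
            return False          # replacement policy may not evict pinned data
        self.writebacks.append(addr)
        del self.lines[addr]
        return True

    def drop(self, dsid):
        # Remove every line with this DSID; no copy is written back.
        for addr in [a for a, (_, d, _) in self.lines.items() if d == dsid]:
            del self.lines[addr]
```

The point of the two paths is visible in the model: a normal eviction produces a write-back, while a drop discards consumed temporal data silently, saving the memory bandwidth a write-back would cost.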
-
Publication number: 20180349292
Abstract: A computing system comprises one or more cores. Each core comprises a processor and a switch, with each processor coupled to a communication network among the cores. Also disclosed are techniques for implementing an adaptive last level allocation policy in a last level cache in a multicore system: receiving one or more new blocks for allocation for storage in the cache; accessing a selected access profile from plural access profiles that define allocation actions, according to a least recently used type of allocation and based on a cache action, a state bit, and traffic pattern type for the new blocks of data; and handling the new block according to the selected access profile for a selected least recently used (LRU) position in the cache.
Type: Application
Filed: June 1, 2017
Publication date: December 6, 2018
Inventors: Gilad Tal, Gil Moran, Miriam Menes, Gil Kopilov, Shlomo Raikin
-
Publication number: 20180349293
Abstract: A controller for a data storage device is disclosed. The controller includes an encryptor and electronic fuses. The electronic fuses are provided for storage of a key which is used by the encryptor to encrypt user data before storing the user data in the data storage device. When a user deletes the user data, the controller changes at least one bit of the key stored in the electronic fuses from ‘0’ to ‘1’. Due to the change of the key stored in the electronic fuses for the encryptor, the deleted user data is fully prevented from leaking from the data storage device. A data storage device with a high confidentiality level is achieved.
Type: Application
Filed: January 11, 2018
Publication date: December 6, 2018
Inventor: Sheng-Liu LIN
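The effect of the key change is worth seeing concretely: once even one key bit is flipped, ciphertext stored on the media no longer decrypts to the original data. The toy stream cipher below (a SHA-256 keystream) stands in for the drive's real encryptor purely for illustration:

```python
import hashlib

# Crypto-erase illustration: data encrypted under a fuse-stored key becomes
# unrecoverable once a single bit of that key is irreversibly set to 1.
def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy XOR stream cipher; NOT the patent's (unspecified) cipher.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

key = bytes(16)                                 # 16 zero bytes in the e-fuses
ciphertext = keystream_xor(key, b"secret user data")

# "Delete": burn one fuse, flipping one key bit from 0 to 1. Fuses cannot be
# reset, so the original key is gone permanently.
burned_key = bytes([key[0] | 0b00000001]) + key[1:]
```

Because the original key can never be reconstructed from the burned fuses, this erases all data encrypted under it at once, without rewriting the media.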
-
Publication number: 20180349294
Abstract: An apparatus and method are provided for managing bounded pointers. The apparatus has processing circuitry to execute a sequence of instructions, and a plurality of storage elements accessible to the processing circuitry, for storage of bounded pointers and non-bounded pointers. Each bounded pointer has explicit range information associated therewith indicative of an allowable range of memory addresses when using the bounded pointer. A current range check storage element is then used to store a current range check state for the processing circuitry. When the current range check state indicates a default state, the processing circuitry is responsive to execution of a memory access instruction identifying a pointer to be used to identify a memory address, to perform a range check operation to determine whether access to that memory address is permitted.
Type: Application
Filed: October 19, 2016
Publication date: December 6, 2018
Inventor: Graeme Peter BARNES
-
Publication number: 20180349295
Abstract: A data processor includes an access target with the address assigned to a memory space, an access subject that gains access to the access target while specifying address, identifier, and access type, and a memory protection resource including an associative memory to perform an access control. The memory protection resource includes a plurality of entries, each including a region setting unit, an identifier determination information unit, and an attribute setting unit. When the address specified by the access subject at the access is included in the region set in the region setting unit in the entry, the identifier agrees with at least one of the identifiers specified according to the identifier determination information, and the specified access type agrees with the access type set in the attribute setting unit, the memory protection resource permits the access.
Type: Application
Filed: August 8, 2018
Publication date: December 6, 2018
Inventors: Koji ADACHI, Yoichi YUYAMA
-
Publication number: 20180349296
Abstract: Interface circuitry is provided for a host device, the interface circuitry for controlling data connections between the host device and a peripheral device.
Type: Application
Filed: May 24, 2018
Publication date: December 6, 2018
Applicant: Cirrus Logic International Semiconductor Ltd.
Inventors: Robert David RAND, Bradley Allan LAMBERT
-
Publication number: 20180349297
Abstract: One embodiment provides an apparatus comprising a first processor to execute a function driver for a peripheral having a first bus interface and virtualized host controller interface logic to provide a protocol interface associated with the first bus interface to the function driver to enable the function driver to control a set of peripherals connected via at least a second bus interface, the second bus interface different from the first bus interface.
Type: Application
Filed: September 30, 2017
Publication date: December 6, 2018
Inventors: Daniel B. Wilson, Scott M. Deandrea, Roberto G. Yepez
-
Publication number: 20180349298
Abstract: There is provided an information-sharing device including, in a second device connected to a first device, an information obtaining unit which obtains, through a communication unit of the second device, first application information indicating an application possessed by the first device, a shared information generating unit which generates shared information shared by the first device and the second device, based on the first application information obtained by the information obtaining unit, and a transmission control unit which transmits the shared information through the communication unit to the first device.
Type: Application
Filed: July 27, 2018
Publication date: December 6, 2018
Applicant: Sony Corporation
Inventors: Takashi Onohara, Roka Ueda, Keishi Daini, Taichi Yoshio, Yuji Kawabe, Seizi Iwayagano, Takuma Higo, Eri Sakai
-
Publication number: 20180349299
Abstract: A system for managing one or more queues in a multi-processor environment includes a memory configured to be accessed by a plurality of processing elements, and a queue manager disposed in communication with a plurality of processors and with the memory, the queue manager configured to control a queue in the memory, the queue including a plurality of queue elements, the queue manager configured to intercept a message from a processing element of the plurality of processing elements and perform one or more queuing operations on the queue based on the message. The system also includes a dynamically configurable queue full value maintained by the queue manager, the queue full value being a threshold value that specifies a maximum number of the queue elements that can be written to before a queue full condition is detected, the maximum number based on a number of processing elements.
Type: Application
Filed: May 31, 2017
Publication date: December 6, 2018
Inventors: Clinton E. Bubb, Michael Grassi, Howard M. Haynie, Raymond M. Higgs, Kirk Pospesel
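The key idea, a queue-full threshold set below physical capacity and derived from the number of producers, can be sketched as follows. The headroom rule (one reserved slot per processing element) is an assumption chosen for illustration; the patent only says the threshold is based on the number of processing elements:

```python
# Sketch of a queue with a dynamically configurable queue-full threshold.
class ManagedQueue:
    def __init__(self, capacity, num_processing_elements):
        self.slots = []
        self.capacity = capacity
        # Reserve headroom proportional to the number of producers, so that
        # in-flight writes from each element still have somewhere to land.
        self.full_threshold = capacity - num_processing_elements

    def is_full(self):
        # The full condition trips at the threshold, not at raw capacity.
        return len(self.slots) >= self.full_threshold

    def enqueue(self, element):
        if self.is_full():
            return False       # queue-full condition detected
        self.slots.append(element)
        return True
```

Because the threshold is a stored value rather than the array size, it can be reconfigured at runtime as processing elements are added or removed.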
-
Publication number: 20180349300
Abstract: A system for managing one or more queues in a multi-processor environment includes a memory configured to be accessed by a plurality of processing elements, and a queue manager disposed in communication with a plurality of processors and with the memory, the queue manager configured to control a queue in the memory, the queue including a plurality of queue elements, the queue manager configured to intercept a message from a processing element of the plurality of processing elements and perform one or more queuing operations on the queue based on the message. The system also includes a dynamically configurable queue full value maintained by the queue manager, the queue full value being a threshold value that specifies a maximum number of the queue elements that can be written to before a queue full condition is detected, the maximum number based on a number of processing elements.
Type: Application
Filed: November 2, 2017
Publication date: December 6, 2018
Inventors: Clinton E. Bubb, Michael Grassi, Howard M. Haynie, Raymond M. Higgs, Kirk Pospesel
-
Publication number: 20180349301
Abstract: Method and apparatus for managing a non-volatile memory (NVM). In some embodiments, a memory module has a memory module electronics (MME) circuit configured to program data to and read data from solid-state non-volatile memory cells of the NVM. A controller is adapted to communicate commands and data to the MME circuit via an intervening data bus. The controller operates to reset the MME circuit by issuing a reset command to the MME circuit over the data bus, activating a decoupling circuit coupled between the data bus and a reference line at a reference voltage level to remove capacitance from the data bus resulting from the reset command, and subsequently sensing a voltage on the data bus. In some cases, multiple MME circuits and NVMs may be arranged on a plurality of flash dies which are concurrently reset by the controller.
Type: Application
Filed: June 1, 2017
Publication date: December 6, 2018
Inventor: Timothy Canepa
-
Publication number: 20180349302
Abstract: Apparatuses and methods for variable latency memory operations are disclosed herein. An example apparatus may include a memory configured to provide first information during a variable latency period indicating the memory is not available to perform a command, wherein the first information is indicative of a remaining length of the variable latency period, the remaining length is one of a relatively short, normal, or long period of time, the memory configured to provide second information in response to receiving the command after the latency period.
Type: Application
Filed: August 8, 2018
Publication date: December 6, 2018
Applicant: Micron Technology, Inc.
Inventors: Graziano Mirichigni, Daniele Balluchi, Luca Porzio
-
Publication number: 20180349303
Abstract: Provided is an input-output system. An input-output system 100 includes an input-output operation part 110, and an input-output model having a character representation space constructed from a large-scale data set. When input data and a character representation vector are input into the input-output operation part 110, output data corresponding to the input data, reflecting the characteristics corresponding to the character representation vector, is output. Further, the input-output system 100 includes a character representation vector calculation part 120 which, when a character data set, which is a small-scale data set, is input, outputs a character representation vector corresponding to the characteristics represented in the character data set, using the input-output model and the character representation space fixed at the input-output operation part 110.
Type: Application
Filed: April 3, 2018
Publication date: December 6, 2018
Inventors: Koichi HAMADA, Kazuki FUJIKAWA, Yuya UNNO, Sosuke KOBAYASHI, Yuta KIKUCHI
-
Publication number: 20180349304
Abstract: A technique for handling interrupts in a data processing system includes receiving, at an interrupt presentation controller (IPC), an event notification message (ENM) that specifies an event target number and a number of bits to ignore. In response to a slot being available in an interrupt request queue, the IPC enqueues the ENM in the slot. In response to the ENM being dequeued from the interrupt request queue, the IPC determines a group of virtual processor threads that may be potentially interrupted based on the event target number and the number of bits to ignore specified in the ENM. The event target number identifies a specific virtual processor thread and the number of bits to ignore identifies the number of lower-order bits to ignore with respect to the specific virtual processor thread when determining a group of virtual processor threads that may be potentially interrupted.
Type: Application
Filed: June 4, 2017
Publication date: December 6, 2018
Inventors: FLORIAN A. AUERNHAMMER, DANIEL WIND
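The grouping rule described above reduces to a low-order bit mask: clearing the ignored bits of the event target number gives the base of a power-of-two group of virtual processor threads. A minimal sketch (function name is illustrative):

```python
# Compute the group of VP threads that may be interrupted, given an event
# target number and a count of lower-order bits to ignore.
def interruptible_group(event_target_number, bits_to_ignore):
    base = event_target_number & ~((1 << bits_to_ignore) - 1)
    return list(range(base, base + (1 << bits_to_ignore)))
```

With zero bits to ignore the group collapses to the single specified thread, while each additional ignored bit doubles the set of candidate threads, letting one ENM target an aligned group of any power-of-two size.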
-
Publication number: 20180349305
Abstract: A technique for handling interrupts in a data processing system includes receiving, by an interrupt routing controller (IRC), an event routing message (ERM) that includes an event source number for a notification source with an unserviced interrupt. In response to receiving the ERM, the IRC builds an event notification message (ENM) based on the event source number. The IRC determines a scope for the ENM based on an event target group (ETG) associated with the event source number. The IRC issues the ENM to an interrupt presentation controller (IPC) at the scope associated with the ETG.
Type: Application
Filed: June 6, 2017
Publication date: December 6, 2018
Inventors: Florian A. Auernhammer, Wayne M. Barrett, Robert A. Drehmel, Michael S. Siegel
-
Publication number: 20180349306
Abstract: A technique for handling interrupts in a data processing system includes receiving, at an interrupt presentation controller (IPC), an event notification message (ENM) that specifies an event target number and a number of bits to ignore. In response to a slot being available in an interrupt request queue, the IPC enqueues the ENM in the slot. In response to the ENM being dequeued from the interrupt request queue, the IPC determines a group of virtual processor threads that may be potentially interrupted based on the event target number and the number of bits to ignore specified in the ENM. The event target number identifies a specific virtual processor thread and the number of bits to ignore identifies the number of lower-order bits to ignore with respect to the specific virtual processor thread when determining a group of virtual processor threads that may be potentially interrupted.
Type: Application
Filed: November 29, 2017
Publication date: December 6, 2018
Inventors: FLORIAN A. AUERNHAMMER, DANIEL WIND
-
Publication number: 20180349307
Abstract: A technique for handling interrupts in a data processing system includes receiving, by an interrupt routing controller (IRC), an event routing message (ERM) that includes an event source number for a notification source with an unserviced interrupt. In response to receiving the ERM, the IRC builds an event notification message (ENM) based on the event source number. The IRC determines a scope for the ENM based on an event target group (ETG) associated with the event source number. The IRC issues the ENM to an interrupt presentation controller (IPC) at the scope associated with the ETG.
Type: Application
Filed: November 29, 2017
Publication date: December 6, 2018
Inventors: Florian A. Auernhammer, Wayne M. Barrett, Robert A. Drehmel, Michael S. Siegel
-
Publication number: 20180349308
Abstract: A monolithic integrated circuit that supports multiple industrial Ethernet protocols, fieldbus protocols, and industrial application processing, thereby providing a single hardware platform that may be used to build various automation devices/equipment implemented in an industrial network, such as controllers, field devices, network communication nodes, etc.
Type: Application
Filed: August 6, 2018
Publication date: December 6, 2018
Applicant: Schneider Electric Industries SAS
Inventors: Patrice Jaraudias, Jean-Jacques Adragna, Antonio Chauvet, Gary R. Ware
-
Publication number: 20180349309
Abstract: A backplane (1) comprising: a first module connector (2d) configured to receive a first printed circuit board module (5d) and including a first connector portion (23d-26d); and a second module connector (2e) configured to receive a second printed circuit board module (5e) and including a second connector portion (23e-26e), the first connector portion (23d-26d) being connected to the second connector portion (23e-26e) through a backplane bus.
Type: Application
Filed: September 22, 2015
Publication date: December 6, 2018
Inventor: Miroslaw Pierre KLABA
-
Publication number: 20180349310
Abstract: Examples provided herein relate to hot plugging PCIe cards. For example, a field programmable gate array (“FPGA”) communicably coupled to a PCIe bus may detect a new PCIe card physically connected to the PCIe bus. The FPGA may access configuration information stored by the FPGA that is associated with the PCIe bus. The FPGA may determine, based on the accessed configuration information, whether to facilitate connection of the new PCIe card to the PCIe bus. Responsive to determining that connection of the new PCIe card to the PCIe bus should be facilitated, the new PCIe card may be trained to communicate with the PCIe bus and an upstream device communicably coupled to the PCIe bus.
Type: Application
Filed: May 31, 2017
Publication date: December 6, 2018
Inventors: Srivani Kor KORIGINJA RAMASWAMY, Yiling ZHANG, Stewart Hoang NGUYEN
-
Publication number: 20180349311
Abstract: An apparatus and methods are disclosed for a bidirectional front-end circuit included within a system on chip (SoC). The bidirectional front-end circuit includes a differential bidirectional terminal for receiving and transmitting signals. The bidirectional front-end circuit is configured to provide a first communication path between a first controller and a connector through the differential bidirectional terminal when operating in a first mode. And, the bidirectional front-end circuit is reconfigured to provide a second communication path between a second controller and the connector through the differential bidirectional terminal when operating in a second mode.
Type: Application
Filed: August 8, 2018
Publication date: December 6, 2018
Inventors: Zhi Zhu, Xiaohua Kong, Nir Gerber, Christian Josef Wiesner
-
Publication number: 20180349312
Abstract: A method, system and computer program product for improving the usability of a calendar application. A calendar client agent receives calendar information, such as meetings, appointments, vacations, tasks, etc. from various systems, such as an electronic mail system, a social networking system, an instant messaging system and a wiki. The calendar client agent evaluates the retrieved calendar information with respect to a set of presentation rules. The calendar client agent then presents the retrieved calendar information in a horizontal bar (also referred to as a “calendar bar”) in the calendar application over a duration of time (e.g., twelve hours of the current day) in relation to the set of presentation rules. In this manner, the user will be able to more easily ascertain which events or activities are scheduled.
Type: Application
Filed: August 9, 2018
Publication date: December 6, 2018
Inventors: Paul R. Bastide, Andrew E. Davis, Margo L. Ezekiel, Leah A. Lawrence, Katherine M. Parsons, Jodi Rajaniemi
-
Publication number: 20180349313
Abstract: Disclosed herein are a parameter server and a method for sharing distributed deep-learning parameters using the parameter server. The method for sharing distributed deep-learning parameters using the parameter server includes initializing a global weight parameter in response to an initialization request by a master process; performing an update by receiving a learned local gradient parameter from the worker process, which performs deep-learning training after updating a local weight parameter using the global weight parameter; accumulating the gradient parameters in response to a request by the master process; and performing an update by receiving the global weight parameter from the master process that calculates the global weight parameter using the accumulated gradient parameters of the one or more worker processes.
Type: Application
Filed: May 18, 2018
Publication date: December 6, 2018
Inventors: Shin-Young AHN, Eun-Ji LIM, Yong-Seok CHOI, Young-Choon WOO, Wan CHOI
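The pull/push/accumulate/update cycle described above can be sketched in a single-process model. The class names and the plain averaged-gradient update rule are assumptions for illustration; the patent does not specify the update formula, and a real deployment would run workers and master as separate processes:

```python
# Single-process sketch of the parameter-sharing cycle: workers pull the
# global weight, train locally, and push gradients; the master averages the
# accumulated gradients into a new global weight.
class ParameterServer:
    def __init__(self, dim):
        self.global_weight = [0.0] * dim
        self.accumulated = [0.0] * dim
        self.num_pushed = 0

    def pull_weight(self):
        # A worker updates its local weight from this copy before training.
        return list(self.global_weight)

    def push_gradient(self, grad):
        # A worker sends its learned local gradient; the server accumulates.
        self.accumulated = [a + g for a, g in zip(self.accumulated, grad)]
        self.num_pushed += 1

    def master_update(self, lr=0.1):
        # Master computes the new global weight from the accumulated
        # gradients (here: averaged SGD step), then resets the accumulator.
        avg = [a / self.num_pushed for a in self.accumulated]
        self.global_weight = [w - lr * g
                              for w, g in zip(self.global_weight, avg)]
        self.accumulated = [0.0] * len(self.global_weight)
        self.num_pushed = 0
        return self.pull_weight()
```

Keeping accumulation on the server and the weight calculation with the master is what lets many workers train asynchronously against one shared parameter state.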
-
Publication number: 20180349314
Abstract: An IoT device is provided and includes a peripheral Operating System (OS), a peripheral API, and a remote management application. The IoT device is configured to provide extended peripheral support for additional peripherals accessible to a terminal in an environment isolated from the terminal environment, and the IoT device exposes the extended peripherals as IoT devices accessible over multiple communication channels and the Internet.
Type: Application
Filed: May 30, 2017
Publication date: December 6, 2018
Inventors: Robin Ian Gregor Angus, Jamie Cramb, Alexander John Haddow, Richard Han
-
Publication number: 20180349315
Abstract: A graphics processing pipeline includes a rasteriser, an early culling tester, a renderer, a late culling tester, and a culling test data buffer that stores data values for use by the early and late culling testers. The testing of fragments by the early and late culling testers is controlled in accordance with a first set of state information indicative of when a culling test operation to be used to determine whether to cull the fragments is to be performed, and a second set of state information indicative of when to determine whether to update the culling test data buffer with data for the fragments based on a culling test operation, allocated to the fragments.
Type: Application
Filed: May 30, 2017
Publication date: December 6, 2018
Applicant: ARM Limited
Inventors: Frode Heggelund, Toni Viki Brkic, Reimar Gisbert Döffinger
-
Publication number: 20180349316
Abstract: A parallel processing apparatus includes: processors; and a network switch, wherein a first processor: generates divided matrix data by dividing the matrix data in such a manner that an overlapping portion is present with each other; transmits the divided matrix data to a second processor; generates first evaluation-value matrix data from the divided matrix data; transmits, to the second processor, first elements in a first overlapping portion of the first evaluation-value matrix data; receives, from the second processor, second elements of a second overlapping portion of second evaluation-value matrix data; calculates first added evaluation data by adding the second elements to the first elements; transmits the first added evaluation data to the second processor; receives, from the second processor, second added evaluation data; and calculates a first C point or a first F point based on the first evaluation-value matrix data which is updated using the second added evaluation data.
Type: Application
Filed: May 31, 2018
Publication date: December 6, 2018
Applicant: FUJITSU LIMITED
Inventors: Jun FUJISAKI, Ryoji TANDOKORO, Akira HOSOI, Hideyuki Shitara
-
Publication number: 20180349317
Abstract: An apparatus and a method is provided. The apparatus includes a polynomial generator, including an input and an output; a first matrix generator, including an input connected to the output of the polynomial generator, and an output; a second matrix generator, including an input connected to the output of the first matrix generator, and an output; a third matrix generator, including a first input connected to the output of the first matrix generator, a second input connected to the output of the second matrix generator, and an output; and a convolution generator, including an input connected to the output of the third matrix generator, and an output.
Type: Application
Filed: June 1, 2017
Publication date: December 6, 2018
Inventors: Weiran DENG, Zhengping Ji
-
Publication number: 20180349318
Abstract: The present disclosure applies to a variable group calculation apparatus for calculating an undetermined variable group that simultaneously minimizes a difference value and a data value. The difference value is the difference between an observation data group and an added composite value, which is obtained by adding and combining the undetermined variable group and a dictionary data group. The data value includes the difference value and a regularization term of the undetermined variable group. The variable group calculation apparatus includes a convolution unit configured to convert the regularization term to a convolution value for an L1 norm using the undetermined variable group and a mollifier function, and a calculation unit configured to perform the calculation using the regularization term converted by the convolution unit.
Type: Application
Filed: May 24, 2018
Publication date: December 6, 2018
Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Shintaro Yoshizawa, Norimasa Kobori
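A mollifier turns the nondifferentiable L1 term into a smooth surrogate. The sketch below is one common instance of that general idea, not the patent's specific construction: |x| is convolved with a narrow Gaussian mollifier on a grid, yielding a differentiable approximation suitable for gradient-based minimization.

```python
import numpy as np

# Sketch: smooth the L1 term |x| by convolving it with a Gaussian
# mollifier. This illustrates the general technique only; the patent's
# convolution unit may differ.

def mollified_abs(grid, eps=0.1):
    phi = np.exp(-grid**2 / (2 * eps**2))
    phi /= phi.sum()                     # normalized mollifier
    return np.convolve(np.abs(grid), phi, mode="same")

x = np.linspace(-1, 1, 401)
smooth = mollified_abs(x)
# Away from zero the surrogate tracks |x|; near zero the kink is rounded.
```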
-
Publication number: 20180349319
Abstract: Systems and methods for an imaging system including: a first sensor to acquire a sequence of images of a first modality; a memory to store a first convolutional memory matrix, wherein each element of the first convolutional memory matrix is a convolutional function of correspondingly located elements of coefficient matrices of the convolutional representation of the images of the first modality, and to store a first dictionary matrix including atoms of the images of the first modality; and a processor to transform a first image of a scene acquired by the first sensor as a convolution of the first dictionary matrix and a first coefficient matrix, to update the elements of the first convolutional memory matrix with the convolutional function of correspondingly located non-zero elements of the first coefficient matrix, and to update the dictionary matrix using the updated first convolutional memory matrix.
Type: Application
Filed: June 29, 2017
Publication date: December 6, 2018
Applicant: Mitsubishi Electric Research Laboratories, Inc.
Inventors: Ulugbek Kamilov, Kevin Degraux, Petros T. Boufounos, Dehong Liu
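The "convolution of a dictionary matrix and a coefficient matrix" is the synthesis step of convolutional sparse coding. A minimal 1-D version (an illustrative assumption; the patent works with images) looks like:

```python
import numpy as np

# Sketch of convolutional synthesis: a signal is modeled as the sum over
# dictionary atoms of each atom convolved with its sparse coefficient map.

def synthesize(atoms, coeffs, length):
    out = np.zeros(length)
    for atom, c in zip(atoms, coeffs):
        out += np.convolve(c, atom, mode="full")[:length]
    return out

atoms = [np.array([1.0, -1.0]), np.array([0.5, 0.5])]
coeffs = [np.zeros(8), np.zeros(8)]
coeffs[0][2] = 1.0                 # sparse activations: one per atom
coeffs[1][5] = 2.0
signal = synthesize(atoms, coeffs, 8)
```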
-
Publication number: 20180349320
Abstract: According to one embodiment, a time series data analysis device includes a feature vector calculator and an updater. The feature vector calculator calculates feature amounts of a plurality of feature waveforms based on distances between the feature waveforms and a partial time series, the partial time series being data belonging to each of a plurality of intervals set in a plurality of pieces of time series data. The updater updates the feature waveforms based on the feature amounts.
Type: Application
Filed: March 9, 2018
Publication date: December 6, 2018
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Akihiro YAMAGUCHI, Takeichiro NISHIKAWA
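The distance-based feature amount resembles shapelet features from the time-series literature. A minimal sketch follows; the minimum-over-windows aggregation is an assumption, as the abstract does not commit to a specific aggregation:

```python
import numpy as np

# Sketch: the feature amount for a (series, waveform) pair is taken here
# as the minimum Euclidean distance between the feature waveform and every
# same-length partial time series (sliding window).

def feature_amount(series, waveform):
    w = len(waveform)
    windows = np.lib.stride_tricks.sliding_window_view(series, w)
    return np.linalg.norm(windows - waveform, axis=1).min()

s = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0])
wave = np.array([1.0, 2.0, 1.0])
# The waveform occurs exactly within the series, so the distance is zero.
```

Note that `sliding_window_view` requires NumPy 1.20 or later.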
-
Publication number: 20180349321
Abstract: A parallel processing method for solving matrix simultaneous linear equations using at least one of the Gauss-Seidel method and the SOR method, performing an operation on each row of matrix data and outputting solution data for each row. The method includes: performing a parallel operation on elements to the right of the diagonal elements of the matrix data for each row to calculate right-side operation result data; performing a parallel operation on elements to the left of the diagonal elements for each row, using solution data, to calculate left-side operation result data; and calculating the solution data of the diagonal elements of each row using the right-side operation result data and the left-side operation result data.
Type: Application
Filed: June 1, 2018
Publication date: December 6, 2018
Applicant: FUJITSU LIMITED
Inventors: Hideyuki Shitara, Akira HOSOI, Ryoji TANDOKORO
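The right-side/left-side split described here is exactly how a Gauss-Seidel sweep decomposes each row. A serial sketch (the parallel distribution of rows across processors is omitted):

```python
import numpy as np

# Gauss-Seidel sweep organized as in the abstract: for each row, sum the
# elements right of the diagonal (using the previous iterate) and left of
# the diagonal (using already-updated values), then solve for the diagonal.

def gauss_seidel(a, b, iterations=50):
    x = np.zeros(len(b))
    for _ in range(iterations):
        for i in range(len(b)):
            right = a[i, i + 1:] @ x[i + 1:]   # right-side operation result
            left = a[i, :i] @ x[:i]            # left-side operation result
            x[i] = (b[i] - left - right) / a[i, i]
    return x

a = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = gauss_seidel(a, b)    # converges for this diagonally dominant system
```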
-
Publication number: 20180349322
Abstract: Generation of models in real-time embedded systems that approximate non-embedded models while reducing the complexity associated with the non-embedded models is provided herein. A system can comprise a memory coupled to a processor, where the memory stores executable components executed by the processor. The executable components can comprise an evaluation manager component that identifies an input parameter of a first model based on a defined output parameter of the first model, and a relation manager component that determines one or more relations in the first model. Relations of the one or more relations can comprise an intermediary parameter determined based on the input parameter and the defined output parameter of the first model. Further, the system can comprise a model generator manager component that generates a second model that approximates the first model and includes a replication of the one or more relations of the first model.
Type: Application
Filed: June 6, 2017
Publication date: December 6, 2018
Inventors: MacKenzie Dylan Cumings, Robert Schroer, Sean Hwang, Nicholas Visser, Sridhar Adibhatla
-
Publication number: 20180349323
Abstract: Systems, devices, and techniques are disclosed for outlier discovery system selection. A set of time series data including time series data objects may be received. A sample of time series data objects may be extracted from the time series data and decomposed into sub-components. Statistical classification may be used to select an outlier discovery system based on the sub-components, and a neural network may likewise be used to select an outlier discovery system based on the sub-components. A level of error of the neural network may be determined by comparing the selection made using statistical classification with the selection made by the neural network, and the weights of the neural network may be updated based on that level of error.
Type: Application
Filed: May 30, 2017
Publication date: December 6, 2018
Inventors: Ajay Krishna BORRA, Manpreet SINGH
-
Publication number: 20180349324
Abstract: The present disclosure describes a system, method, and computer program for real-time, computationally efficient calculation of a recommended value range for a quote variable, such as price, discount, volume, or closing time. The system uses the highest-density interval (HDI) of a probability density function (PDF) as a recommended or suggested value range for the quote variable. PDFs for the quote variable are precomputed for groups of related inputs, and each PDF is summarized as an array of discrete points. A dimension-reduction technique is applied to the PDF inputs in both the training and real-time (non-training) phases to reduce the number of possible combinations of PDFs. During a quote-creation process, a PDF look-up table enables the system to efficiently identify an applicable PDF from the group of precomputed PDFs based on the reduced input values.
Type: Application
Filed: June 6, 2017
Publication date: December 6, 2018
Inventors: Kirk G. Krappé, Neehar Giri, Man Chan, Isabelle Chai, Rahul Choudhry, Kitae Kim, Stanley Poon, Brian Li, Geeta Deodhar, Elliott Yama
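Computing an HDI from a PDF stored as an array of discrete points is straightforward: keep the highest-density points until the target probability mass is covered. A sketch, where the greedy selection and the 90% mass are illustrative choices:

```python
import numpy as np

# Sketch: highest-density interval (HDI) of a discretely sampled PDF.
# Greedily accumulate the highest-density points until the requested
# probability mass is reached, then report the spanned value range.

def hdi(values, density, mass=0.9):
    p = density / density.sum()            # normalize to a discrete PDF
    order = np.argsort(p)[::-1]            # highest density first
    n = np.searchsorted(np.cumsum(p[order]), mass) + 1
    kept = order[:n]
    return values[kept].min(), values[kept].max()

v = np.linspace(-4, 4, 801)
d = np.exp(-v**2 / 2)                      # standard normal shape
lo, hi = hdi(v, d, mass=0.9)
# For a normal PDF the 90% HDI is roughly (-1.64, 1.64).
```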
-
Publication number: 20180349325
Abstract: Hardware for speeding up MCMC is realized. An information processing apparatus includes a plurality of Ising chips and a controller that controls them. Each Ising chip includes a plurality of units, and each unit retains a spin state. The controller instructs one set of Ising chips among the plurality to compare the values of the spin states of corresponding units, and instructs that set of Ising chips to invert the values of a portion of the spins whose states differ between the corresponding units.
Type: Application
Filed: February 28, 2018
Publication date: December 6, 2018
Inventors: Takuya Okuyama, Masanao Yamaoka
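The compare-and-partially-invert step resembles a mixing move between replicas. A software sketch, where the ±1 spin encoding, the flip fraction, and flipping only one chip are assumptions for illustration:

```python
import random

# Sketch of the controller's instruction: compare the spin states of
# corresponding units on two chips, then invert a random portion of the
# spins that differ between them (here, on the first chip only).

def mix_replicas(chip_a, chip_b, fraction, rng):
    differing = [i for i, (s, t) in enumerate(zip(chip_a, chip_b)) if s != t]
    for i in rng.sample(differing, int(len(differing) * fraction)):
        chip_a[i] = -chip_a[i]
    return chip_a

a = [1, -1, 1, -1]
b = [1, 1, 1, 1]
mixed = mix_replicas(a, b, 1.0, random.Random(0))
# With fraction=1.0 every differing spin flips, so the chips now agree.
```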
-
Publication number: 20180349326
Abstract: Relationships are extracted, using natural language processing, between descriptors in one or more lists of weather condition descriptors and adverse event descriptors within unstructured data sources. A medical condition descriptor may be used to further extract relationships between weather condition descriptors and adverse event descriptors. A data object is generated, according to a data model, based on the extracted relationships between the descriptors. A set of candidate unstructured documents containing the extracted relationships is retrieved and filtered by selecting unstructured documents that include a precautionary measure descriptor. The filtered precautionary measure descriptors are presented in a summarized message to a user device.
Type: Application
Filed: May 30, 2017
Publication date: December 6, 2018
Inventors: Shilpi Ahuja, Sheng Hua Bao, Rashmi Gangadharaiah
-
Publication number: 20180349327
Abstract: A text error correction method and a text error correction apparatus based on a recurrent neural network of artificial intelligence are provided. The method includes acquiring text data to be error-corrected, and performing error correction on the text data using a trained recurrent neural network model so as to generate error-corrected text data.
Type: Application
Filed: December 28, 2017
Publication date: December 6, 2018
Inventors: Chunjie YANG, Shujie YAO
-
Publication number: 20180349328
Abstract: In a method for generating a presentation, a computer communicates data representative of a plurality of presentation components, receives data representative of a presentation component selected from the plurality, retrieves a predefined rule associated with the selected presentation component, applies that rule, and inserts the presentation component into the presentation.
Type: Application
Filed: August 13, 2018
Publication date: December 6, 2018
Inventor: Ala Mahafzah
-
Publication number: 20180349329
Abstract: Applications may be created and registered to an online ecosystem and then execute within individual web applications such as productivity applications, communication applications, collaboration applications, and so on. These non-native applications may be enabled to interact with files and provide custom experiences for a user. The applications may also be enabled to interact with additional information discovered about the user within the ecosystem to provide custom experiences, and may further be enabled to create custom workflows that allow users to accomplish new tasks.
Type: Application
Filed: June 20, 2018
Publication date: December 6, 2018
Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Dorrene BROWN, Joey MASTERSON, Nate WADDOUPS, Shreedhar THIRUGNANASAMBANTHAM, Xiao WU, Jay RATHI, Mauricio ORDONEZ, Darren MILLER, Ela MALANI, John WANG, Sreekanth LINGANNAPETA, Gabriel HALL
-
Publication number: 20180349330
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer-readable storage medium, including a computer-implemented method for providing creatives. The method comprises identifying, using one or more processors, a creative for processing, the creative including a title portion and a body portion, where the body portion includes a visual uniform resource locator (URL) comprising a visual portion and a link to a resource. The method further comprises evaluating the visual URL for inclusion in the title portion, including determining when promotion of the visual URL satisfies one or more promotion criteria and, if so, promoting the visual URL for inclusion in the title portion. The method further comprises providing the creative including the title portion with the promoted visual URL.
Type: Application
Filed: August 13, 2018
Publication date: December 6, 2018
Inventors: Vivek Raghunathan, David G. Arthur, Rohan Jain, Emily Kay Moxley, Shivakumar Venkataraman, Nipun Kwatra, Brett A. McLarnon, David J. Ganzhorn
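The promote-if-criteria-satisfied flow can be sketched simply. The title-length budget and the separator below are assumptions; the abstract only specifies "one or more promotion criteria":

```python
# Sketch: promote the visual URL into the title portion when an assumed
# promotion criterion (a title-length budget, here) is satisfied;
# otherwise leave the creative unchanged.

def promote_visual_url(title, body, visual_url, max_title_len=60):
    promoted = f"{title} | {visual_url}"
    if len(promoted) <= max_title_len:     # assumed promotion criterion
        return promoted, body
    return title, body

title, body = promote_visual_url("Fast Widgets", "Buy widgets today.",
                                 "widgets.example.com")
# title is now "Fast Widgets | widgets.example.com"
```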
-
Publication number: 20180349331
Abstract: A system and method are illustrated for platform-independent rendering of a document in a web browser supporting a two-dimensional (2D) canvas. The system and method include obtaining the document, wherein the document includes text characters, text elements, and associated style information including at least one font; determining that font metrics do not exist and obtaining the font metrics for the at least one font; using the font metrics and the text elements to determine how the document is divided according to page criteria; determining that a font file does not exist and obtaining the font file for the at least one font; and rendering the document by drawing glyphs associated with the text characters in the 2D canvas, using the font file and the page criteria, so that the at least one font and the page criteria are platform-independent.
Type: Application
Filed: June 7, 2018
Publication date: December 6, 2018
Inventor: Wang Xin
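The font-metric-driven pagination step can be sketched outside a browser. The per-character advance widths, the greedy line-breaking, and the fixed line budget below are all assumptions for illustration:

```python
# Sketch: divide text into lines and pages using per-character advance
# widths (font metrics), so pagination does not depend on the platform's
# own text measurement. Missing metrics default to a width of 1.0.

def paginate(text, widths, line_width, lines_per_page):
    lines, line, used = [], "", 0.0
    for word in text.split():
        w = sum(widths.get(ch, 1.0) for ch in word) + widths.get(" ", 1.0)
        if line and used + w > line_width:   # greedy line break
            lines.append(line)
            line, used = "", 0.0
        line = (line + " " + word).strip()
        used += w
    if line:
        lines.append(line)
    return [lines[i:i + lines_per_page]
            for i in range(0, len(lines), lines_per_page)]

pages = paginate("aa bb cc dd", {}, line_width=6, lines_per_page=1)
```

Because the metrics travel with the document rather than coming from the host, two platforms produce identical page breaks.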