Patents Involving Pattern Matching (Class 348/422.1)
-
Patent number: 10417525
Abstract: A client device configured with a neural network includes a processor, a memory, a user interface, a communications interface, a power supply and an input device, wherein the memory includes a trained neural network received from a server system that has trained and configured the neural network for the client device. A server system and a method of training a neural network are disclosed.
Type: Grant
Filed: March 19, 2015
Date of Patent: September 17, 2019
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Zhengping Ji, Ilia Ovsiannikov, Yibing Michelle Wang, Lilong Shi
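The split described in this abstract, training on a server and shipping only the resulting parameters to a client that runs inference, can be sketched in a few lines. This is an illustrative toy (a single-neuron perceptron with hypothetical function names), not Samsung's claimed system:

```python
# Illustrative sketch: a server trains a tiny one-neuron model and ships the
# learned weights to a client, which only runs inference. All names and the
# perceptron training rule are hypothetical stand-ins for the patented system.

def server_train(samples, labels, lr=0.1, epochs=200):
    """Server side: fit y = 1 if w.x + b > 0 via perceptron updates."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return {"weights": w, "bias": b}  # the "trained network" sent to the client

def client_infer(model, x):
    """Client side: inference using only the received parameters."""
    w, b = model["weights"], model["bias"]
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Server learns an AND-like task; the client classifies without ever training.
model = server_train([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
result = client_infer(model, (1, 1))
```

The key property the abstract relies on is that the client never needs the training data or the training loop, only the parameter payload.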
-
Patent number: 10212307
Abstract: Provided is a data transmission system that accurately performs transmitting and receiving of job data. A cloud server transmits job data, a vibration-detection sensor of a gateway detects vibration that is applied to the gateway, a vibration-detection sensor of an MFP detects vibration that is applied to the MFP, and the gateway stores job data that is received from the cloud server in memory of the gateway and then transmits the job data to the MFP. Moreover, the gateway, when the value of vibration-detection data of its vibration-detection sensor is equal to or less than a specified threshold value, stores the job data from the cloud server in memory, and when the value of vibration-detection data of the vibration-detection sensor is equal to or less than a specified value, transmits the job data that is stored in memory to the MFP.
Type: Grant
Filed: January 27, 2018
Date of Patent: February 19, 2019
Assignee: KYOCERA Document Solutions Inc.
Inventor: Takatoshi Nishio
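The gating logic in this abstract reduces to two threshold checks: accept a job into memory only while vibration is low, and flush buffered jobs to the printer only while vibration is low. A minimal sketch, with an assumed threshold and hypothetical class and method names:

```python
# Hypothetical sketch of vibration-gated job forwarding: the gateway buffers a
# job and only stores/forwards it while its vibration sensor reads at or below
# a threshold. Units and the threshold value are assumed, not from the patent.

VIBRATION_THRESHOLD = 5

class Gateway:
    def __init__(self, threshold=VIBRATION_THRESHOLD):
        self.threshold = threshold
        self.memory = []      # buffered job data
        self.forwarded = []   # jobs delivered to the MFP (printer)

    def receive_job(self, job, vibration):
        """Store a job from the cloud server only when vibration is low."""
        if vibration <= self.threshold:
            self.memory.append(job)
            return True
        return False          # caller should retry later

    def forward_jobs(self, vibration):
        """Flush buffered jobs to the MFP only when vibration is low."""
        if vibration <= self.threshold:
            self.forwarded.extend(self.memory)
            self.memory.clear()

gw = Gateway()
gw.receive_job("print-job-1", vibration=3)   # accepted: 3 <= 5
gw.receive_job("print-job-2", vibration=9)   # rejected: gateway is shaking
gw.forward_jobs(vibration=2)                 # delivers the buffered job
```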
-
Patent number: 9747916
Abstract: In a CELP-type speech coding apparatus, switching between an orthogonal search of a fixed codebook and a non-orthogonal search is performed in a practical and effective manner. The CELP-type speech coding apparatus includes a parameter quantizer that selects an adaptive codebook vector and a fixed codebook vector so as to minimize an error between a synthesized speech signal and an input speech signal. The parameter quantizer includes a fixed codebook searcher that switches between the orthogonal fixed codebook search and the non-orthogonal fixed codebook search based on a correlation value between a target vector for the fixed codebook search and the adaptive codebook vector obtained as a result of a synthesis filtering process.
Type: Grant
Filed: January 20, 2016
Date of Patent: August 29, 2017
Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventors: Hiroyuki Ehara, Takako Hori
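The switching criterion itself, a correlation value deciding which search to run, can be sketched independently of the CELP machinery. This is a rough stand-in, not Panasonic's implementation; the threshold and vectors are illustrative:

```python
# Sketch of the switching criterion: run the (costlier) orthogonal
# fixed-codebook search only when the search target is strongly correlated
# with the adaptive-codebook contribution. Threshold 0.5 is an assumption.

def normalized_correlation(a, b):
    """|<a, b>| / (||a|| * ||b||); 0.0 when either vector is all zeros."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return abs(dot) / (na * nb) if na > 0 and nb > 0 else 0.0

def choose_search(target, adaptive_contrib, threshold=0.5):
    """Return which fixed-codebook search mode to run for this subframe."""
    corr = normalized_correlation(target, adaptive_contrib)
    return "orthogonal" if corr >= threshold else "non-orthogonal"

# Highly correlated vectors -> orthogonalizing the search is worthwhile.
mode_a = choose_search([1.0, 2.0, 3.0], [1.1, 2.1, 2.9])
# Nearly orthogonal vectors -> the cheaper non-orthogonal search suffices.
mode_b = choose_search([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```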
-
Patent number: 8421821
Abstract: A 3D graphics rendering pipeline is used to carry out data comparisons for motion estimation in video data encoding. Video data for the pixel block of the video frame currently being encoded is loaded into the output buffers of the rendering pipeline. The video data for the comparison pixel blocks from the reference video frame is stored as texture map values in the texture cache of the rendering pipeline. Once the sets of pixel data for comparison have been stored, the rendering pipeline is controlled to render a primitive having fragment positions and texture coordinates corresponding to the data values that it is desired to compare. As each fragment is rendered, the stored and rendered fragment data is compared by a fragment compare unit and the determined differences in the data values are accumulated in an error term register.
Type: Grant
Filed: December 22, 2011
Date of Patent: April 16, 2013
Assignee: Arm Norway AS
Inventors: Jorn Nystad, Edvard Sorgard, Borgar Ljosland, Mario Blazevic
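What the fragment-compare stage and error-term register jointly compute is, in essence, a sum of absolute differences (SAD) between a current block and a candidate reference block. A CPU-side sketch of that computation (the GPU pipeline mapping itself is the patent's contribution and is not reproduced here):

```python
# CPU sketch of the comparison the fragment-compare unit performs: a sum of
# absolute differences between the block being encoded and a candidate block,
# accumulated the way the error-term register accumulates per-fragment errors.

def sad(current_block, reference_block):
    """Accumulate per-'fragment' absolute differences between pixel blocks."""
    error_term = 0  # plays the role of the error term register
    for cur_row, ref_row in zip(current_block, reference_block):
        for cur_px, ref_px in zip(cur_row, ref_row):
            error_term += abs(cur_px - ref_px)
    return error_term

current = [[10, 20], [30, 40]]
identical = [[10, 20], [30, 40]]
shifted = [[12, 18], [33, 41]]

# Motion estimation picks the candidate with the smallest accumulated error.
best = min((sad(current, cand), name)
           for cand, name in [(identical, "candidate A"), (shifted, "candidate B")])
```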
-
Patent number: 8111300
Abstract: Systems and methods to selectively combine video frame image data are disclosed. First image data corresponding to a first video frame and second image data corresponding to a second video frame are received from an image sensor. The second image data is adjusted by at least partially compensating for offsets between portions of the first image data with respect to corresponding portions of the second image data to produce adjusted second image data. Combined image data corresponding to a combined video frame is generated by performing a hierarchical combining operation on the first image data and the adjusted second image data.
Type: Grant
Filed: April 22, 2009
Date of Patent: February 7, 2012
Assignee: QUALCOMM Incorporated
Inventors: Hau Hwang, Hsiang-Tsun Li, Kalin M. Atanassov
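The two-step structure, estimate and compensate an offset, then combine the aligned data, can be shown on 1-D "frames". A toy sketch where a plain average stands in for the patent's hierarchical combining operation and all function names are hypothetical:

```python
# Toy 1-D sketch: estimate the shift of the second frame relative to the
# first, compensate for it, then combine the aligned frames. The average here
# is a stand-in for the patent's hierarchical combining operation.

def estimate_shift(first, second, max_shift=2):
    """Find the integer shift of `second` that best matches `first` (min SAD)."""
    best_shift, best_err = 0, float("inf")
    n = len(first)
    for s in range(-max_shift, max_shift + 1):
        overlap = [(first[i], second[i + s])
                   for i in range(n) if 0 <= i + s < n]
        err = sum(abs(a - b) for a, b in overlap) / len(overlap)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

def align(second, shift):
    """Shift `second` by `shift` samples, padding with edge values."""
    n = len(second)
    return [second[min(max(i + shift, 0), n - 1)] for i in range(n)]

def combine(first, aligned_second):
    """Average the aligned frames (stand-in for hierarchical combining)."""
    return [(a + b) / 2 for a, b in zip(first, aligned_second)]

first = [0, 0, 10, 10, 0, 0]
second = [0, 0, 0, 10, 10, 0]        # same content, shifted right by one
shift = estimate_shift(first, second)
combined = combine(first, align(second, shift))
```

Combining without the compensation step would blur the moved content; aligning first is what makes the averaged frame sharp.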
-
Patent number: 8106921
Abstract: A 3D graphics rendering pipeline is used to carry out data comparisons for motion estimation in video data encoding. Video data for the pixel block of the video frame currently being encoded is loaded into the output buffers of the rendering pipeline. The video data for the comparison pixel blocks from the reference video frame is stored as texture map values in the texture cache of the rendering pipeline. Once the sets of pixel data for comparison have been stored, the rendering pipeline is controlled to render a primitive having fragment positions and texture coordinates corresponding to the data values that it is desired to compare. As each fragment is rendered, the stored and rendered fragment data is compared by a fragment compare unit and the determined differences in the data values are accumulated in an error term register.
Type: Grant
Filed: August 20, 2004
Date of Patent: January 31, 2012
Assignee: Arm Norway AS
Inventors: Jorn Nystad, Edvard Sorgard, Borgar Ljosland, Mario Blazevic
-
Patent number: 7676105
Abstract: According to some embodiments, a set of images may be determined, including at least one training image and at least one image to be provided to a viewer. A reduced training image may be created based on the training image, and at least one enlarging parameter may be calculated based on a portion of the training image and a corresponding portion of the reduced training image. The enlarging parameter may then be used to facilitate an enlargement of a reduced image to be provided to the viewer.
Type: Grant
Filed: May 31, 2005
Date of Patent: March 9, 2010
Assignee: Intel Corporation
Inventors: Victor L. Eruhimov, Alexander D. Krahknov
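The abstract's pipeline, reduce a training image, learn an enlarging parameter from the pair, reuse it on other reduced images, can be sketched in 1-D. The specific parameter below (an energy-correction gain) is purely a hypothetical stand-in for whatever the patent's enlarging parameter actually is:

```python
# Illustrative 1-D sketch: learn a correction gain from a training signal and
# its reduced copy, then reuse that gain when enlarging another reduced
# signal. The "gain" parameter is an assumed stand-in, not the patented one.

def shrink(signal):
    """Reduce by 2: keep every other sample (loses energy on ramps)."""
    return signal[::2]

def enlarge(signal, gain=1.0):
    """Enlarge by 2 via sample repetition, scaled by the learned gain."""
    out = []
    for v in signal:
        out.extend([v * gain, v * gain])
    return out

def learn_gain(training, reduced_training):
    """Enlarging parameter: ratio of the training portion's total intensity
    to that of its naively re-enlarged reduced copy."""
    naive = enlarge(reduced_training)
    return sum(training) / sum(naive)

training = [2.0, 4.0, 6.0, 8.0]
gain = learn_gain(training, shrink(training))
viewer_image = enlarge(shrink([4.0, 6.0, 8.0, 10.0]), gain)
```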
-
Patent number: 7391918
Abstract: According to the invention, quantization encoding is conducted using the probability density function of the source, enabling fixed, variable and adaptive rate encoding. To achieve adaptive encoding, an update is conducted with a new observation of the data source, preferably with each new observation of the data source. The current probability density function of the source is then estimated to produce codepoints to vector quantize the observation of the data source.
Type: Grant
Filed: May 16, 2007
Date of Patent: June 24, 2008
Assignee: The Regents of the University of California
Inventors: Anand D. Subramaniam, Bhaskar D. Rao
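The adaptive loop the abstract describes, update a density estimate with each observation, then derive codepoints from the estimate, can be sketched in the scalar case with a Gaussian model. This is an illustrative simplification (the patent covers vector quantization; the two-level placement and Welford update here are assumptions):

```python
# Scalar, Gaussian-model sketch of the adaptive loop: each observation updates
# a running density estimate; codepoints are then placed from that estimate.

import math

class AdaptiveQuantizer:
    """Two-level scalar quantizer whose codepoints track a Gaussian fit."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations (Welford's method)

    def observe(self, x):
        """Update the density estimate with one new source observation."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def codepoints(self):
        """Codepoints at mean +/- sigma*sqrt(2/pi): the optimal two-level
        placement for a Gaussian source."""
        sigma = math.sqrt(self.m2 / self.n) if self.n > 1 else 1.0
        offset = sigma * math.sqrt(2.0 / math.pi)
        return (self.mean - offset, self.mean + offset)

    def quantize(self, x):
        """Quantize x with the current codepoints, then adapt to x."""
        lo, hi = self.codepoints()
        code = lo if abs(x - lo) <= abs(x - hi) else hi
        self.observe(x)
        return code

q = AdaptiveQuantizer()
for sample in [10.0, 12.0, 8.0, 11.0, 9.0]:
    q.observe(sample)
low, high = q.codepoints()
```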
-
Patent number: 7263511
Abstract: Exemplary embodiments of the present invention include a method for creating a user metric pattern. Such embodiments typically include receiving, within the network, a plurality of disparate user metrics, determining that the plurality of disparate user metrics received within the network do not match a predetermined metric pattern, and saving the plurality of disparate user metrics as a new metric pattern. In many embodiments, determining that the plurality of disparate user metrics received within the network do not match a predetermined metric pattern includes comparing the plurality of disparate user metrics with a plurality of metrics associated with the predetermined metric pattern.
Type: Grant
Filed: October 23, 2003
Date of Patent: August 28, 2007
Assignee: International Business Machines Corporation
Inventors: William Kress Bodin, Michael John Burkhart, Daniel G. Eisenhauer, Daniel Mark Schumacher, Thomas J. Watson
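The flow here is simple: compare incoming metrics against known patterns metric by metric; when nothing matches, save the metrics as a new pattern. A minimal sketch with hypothetical metric names and an assumed tolerance:

```python
# Minimal sketch of metric-pattern matching: compare incoming user metrics
# against stored patterns; on no match, save them as a new pattern.
# Metric names and the tolerance are hypothetical.

known_patterns = [
    {"keystrokes_per_min": 40, "apps_open": 3},
]

def matches(metrics, pattern, tolerance=5):
    """A metric set matches a pattern if every metric is within tolerance."""
    if metrics.keys() != pattern.keys():
        return False
    return all(abs(metrics[k] - pattern[k]) <= tolerance for k in metrics)

def record(metrics):
    """Save metrics as a new pattern when no predetermined pattern matches."""
    for pattern in known_patterns:
        if matches(metrics, pattern):
            return "matched"
    known_patterns.append(dict(metrics))
    return "saved-as-new"

first = record({"keystrokes_per_min": 42, "apps_open": 4})    # within tolerance
second = record({"keystrokes_per_min": 90, "apps_open": 12})  # no match -> saved
```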
-
Patent number: 7236640
Abstract: According to the invention, quantization encoding is conducted using the probability density function of the source, enabling fixed, variable and adaptive rate encoding. To achieve adaptive encoding, an update is conducted with a new observation of the data source, preferably with each new observation of the data source. The current probability density function of the source is then estimated to produce codepoints to vector quantize the observation of the data source.
Type: Grant
Filed: August 17, 2001
Date of Patent: June 26, 2007
Assignee: The Regents of the University of California
Inventors: Anand D. Subramaniam, Bhaskar D. Rao
-
Patent number: 6850639
Abstract: The present invention relates to a color space quantization descriptor structure for performing quantization of a color space in order to search multimedia based on contents when the multimedia feature information is color information, particularly when the search concerns a moving image or a still image. The color space quantization descriptor structure according to the present invention has a tree structure divided into a plurality of steps of superior color spaces and subordinate color spaces, and the tree structure recursively comprises a plurality of subordinate color spaces. Accordingly, the present invention can improve the performance of a still image and moving image search apparatus, perform quantization of an n-dimensional color space such as an H, S, V, RGB or HMMD color space, and perform a different quantization step corresponding to each feature.
Type: Grant
Filed: December 21, 2000
Date of Patent: February 1, 2005
Assignee: LG Electronics Inc.
Inventors: Jung Min Song, Hyeon Jun Kim
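The recursive superior/subordinate structure is a quantization tree: each level splits a color range into subordinate ranges, and a color value is quantized by descending to a leaf. A one-axis sketch (real HSV/HMMD descriptors partition three axes, but the tree shape is the same; depth and ranges are illustrative):

```python
# Sketch of tree-structured quantization of one color axis into superior and
# subordinate spaces. One axis only; depth and ranges are illustrative.

def build_tree(lo, hi, depth):
    """Recursively split [lo, hi) into two subordinate color spaces per level."""
    if depth == 0:
        return {"range": (lo, hi)}          # leaf = one quantization bin
    mid = (lo + hi) / 2
    return {"range": (lo, hi),
            "children": [build_tree(lo, mid, depth - 1),
                         build_tree(mid, hi, depth - 1)]}

def quantize(tree, value, path=()):
    """Descend to the leaf bin containing `value`; return the descent path."""
    if "children" not in tree:
        return path, tree["range"]
    left, right = tree["children"]
    if value < left["range"][1]:
        return quantize(left, value, path + (0,))
    return quantize(right, value, path + (1,))

hue_tree = build_tree(0.0, 360.0, depth=3)   # 8 leaf bins over the hue axis
path, bin_range = quantize(hue_tree, 200.0)
```

The descent path doubles as a compact descriptor for the bin, which is what makes the tree form useful for matching during search.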
-
Publication number: 20040101055
Abstract: A method for decoding video data blocks using variable length codes, comprising transforming information about the spatial frequency distribution of a video data block into pixel values. Prior to said transformation, a first reference value (Xref) representing the abruptness of variations in information about spatial frequency distribution within the block is generated; after said transformation, a second reference value (Δ) representing the abruptness of variation in certain information between the block and at least one previously transformed video data block is generated. The first reference value (Xref) is compared to a first threshold value (TH1) and the second reference value (Δ) to a second threshold value (TH2); and as a response to either of the first (Xref) and second reference values (Δ) being greater than the first (TH1) and respectively the second threshold value (TH2), an error in the block is detected.
Type: Application
Filed: October 28, 2003
Publication date: May 27, 2004
Inventor: Ari Hourunranta
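The two-test structure, an in-block abruptness value Xref against TH1 and a block-to-block variation Δ against TH2, either exceedance flagging an error, can be sketched on toy coefficient blocks. The specific metrics and thresholds below are illustrative stand-ins, not the publication's definitions:

```python
# Sketch of the two-threshold error check: Xref measures abruptness inside a
# block, delta measures change versus the previous block; either exceeding
# its threshold flags a bitstream error. Metrics/thresholds are stand-ins.

TH1 = 50   # threshold for in-block abruptness (Xref)
TH2 = 30   # threshold for block-to-block variation (delta)

def abruptness(coeffs):
    """Xref stand-in: largest jump between neighboring coefficients."""
    return max(abs(b - a) for a, b in zip(coeffs, coeffs[1:]))

def block_delta(coeffs, prev_coeffs):
    """Delta stand-in: mean absolute change against the previous block."""
    return sum(abs(a - b) for a, b in zip(coeffs, prev_coeffs)) / len(coeffs)

def has_error(coeffs, prev_coeffs):
    return abruptness(coeffs) > TH1 or block_delta(coeffs, prev_coeffs) > TH2

prev = [60, 40, 20, 10]
ok_block = [62, 41, 19, 8]       # smooth, close to the previous block
corrupt = [62, 41, 150, 8]       # a wild coefficient from a decode error

clean = has_error(ok_block, prev)
flagged = has_error(corrupt, prev)
```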
-
Publication number: 20020118752
Abstract: A moving picture encoding system capable of bit rate control, by which moving pictures are encoded while maintaining high quality even when there are substantial changes in the size of objects and the characteristics of texture, is provided. A predictive area calculating parameter extracting means obtains a predictive area calculating parameter to describe a function that indicates temporal variations in the area based on the history of the area data of an object. In addition, a bit number model parameter calculating means finds a bit number model parameter to describe a parameter for a bit number model used in modeling the generated bit number per unit area. A target bit number calculating means estimates a predictive value of the generated bit number for the uncoded VOPs based on the predictive area calculating parameter and the predictive bit number calculating parameter, and accordingly allocates the remaining allocatable bits to decide a target bit number for the next VOP to be encoded.
Type: Application
Filed: December 21, 2001
Publication date: August 29, 2002
Applicant: NEC CORPORATION
Inventor: Ryoma Oami
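The allocation chain the abstract describes, predict the object's area from its history, model bits per unit area, then set the next VOP's target as its share of the remaining budget, can be sketched with deliberately simple linear models. Every model choice below is an assumed stand-in for the publication's parameters:

```python
# Sketch of the target-bit allocation chain: predict the next object plane's
# area, model its bit cost, and give it a proportional share of the budget.
# The linear extrapolation and averaging here are illustrative assumptions.

def predict_area(area_history):
    """Linear extrapolation of the object's area from its recent history."""
    if len(area_history) < 2:
        return area_history[-1]
    return area_history[-1] + (area_history[-1] - area_history[-2])

def target_bits(area_history, bits_per_area, remaining_bits, remaining_vops):
    """Target for the next VOP: its predicted cost's share of the budget,
    with later VOPs costed at the historical average area."""
    next_area = predict_area(area_history)
    avg_area = sum(area_history) / len(area_history)
    predicted_cost = next_area * bits_per_area
    others_cost = avg_area * bits_per_area * (remaining_vops - 1)
    return remaining_bits * predicted_cost / (predicted_cost + others_cost)

target = target_bits(area_history=[800, 1200],   # the object is growing
                     bits_per_area=0.5,
                     remaining_bits=90000,
                     remaining_vops=3)
```

A growing object gets more than an even split of the budget, which is exactly the behavior the abstract motivates for objects whose size changes substantially.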
-
Patent number: 6219382
Abstract: Every frame represented by a moving picture signal is divided into blocks. Calculation is made as to a number of pixels forming portions of a caption in each of the blocks. The calculated number of pixels is compared with a threshold value. When the calculated number of pixels is equal to or greater than the threshold value, it is decided that the related block is a caption-containing block. Detection is made as to a time interval related to the moving picture signal during which every frame represented by the moving picture signal has a caption-containing block. A 1-frame-corresponding segment of the moving picture signal is selected which represents a caption-added frame present in the detected time interval.
Type: Grant
Filed: November 21, 1997
Date of Patent: April 17, 2001
Assignee: Matsushita Electric Industrial Co., Ltd.
Inventors: Yasuhiro Kikuchi, Shin Yamada, Akiyoshi Tanaka
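The per-block decision is a straightforward count-and-threshold test. A minimal sketch where "caption pixel" is approximated by a brightness check; both the brightness cutoff and the block threshold are illustrative assumptions:

```python
# Sketch of the per-block caption test: count caption-like pixels (here,
# simply bright ones) in a block and compare against a threshold. The
# brightness cutoff and threshold values are illustrative assumptions.

CAPTION_PIXEL_MIN = 200     # assumed luminance of caption text pixels
PIXELS_PER_BLOCK_TH = 3     # caption-containing at/above this pixel count

def is_caption_block(block):
    """Decide whether a pixel block contains part of a caption."""
    caption_pixels = sum(1 for row in block for px in row
                         if px >= CAPTION_PIXEL_MIN)
    return caption_pixels >= PIXELS_PER_BLOCK_TH

text_block = [[250, 255, 30], [240, 20, 245]]    # four bright pixels
plain_block = [[30, 40, 35], [20, 250, 25]]      # only one bright pixel

caption = is_caption_block(text_block)
plain = is_caption_block(plain_block)
```

Running this test over every block of every frame yields the per-frame "has a caption-containing block" signal from which the patent's time intervals are detected.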