Patents Examined by Behrooz Senfi
  • Patent number: 8885050
    Abstract: Systems and methods of perceptual quality monitoring of video information, communications, and entertainment are disclosed that can estimate the perceptual quality of video with high accuracy and can be used to produce quality scores that better correlate with the subjective quality scores of an end user. The systems and methods of perceptual quality monitoring of video can generate, from an encoded input video bitstream, estimates of one or more quality parameters relating to the video, such as the coding bit rate parameter, the video frame rate parameter, and the packet loss rate parameter, and provide these video quality parameter estimates to a predetermined video quality estimation model. Because the estimates of the video quality parameters are generated from the encoded input video bitstream as it is being received, the systems and methods are suitable for use as quality of experience (QoE) monitoring tools.
    Type: Grant
    Filed: February 11, 2011
    Date of Patent: November 11, 2014
    Assignee: Dialogic (US) Inc.
    Inventors: Beibei Wang, Dekun Zou, Ran Ding, Tao Liu, Sitaram Bhagavathy, Niranjan Narvekar, Jeffrey A. Bloom, Glenn L. Cash
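    A minimal sketch of how such bitstream-derived parameter estimates might feed a video quality estimation model. The functional form, coefficients, and the estimate_mos name below are illustrative assumptions, not the patented model.
      # Toy parametric model (an assumption, not the patented model): map bitstream-level
      # estimates of coding bit rate, frame rate, and packet loss rate to a 1..5 score.
      import math

      def estimate_mos(bit_rate_kbps, frame_rate_fps, packet_loss_pct,
                       a=4.0, b=500.0, c=0.08, d=0.35):
          """Return a rough 1..5 quality score from three bitstream-derived parameters."""
          # A higher bit rate raises the base quality, saturating toward 1 + a.
          base = 1.0 + a * (1.0 - math.exp(-bit_rate_kbps / b))
          # Penalize frame rates below a 30 fps reference.
          fr_penalty = c * max(0.0, 30.0 - frame_rate_fps)
          # Packet loss degrades quality roughly linearly in this toy model.
          pl_penalty = d * packet_loss_pct
          return max(1.0, min(5.0, base - fr_penalty - pl_penalty))

      print(round(estimate_mos(bit_rate_kbps=1200, frame_rate_fps=25, packet_loss_pct=0.5), 2))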
  • Patent number: 8885708
    Abstract: Systems and methods of developing and/or implementing multimedia applications. The system provides an extensible framework including an application layer, a framework utility layer, and a core engine layer. The framework utility layer includes an application programming interface, a video codec sub-framework (XCF), a video packetization sub-framework (XPF), and a video/text overlay sub-framework (XOF). The core engine layer includes one or more core codec engines and one or more core rendering engines. The XCF, XPF, and XOF sub-frameworks are effectively decoupled from the multimedia applications executing on the application layer and from the core codec and rendering engines of the core engine layer, allowing the XCF, XPF, and XOF sub-frameworks and the core codec/rendering engines to be independently extensible. The system also fosters enhanced reuse of existing multimedia applications across a plurality of multimedia systems.
    Type: Grant
    Filed: December 11, 2009
    Date of Patent: November 11, 2014
    Assignee: Dialogic Corporation
    Inventors: John R. Hayden, Robert D. Kirnum, Joseph A. Fisher, Brian M. Nixon, Arun V. Eledath, Ranjan Singh
  • Patent number: 8879627
    Abstract: To improve encoding efficiency while avoiding an increase in the size or memory bandwidth of a frame memory, and while retaining adaptability in the encoding/decoding of a moving picture, a bit length extension converter converts a target picture having a bit length N into an extended target picture having a bit length M, a compressor encodes the converted picture, and an expander restores the encoded picture. A bit length reduction converter then converts the picture into a reproduction picture having a bit length L smaller than the bit length M, and this reproduction picture is stored in a frame memory as a reference picture.
    Type: Grant
    Filed: March 31, 2011
    Date of Patent: November 4, 2014
    Assignee: NTT DoCoMo, Inc.
    Inventors: Yoshinori Suzuki, Choong Seng Boon, Thiow Keng Tan
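    A hedged sketch of the N-to-M-to-L bit-length conversions described above, assuming the extension is a simple left shift and the reduction a rounding right shift; the patent does not require this particular mapping, and the function names are placeholders.
      # Assumed realization: extend 8-bit pictures to 12 bits for coding, then reduce the
      # reconstructed 12-bit picture to 10 bits before storing it as a reference picture.
      import numpy as np

      def extend_bit_depth(picture_n, n_bits=8, m_bits=12):
          """Convert an N-bit picture to an M-bit extended target picture (M > N)."""
          return picture_n.astype(np.uint16) << (m_bits - n_bits)

      def reduce_bit_depth(picture_m, m_bits=12, l_bits=10):
          """Convert an M-bit reconstructed picture to an L-bit reference picture (L < M)."""
          shift = m_bits - l_bits
          # Add half a quantization step before shifting so the reduction rounds, not truncates.
          return ((picture_m.astype(np.uint32) + (1 << (shift - 1))) >> shift).astype(np.uint16)

      frame_8bit = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
      extended = extend_bit_depth(frame_8bit)    # encoding/decoding would operate on this picture
      reference = reduce_bit_depth(extended)     # stored in the frame memory as the reference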
  • Patent number: 8878903
    Abstract: The disclosure relates to a method for reconstruction of a three-dimensional image of an object. A first image is acquired of the object lit by a luminous flux having, in a region including the object, a luminous intensity dependent on the distance, with a light source emitting the luminous flux. A second image is acquired of the object lit by a luminous flux having, in a region including the object, a constant luminous intensity. For each pixel of a three-dimensional image, a relative distance of a point of the object is determined as a function of the intensity of a pixel corresponding to the point of the object in each of the acquired images.
    Type: Grant
    Filed: October 13, 2011
    Date of Patent: November 4, 2014
    Assignee: STMicroelectronics (Grenoble) SAS
    Inventors: Cédric Tubert, Jérôme Vaillant
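    A hedged sketch of the per-pixel relative-distance step, assuming the distance-dependent flux follows an inverse-square falloff so the ratio of the two acquisitions varies as 1/d^2; the actual falloff law and calibration are not given in the abstract.
      import numpy as np

      def relative_depth(img_distance_lit, img_constant_lit, eps=1e-6):
          """Per-pixel relative distance from the two acquisitions (assumed 1/d^2 falloff)."""
          ratio = img_distance_lit.astype(np.float64) / (img_constant_lit.astype(np.float64) + eps)
          # Under the assumed inverse-square falloff, distance scales as 1 / sqrt(ratio).
          return 1.0 / np.sqrt(np.clip(ratio, eps, None))

      img1 = np.array([[0.9, 0.2], [0.5, 0.1]])   # object lit with distance-dependent intensity
      img2 = np.ones((2, 2))                      # object lit with constant intensity
      print(relative_depth(img1, img2))           # weaker ratio -> larger relative distance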
  • Patent number: 8872913
    Abstract: A system for positioning a micro tool of a micro machine is provided, the system comprising a stereo-photographing device, an image analysis system, a PC-based controller, and a micro machine. The image analysis system analyzes the positions of the micro tool of the micro machine and the work piece using an algorithm, and the micro tool is then positioned to a pre-determined location. A method for positioning the micro tool of the micro machine is also provided.
    Type: Grant
    Filed: January 13, 2010
    Date of Patent: October 28, 2014
    Assignee: Chung Yuan Christian University
    Inventors: Shih-Ming Wang, Chia-You Chung, Chih-Chun Lin
  • Patent number: 8872980
    Abstract: A method and system for accumulating stillness characteristics is presented. The method and system generate a field stillness characteristic for a current pixel of a current field. The field stillness characteristic is accumulated with an accumulated stillness characteristic that corresponds to the pixel location of the current pixel. The accumulated stillness characteristic includes stillness information regarding previous pixels of previous fields in the same pixel location as the current pixel.
    Type: Grant
    Filed: August 30, 2010
    Date of Patent: October 28, 2014
    Assignee: WZ Technology Inc.
    Inventors: Ge Zhu, Edward Chen, Zhengjun Gong, Qing Yang
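    An illustrative sketch of the accumulation described above, assuming the stillness characteristic is a saturating per-pixel counter that grows while a pixel stays close to the co-located pixel of the previous field and resets on motion; the actual characteristic and thresholds may differ.
      import numpy as np

      def update_stillness(accumulated, current_field, previous_field,
                           motion_threshold=8, max_count=15):
          """Accumulate a stillness value for every pixel location of the current field."""
          still = np.abs(current_field.astype(np.int16)
                         - previous_field.astype(np.int16)) <= motion_threshold
          # Increment where the pixel is still (saturating at max_count), reset where it moves.
          return np.where(still, np.minimum(accumulated + 1, max_count), 0)

      accumulated = np.zeros((4, 4), dtype=np.int16)
      previous = np.full((4, 4), 100, dtype=np.uint8)
      current = previous.copy()
      current[0, 0] = 200                               # simulated motion at one pixel location
      accumulated = update_stillness(accumulated, current, previous)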
  • Patent number: 8873640
    Abstract: A multi-format digital video production system enables a user to process an input video program to produce an output version of the program in a final format which may have a different frame rate, pixel dimensions, or both. An internal production format of 24 fps is preferably chosen to provide the greatest compatibility with existing and planned formats associated with HDTV standard 4:3 or widescreen 16:9 high-definition television, and film. Images are re-sized horizontally and vertically by pixel interpolation, thereby producing larger or smaller image dimensions so as to fill the particular needs of individual applications. Frame rates are adapted by inter-frame interpolation or by traditional schemes, including “3:2 pull-down” for 24-to-30 fps conversions, simple speed-up (for 24-to-25 conversions) or slow-down (for 25-to-24 conversions) for playback, or by manipulating the frame rate itself using a program storage facility with asynchronous reading and writing capabilities.
    Type: Grant
    Filed: January 29, 2013
    Date of Patent: October 28, 2014
    Inventor: Kinya Washino
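    A minimal sketch of the “3:2 pull-down” cadence named in the abstract, in which film frames are repeated in a 3, 2, 3, 2, ... field pattern so that 24 frames per second map onto 60 fields (30 interlaced frames) per second; this is the standard scheme, not code from the patent.
      def pulldown_3_2(frames_24fps):
          """Expand a 24 fps frame list into a 60 fields-per-second list via the 3:2 cadence."""
          fields = []
          for index, frame in enumerate(frames_24fps):
              # Even-indexed film frames contribute 3 fields and odd-indexed frames 2 fields,
              # so every 4 film frames become 10 fields (5 interlaced frames).
              repeats = 3 if index % 2 == 0 else 2
              fields.extend([frame] * repeats)
          return fields

      print(len(pulldown_3_2(list(range(24)))))   # 24 frames -> 60 fields per second of video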
  • Patent number: 8873638
    Abstract: An apparatus for enabling provision of multi-thread video decoding may include at least one processor and at least one memory including computer program code. The at least one memory and the computer program code may be configured, with the processor, to cause the apparatus to perform at least assigning decoding of a respective video frame to a corresponding thread for each core processor of a multi-core processor in which each respective video frame is divided into macroblock rows, resolving dependencies for each respective video frame at a macroblock row level, and providing synchronization for video decoding of each corresponding thread at the macroblock row level. A corresponding method and computer program product are also provided.
    Type: Grant
    Filed: February 11, 2011
    Date of Patent: October 28, 2014
    Assignee: Nokia Corporation
    Inventors: Chirantan Kumar, Sudhakara Rao Kimidi
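    A hedged sketch of macroblock-row-level synchronization with one frame assigned per thread. The dependency rule used here (a row of frame f waits for the co-located row, plus a small motion margin, of frame f-1) and the empty decode step are stand-ins for the dependency resolution the abstract describes.
      import threading

      NUM_FRAMES, NUM_ROWS, MOTION_MARGIN = 4, 6, 1
      row_done = [[threading.Event() for _ in range(NUM_ROWS)] for _ in range(NUM_FRAMES)]

      def decode_macroblock_row(frame, row):
          pass                                    # placeholder for macroblock-row reconstruction

      def decode_frame(frame):
          for row in range(NUM_ROWS):
              if frame > 0:
                  # Wait until the reference rows this row's motion vectors may reach are ready.
                  ref_row = min(row + MOTION_MARGIN, NUM_ROWS - 1)
                  row_done[frame - 1][ref_row].wait()
              decode_macroblock_row(frame, row)
              row_done[frame][row].set()          # release threads decoding later frames

      threads = [threading.Thread(target=decode_frame, args=(f,)) for f in range(NUM_FRAMES)]
      for t in threads:
          t.start()
      for t in threads:
          t.join()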
  • Patent number: 8866884
    Abstract: An image processing apparatus includes an image input unit that inputs a two-dimensional image signal, a depth information output unit that inputs or generates depth information of image areas constituting the two-dimensional image signal, an image conversion unit that receives the image signal and the depth information from the image input unit and the depth information output unit, and generates and outputs a left eye image and a right eye image for realizing binocular stereoscopic vision, and an image output unit that outputs the left and right eye images. The image conversion unit extracts a spatial feature value of the input image signal, and performs an image conversion process including an emphasis process applying the feature value and the depth information with respect to the input image signal, thereby generating at least one of the left eye image and the right eye image.
    Type: Grant
    Filed: December 3, 2010
    Date of Patent: October 21, 2014
    Assignee: Sony Corporation
    Inventors: Atsushi Ito, Toshio Yamazaki, Seiji Kobayashi
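    An illustrative sketch that mirrors the structure described above: a spatial feature value (here a horizontal differential) is extracted from the input image, weighted by the depth information, and added to or subtracted from the input to form the left and right eye images. The specific feature value and emphasis process used in the patent may differ.
      import numpy as np

      def make_stereo_pair(image, depth, gain=0.5):
          """Return (left_eye, right_eye) from a grayscale image and a 0..1 depth map."""
          img = image.astype(np.float64)
          feature = np.gradient(img, axis=1)          # spatial feature: horizontal differential
          emphasis = gain * depth * feature           # emphasis modulated by per-pixel depth
          left = np.clip(img + emphasis, 0, 255).astype(np.uint8)
          right = np.clip(img - emphasis, 0, 255).astype(np.uint8)
          return left, right

      image = np.tile(np.linspace(0, 255, 8), (8, 1))   # simple horizontal ramp as input
      depth = np.full((8, 8), 0.5)                      # flat depth map for the example
      left_eye, right_eye = make_stereo_pair(image, depth)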
  • Patent number: 8866888
    Abstract: A 3D positioning apparatus is used for an object that includes feature points and a reference point. The object undergoes movement from a first to a second position. The 3D positioning apparatus includes: an image sensor for capturing images of the object; and a processor for calculating, based on the captured images, initial coordinates of each feature point when the object is in the first position, initial coordinates of the reference point, final coordinates of the reference point when the object is in the second position, and final coordinates of each feature point. The processor calculates 3D translational information of the feature points using the initial and final coordinates of the reference point, and 3D rotational information of the feature points using the initial and final coordinates of each feature point. A 3D positioning method is also disclosed.
    Type: Grant
    Filed: December 29, 2009
    Date of Patent: October 21, 2014
    Assignee: National Taiwan University
    Inventors: Homer H. Chen, Ping-Cheng Chi
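    A hedged sketch of the two computations the abstract separates: the 3D translational information comes from the initial and final coordinates of the reference point, and the rotational information is fitted from the translation-compensated feature points, here using the standard Kabsch/SVD fit as a stand-in for whatever estimator the patent employs.
      import numpy as np

      def rigid_motion(initial_features, final_features, initial_ref, final_ref):
          """Return (translation, rotation_matrix) of the object between the two positions."""
          translation = final_ref - initial_ref
          # Remove the translation, then fit a rotation about the reference point (Kabsch).
          p = initial_features - initial_ref
          q = final_features - final_ref
          u, _, vt = np.linalg.svd(p.T @ q)
          d = np.sign(np.linalg.det(vt.T @ u.T))       # guard against an improper reflection
          rotation = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
          return translation, rotation

      initial = np.eye(3)                                             # three feature points
      rot_z_90 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
      final = initial @ rot_z_90.T + np.array([5.0, 0.0, 0.0])        # rotate, then translate
      t, r = rigid_motion(initial, final, np.zeros(3), np.array([5.0, 0.0, 0.0]))
      # t recovers (5, 0, 0) and r recovers rot_z_90 up to floating-point error.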
  • Patent number: 8848800
    Abstract: Disclosed herein is a signal processing apparatus and method based on multiple textures using video sensor excitation signals. For this, an input signal that includes a video signal and a sensor signal is divided into unit component signals, and one is selected from a plurality of frames of each unit component signal as a seed signal. A plurality of texture points are detected from the seed signal. The texture points are tracked from the frames of the unit component signal and then spatio-temporal location transform variables for the texture points are calculated. Texture signals are defined using texture points at which the spatio-temporal location transform variables correspond to one another. Each of the texture signals is defined as a sum of a plurality of texture blocks that are outputs of texture synthesis filters that receive video sensor excitation signals as inputs.
    Type: Grant
    Filed: August 31, 2011
    Date of Patent: September 30, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventor: Sung-Hoon Hong
  • Patent number: 8848804
    Abstract: A video decoder includes an entropy decoding device that includes a first processor that generates entropy decoded (EDC) data from an encoded video signal, wherein the encoded video signal includes a plurality of video layers, wherein the entropy decoding device includes a slice dependency module that generates slice dependency data and wherein the first processor entropy decodes a selected subset of the plurality of video layers, based on the slice dependency data. A general video decoding device includes a second processor that generates a decoded video signal from the EDC data.
    Type: Grant
    Filed: March 7, 2011
    Date of Patent: September 30, 2014
    Assignee: VIXS Systems, Inc.
    Inventors: Limin (Bob) Wang, Yinxia (Michael) Yang
  • Patent number: 8842738
    Abstract: Disclosed herein is a signal processing apparatus and method based on multiple textures using video audio excitation signals. For this, an input signal that includes a video signal and an audio signal is divided into unit component signals, and one is selected from a plurality of frames of each unit component signal as a seed signal. A plurality of texture points are detected from the seed signal. The texture points are tracked from the frames of the unit component signal and then spatio-temporal location transform variables for the texture points are calculated. Texture signals are defined using texture points at which the spatio-temporal location transform variables correspond to one another. Each of the texture signals is defined as a sum of a plurality of texture blocks that are outputs of texture synthesis filters that receive video audio excitation signals as inputs.
    Type: Grant
    Filed: August 31, 2011
    Date of Patent: September 23, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventor: Sung-Hoon Hong
  • Patent number: 8842727
    Abstract: A system for processing an audio/video program to output at a desired display rate includes a computer including RAM, ROM and a processor. The system has an input receiving an input video program in a first interlaced format. The computer has hardware or software functioning to: store the input program, at least temporarily, in the first format; de-interlace the input video program to generate a video program in a first progressive format having progressive frames, each progressive frame being derived from a respective one, and only one, of the fields in the first interlaced format; remove or repeat some of the frames of the video program in the first progressive format, thereby generating a program in a second progressive format; and output the program in the second progressive format, wherein the display rate of the program is at least 48 frames per second.
    Type: Grant
    Filed: July 12, 2012
    Date of Patent: September 23, 2014
    Inventor: Kinya Washino
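    A hedged sketch of the pipeline described above with heavily simplified stand-ins: every field of the interlaced input becomes exactly one progressive frame (simple line doubling), and the progressive sequence is then retimed by repeating or dropping frames to reach the target display rate.
      import numpy as np

      def deinterlace_field_per_frame(interlaced_frames):
          """Derive one progressive frame from each field by line doubling (simple bob)."""
          progressive = []
          for frame in interlaced_frames:
              for parity in (0, 1):                         # top field, then bottom field
                  field = frame[parity::2, :]
                  progressive.append(np.repeat(field, 2, axis=0))
          return progressive

      def retime(frames, in_rate, out_rate):
          """Repeat or drop frames so the sequence plays at out_rate frames per second."""
          n_out = int(round(len(frames) * out_rate / in_rate))
          return [frames[min(int(i * in_rate / out_rate), len(frames) - 1)] for i in range(n_out)]

      interlaced = [np.random.randint(0, 256, (8, 8), dtype=np.uint8) for _ in range(30)]
      progressive_60 = deinterlace_field_per_frame(interlaced)     # 30 interlaced frames -> 60p
      output_48 = retime(progressive_60, in_rate=60, out_rate=48)  # drop frames down to 48 fps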
  • Patent number: 8837579
    Abstract: Embodiments of the present invention include a set of processes and systems for implementing a forward weight-adaptive over-complete transform of an image/video frame, an inverse weight-adaptive over-complete transform of an image/video frame, and fast and low-memory processes for performing the forward weight-adaptive over-complete transform, processing coefficients in the transform domain and performing the inverse weight-adaptive over-complete transform simultaneously.
    Type: Grant
    Filed: September 26, 2008
    Date of Patent: September 16, 2014
    Assignee: NTT DoCoMo, Inc.
    Inventors: Sandeep Kanumuri, Onur G. Guleryuz, Akira Fujibayashi, M. Reha Civanlar
  • Patent number: 8836766
    Abstract: A method for preparing a spatial coded slide image in which a pattern of the spatial coded slide image is aligned along epipolar lines at an output of a projector in a system for 3D measurement, comprising: obtaining distortion vectors for projector coordinates, each vector representing a distortion from predicted coordinates caused by the projector; retrieving an ideal pattern image which is an ideal image of the spatial coded pattern aligned on ideal epipolar lines; creating a real slide image by, for each real pixel coordinate of the real slide image, retrieving a current distortion vector; removing distortion from the real pixel coordinates using the current distortion vector to obtain ideal pixel coordinates in the ideal pattern image; extracting a pixel value at the ideal pixel coordinates in the ideal pattern image; and copying the pixel value at the real pixel coordinates in the real slide image.
    Type: Grant
    Filed: November 2, 2012
    Date of Patent: September 16, 2014
    Assignee: Creaform Inc.
    Inventors: Patrick Hebert, Félix Rochette
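    A hedged sketch of the per-pixel loop the abstract lays out: for every pixel of the real slide image, look up its distortion vector, remove the distortion to obtain the ideal pixel coordinates, and copy the value of the ideal pattern from there. Representing the distortion as a dense per-pixel vector table is an assumption made for this sketch.
      import numpy as np

      def build_real_slide(ideal_pattern, distortion_vectors):
          """distortion_vectors[y, x] = (dx, dy) offset from the predicted (ideal) coordinates."""
          height, width = ideal_pattern.shape
          real_slide = np.zeros_like(ideal_pattern)
          for y in range(height):
              for x in range(width):
                  dx, dy = distortion_vectors[y, x]
                  # Remove the distortion to obtain the ideal pixel coordinates.
                  ix, iy = int(round(x - dx)), int(round(y - dy))
                  if 0 <= ix < width and 0 <= iy < height:
                      real_slide[y, x] = ideal_pattern[iy, ix]
          return real_slide

      ideal = np.random.randint(0, 2, (16, 16), dtype=np.uint8)   # binary spatial coded pattern
      vectors = np.zeros((16, 16, 2))                             # zero distortion: identity copy
      assert np.array_equal(build_real_slide(ideal, vectors), ideal)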
  • Patent number: 8831093
    Abstract: In some embodiments, macroblock-level encoding parameters are assigned to weighted linear combinations of corresponding content-category-level encoding parameters. For example, a macroblock quantization parameter (QP) modulation is set to a weighted linear combination of content category QP modulations. Content categories may identify potentially overlapping content types such as sky, water, grass, skin, and red content. The combination weights may be similarity measures describing macroblock similarities to content categories. A macroblock may be associated with multiple content categories, with different similarity levels for different content categories. A similarity measure for a given macroblock with respect to a content category may be defined as a number (between 0 and 8) of neighboring macroblocks that meet a similarity condition, provided the macroblock meets a qualification condition. The similarity condition may be computationally simpler than the qualification condition.
    Type: Grant
    Filed: April 2, 2012
    Date of Patent: September 9, 2014
    Assignee: Geo Semiconductor Inc.
    Inventors: Ilie Garbacea, Lulin Chen, Jose R. Alvarez
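    An illustrative sketch of the macroblock-level assignment described above: the QP modulation of a macroblock is a weighted linear combination of per-category QP modulations, with the weights taken from per-category similarity measures such as the 0..8 neighbor count. Normalizing by the weight sum is an assumption made here so the result stays on the category scale.
      def macroblock_qp_modulation(category_qp_mods, similarity_weights):
          """Both arguments are dicts keyed by content category (e.g. "sky", "grass", "skin")."""
          weighted = sum(similarity_weights[c] * category_qp_mods[c] for c in category_qp_mods)
          total = sum(similarity_weights[c] for c in category_qp_mods)
          return weighted / total if total else 0.0

      category_qp_mods = {"sky": -2.0, "grass": 1.0, "skin": -3.0}   # per-category QP modulations
      similarity_weights = {"sky": 6, "grass": 2, "skin": 0}         # 0..8 similar neighbors each
      print(macroblock_qp_modulation(category_qp_mods, similarity_weights))   # -> -1.25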
  • Patent number: 8831094
    Abstract: Disclosed herein is a video processing apparatus and method based on multiple texture images, which can process videos with optimal video quality at a low transfer rate. For this, an input video is divided into shot segments, and one is selected from a plurality of frames of each shot segment as a seed image. A plurality of texture points are detected from the seed image. The plurality of texture points are tracked from the plurality of frames of the shot segment and then spatio-temporal location transform variables for the respective texture points are calculated. A plurality of texture images are defined using texture points at which the spatio-temporal location transform variables correspond to one another.
    Type: Grant
    Filed: August 31, 2011
    Date of Patent: September 9, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventor: Sung-Hoon Hong
  • Patent number: 8830312
    Abstract: Systems and methods for tracking human hands using parts based template matching within bounded regions are described. One embodiment of the invention includes a processor; an image capture system configured to capture multiple images of a scene; and memory containing a plurality of templates that are rotated and scaled versions of a finger template. A hand tracking application configures the processor to: obtain a reference frame of video data and an alternate frame of video data from the image capture system; identify corresponding pixels within the reference and alternate frames of video data; identify at least one bounded region within the reference frame of video data containing pixels having corresponding pixels in the alternate frame of video data satisfying a predetermined criterion; and detect at least one candidate finger within the at least one bounded region in the reference frame of video data.
    Type: Grant
    Filed: July 15, 2013
    Date of Patent: September 9, 2014
    Assignee: Aquifi, Inc.
    Inventors: Britta Hummel, Giridhar Murali
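    A hedged sketch of the two stages the abstract names: first bound the search to pixels whose reference/alternate frame correspondence satisfies a criterion (here simply a large inter-frame difference), then match a bank of rotated and scaled finger templates inside that bounded region. The matching score and the criterion are simplified placeholders.
      import numpy as np

      def bounded_region(reference, alternate, threshold=20):
          """Return the bounding box (y0, y1, x0, x1) of pixels meeting the criterion, or None."""
          mask = np.abs(reference.astype(np.int16) - alternate.astype(np.int16)) > threshold
          ys, xs = np.nonzero(mask)
          if ys.size == 0:
              return None
          return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

      def best_template_match(region, templates):
          """Slide each template over the region; return (score, position, template index)."""
          best = (float("inf"), None, None)
          for t_index, template in enumerate(templates):
              th, tw = template.shape
              for y in range(region.shape[0] - th + 1):
                  for x in range(region.shape[1] - tw + 1):
                      patch = region[y:y + th, x:x + tw].astype(np.float64)
                      score = np.sum((patch - template) ** 2)   # sum of squared differences
                      if score < best[0]:
                          best = (score, (y, x), t_index)
          return best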
  • Patent number: RE45176
    Abstract: A tactile sensor includes a photosensing structure, a volume of elastomer capable of transmitting an image, and a reflective skin covering the volume of elastomer. The reflective skin is illuminated through the volume of elastomer by one or more light sources, and has particles that reflect light incident on the reflective skin from within the volume of elastomer. The reflective skin is geometrically altered in response to pressure applied by an entity touching the reflective skin, the geometrical alteration causing localized changes in the surface normal of the skin and associated localized changes in the amount of light reflected from the reflective skin in the direction of the photosensing structure. The photosensing structure receives a portion of the reflected light in the form of an image, the image indicating one or more features of the entity producing the pressure.
    Type: Grant
    Filed: October 3, 2013
    Date of Patent: October 7, 2014
    Assignee: Massachusetts Institute of Technology
    Inventor: Edward Adelson