Determining The Position Of An Object Patents (Class 382/291)
  • Patent number: 8682083
    Abstract: A regression testing system comprises an automatic test tool configured to capture a first web screen shot and a second web screen shot of a webpage, where the webpage has undergone an update or edit. The regression testing system also comprises a visual comparator configured to identify similar areas in the first web screen shot and the second web screen shot. The visual comparator receives, and compares characteristics of, the web screen shots. Furthermore, the regression testing system generates a report marking the characteristics that differ between the first and second web screen shots. The regression testing system identifies similar areas in the first and second web screen shots even if the similar areas are at different locations within the web screen shots. The comparison performed by the visual comparator includes a pixel comparison combined with a marking algorithm that groups differences into smaller, related but separate areas.
    Type: Grant
    Filed: June 30, 2011
    Date of Patent: March 25, 2014
    Assignee: American Express Travel Related Services Company, Inc.
    Inventors: Krishna Bihari Kumar, Keshav A. Narsipur, Hans-Jurgen Greiner
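The pixel comparison and marking algorithm described above can be illustrated with a short sketch. This is a minimal stand-in, not the patented implementation: it pixel-compares two same-size images (alignment of relocated areas is out of scope here) and groups differing pixels into connected regions, each reported as a bounding box.

```python
def diff_regions(img_a, img_b):
    """Bounding boxes of connected groups of pixels that differ between
    two same-size screenshots (2-D lists of pixel values)."""
    h, w = len(img_a), len(img_a[0])
    changed = {(r, c) for r in range(h) for c in range(w)
               if img_a[r][c] != img_b[r][c]}
    regions = []
    while changed:
        # Flood-fill one 4-connected group of differing pixels.
        stack = [changed.pop()]
        group = []
        while stack:
            r, c = stack.pop()
            group.append((r, c))
            for nbr in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if nbr in changed:
                    changed.remove(nbr)
                    stack.append(nbr)
        rows = [p[0] for p in group]
        cols = [p[1] for p in group]
        regions.append((min(rows), min(cols), max(rows), max(cols)))
    return regions
```

Each bounding box corresponds to one "smaller, related but separate" difference area that a report could mark.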
  • Patent number: 8682106
    Abstract: An information processing method includes acquiring an image of an object captured by an imaging apparatus, acquiring an angle of inclination measured by an inclination sensor mounted on the object or the imaging apparatus, detecting a straight line from the captured image, and calculating a position and orientation of the object or the imaging apparatus, on which the inclination sensor is mounted, based on the angle of inclination, an equation of the detected straight line on the captured image, and an equation of a straight line in a virtual three-dimensional space that corresponds to the detected straight line.
    Type: Grant
    Filed: June 13, 2008
    Date of Patent: March 25, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventor: Daisuke Kotake
  • Patent number: 8675911
    Abstract: The electro-optical system for determining the position and orientation of a mobile part comprises a fixed projector having a center of projection (O) and a mobile part. The projector is rigidly linked with a virtual image plane, and the mobile part is rigidly linked with two linear sensors defining a first and a second direction vector. The fixed part projects patterns (not shown) onto the image plane and onto the sensors, the patterns forming at least two secant networks of at least three segments that are each parallel. With the two electro-optical devices coplanar and their directions secant, the orientation and the position of the mobile part are determined by calculating the positions of the projections on the image plane of a first triple of points comprising the projections of three points.
    Type: Grant
    Filed: September 25, 2009
    Date of Patent: March 18, 2014
    Assignee: Thales
    Inventors: Bruno Barbier, Siegfried Rouzes
  • Patent number: 8675047
    Abstract: A detection device of a planar area is provided. The detection device includes an image obtaining section for obtaining a left image and a right image; a planar area aligning section for setting given regions of interest to the obtained left image and the right image, and through use of a geometric transform function that matches the region of interest of the right image with the region of interest of the left image, performing geometric transform to generate a geometric transform image; and a planar area detecting section for detecting a planar area based on the geometric transform image and the region of interest of the right image. The planar area aligning section sets the planar area detected by the planar area detecting section as a given region of interest.
    Type: Grant
    Filed: April 26, 2011
    Date of Patent: March 18, 2014
    Assignee: Hitachi, Ltd.
    Inventors: Morihiko Sakano, Mirai Higuchi, Takeshi Shima, Shoji Muramatsu
  • Patent number: 8675917
    Abstract: Methods and apparatus are provided for improved abandoned object recognition using pedestrian detection. An abandoned object is detected in one or more images by determining if one or more detected objects in a foreground of the images comprises a potential abandoned object; applying a trained pedestrian detector to the potential abandoned object to determine if the potential abandoned object comprises at least a portion of a pedestrian; and classifying the potential abandoned object as an abandoned object based on whether the potential abandoned object is not at least a portion of a pedestrian. The trained pedestrian detector is trained using positive training samples comprised of at least portions of human bodies in one or more poses and/or negative training samples comprised of at least portions of abandoned objects.
    Type: Grant
    Filed: October 31, 2011
    Date of Patent: March 18, 2014
    Assignee: International Business Machines Corporation
    Inventors: Lisa M. Brown, Rogerio S. Feris, Frederik C. Kjeldsen, Kristina Scherbaum
  • Patent number: 8670590
    Abstract: An image processing device for improving the accuracy of optical flow calculation when an optical flow is calculated in a window unit. An image processing device for calculating an optical flow on the basis of image information within a window for a processing target using a plurality of images captured at different times includes position acquisition means which acquires position information of the processing target and setting means which sets a size of a window for calculating an optical flow on the basis of the position information acquired by the position acquisition means.
    Type: Grant
    Filed: July 30, 2009
    Date of Patent: March 11, 2014
    Assignee: Toyota Jidosha Kabushiki Kaisha
    Inventor: Naohide Uchida
  • Patent number: 8666117
    Abstract: A method for determining a parking violation includes receiving video data as a sequence of frames provided by a camera. The method includes defining a location of an exclusion zone in the video data. The method includes detecting a vehicle located in the defined exclusion zone. The detecting includes determining a background in an initial frame of the video data and determining a background in a select frame by applying a predetermined updating process. The detecting includes subtracting the background of the select frame from the initial frame to obtain an image difference. The detecting includes classifying the pixels in the image difference as foreground or background pixels and classifying the pixels in the foreground image as vehicle or non-vehicle pixels. The method includes determining a duration that the detected vehicle is in the exclusion zone based on the number of frames in the sequence that include the detected vehicle.
    Type: Grant
    Filed: April 6, 2012
    Date of Patent: March 4, 2014
    Assignee: Xerox Corporation
    Inventors: Orhan Bulan, Yao Rong Wang, Robert P. Loce, Edgar A. Bernal, Zhigang Fan, Graham S. Pennington, David P. Cummins
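The detection and timing steps above can be sketched roughly as follows. The thresholds, the rectangular zone, and the use of a single static background frame are simplifying assumptions for illustration; the patent updates the background frame by frame.

```python
def frames_in_zone(frames, background, zone, thresh=50, min_pixels=3):
    """Count frames whose exclusion zone contains enough foreground pixels.

    zone: (top, left, bottom, right) inclusive bounds of an assumed
    rectangular exclusion zone; frames/background are 2-D lists.
    """
    top, left, bottom, right = zone
    count = 0
    for frame in frames:
        # Background subtraction: a pixel is foreground if it deviates
        # from the background by more than the threshold.
        fg = sum(
            1
            for r in range(top, bottom + 1)
            for c in range(left, right + 1)
            if abs(frame[r][c] - background[r][c]) > thresh
        )
        if fg >= min_pixels:
            count += 1
    return count

def duration_seconds(n_frames, fps=30.0):
    # Duration follows directly from the number of frames containing the vehicle.
    return n_frames / fps
```

A violation could then be flagged when the duration exceeds the permitted stopping time for the zone.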
  • Patent number: 8660303
    Abstract: A system and method for detecting and tracking targets including body parts and props is described. In one aspect, the disclosed technology acquires one or more depth images, generates one or more classification maps associated with one or more body parts and one or more props, tracks the one or more body parts using a skeletal tracking system, tracks the one or more props using a prop tracking system, and reports metrics regarding the one or more body parts and the one or more props. In some embodiments, feedback may occur between the skeletal tracking system and the prop tracking system.
    Type: Grant
    Filed: December 20, 2010
    Date of Patent: February 25, 2014
    Assignee: Microsoft Corporation
    Inventors: Shahram Izadi, Jamie Shotton, John Winn, Antonio Criminisi, Otmar Hilliges, Mat Cook, David Molyneaux
  • Patent number: 8660310
    Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device and a model of a user in the depth image may be generated. The background of a received depth image may be removed to isolate a human target in the received depth image. A model may then be adjusted to fit within the isolated human target in the received depth image. To adjust the model, a joint or a bone may be magnetized to the closest pixel of the isolated human target. The joint or the bone may then be refined such that the joint or the bone may be further adjusted to a pixel equidistant from the two edges of the body part of the isolated human target to which the joint or bone may have been magnetized.
    Type: Grant
    Filed: December 13, 2012
    Date of Patent: February 25, 2014
    Assignee: Microsoft Corporation
    Inventor: Zsolt Mathe
  • Patent number: 8655022
    Abstract: The position detection system includes a pressure sensor that detects the vertical position of an underwater vehicle; a range sensor unit that detects the relative distances of the underwater vehicle from its surrounding structures; a measurement image acquisition unit that acquires a measurement image of the horizontal plane; an image storage unit that stores images; an image selector that selects one of the stored images that corresponds to the horizontal plane in which the relative distances have been detected; a corresponding-area identification unit that identifies the area in the selected image that corresponds to the measurement image by performing map matching; and a horizontal position calculator that identifies the pixel that corresponds to the position at which the relative distances have been detected and calculates the horizontal position of the underwater vehicle.
    Type: Grant
    Filed: February 17, 2010
    Date of Patent: February 18, 2014
    Assignee: Hitachi-GE Nuclear Energy, Ltd.
    Inventors: Ryosuke Kobayashi, Satoshi Okada, Masahiro Tooma, Yutaka Kometani, Yosuke Takatori, Mitsuru Odakura, Kojirou Kodaira
  • Patent number: 8643741
    Abstract: Devices, methods, and computer readable media for performing image orientation detection using image processing techniques are described. In one implementation, an image processing method is disclosed that obtains image data from a first image captured by an image sensor (e.g., from any image capture electronic device). Positional sensor data captured by the device and corresponding to the image data may also be acquired (e.g., through an accelerometer). If the orientation of the device is not reliably discernible from the positional sensor data, the method may attempt to use rotationally invariant character detection metrics to determine the most likely orientation of the image, e.g., by using a decision forest algorithm. Face detection information may be used in conjunction with, or as a substitute for, the character detection data based on one or more priority parameters. Image orientation information may then be included within the image's metadata.
    Type: Grant
    Filed: January 17, 2012
    Date of Patent: February 4, 2014
    Assignee: Apple Inc.
    Inventor: Ralph Brunner
  • Patent number: 8644560
    Abstract: An image processing apparatus includes a depth image obtaining unit configured to obtain a depth image including information on distances from an image-capturing position to a subject in a two-dimensional image to be captured; a local tip portion detection unit configured to detect a portion of the subject at a depth and a position close to the image-capturing position as a local tip portion; a projecting portion detection unit configured to detect the local tip portion as a projecting portion in a case where, when each of the blocks is set as a block of interest, the local tip portion of the block of interest becomes the local tip portion closest to the image-capturing position within an area formed of the plurality of blocks adjacent to the block of interest; and a tracking unit configured to continuously track the position of the projecting portion.
    Type: Grant
    Filed: October 31, 2011
    Date of Patent: February 4, 2014
    Assignee: Sony Corporation
    Inventor: Yoshihiro Myokan
  • Patent number: 8634592
    Abstract: A system for predicting object location includes a video capture system for capturing a plurality of video frames, each of the video frames having a first area, an object isolation element for locating an object in each of the plurality of video frames, the object being located at a first actual position in a first video frame and being located at a second actual position in a second video frame, and a trajectory calculation element configured to analyze the first actual position and the second actual position to determine an object trajectory, the object trajectory comprising past trajectory and predicted future trajectory, wherein the predicted future trajectory is used to determine a second area in a subsequent video frame in which to search for the object, wherein the second area is different in size than the first area.
    Type: Grant
    Filed: August 23, 2012
    Date of Patent: January 21, 2014
    Assignee: Disney Enterprises, Inc.
    Inventors: David L. Casamona, Christopher C. Pond, Anthony J. Bailey
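Under an assumed constant-velocity model, the trajectory-based search described above reduces to a short calculation. This is a sketch, not the patented implementation: extrapolate the next position from the two actual positions, then size a search window around it that differs from the full-frame first area.

```python
def predict_search_area(p1, p2, half_size=10):
    """Linearly extrapolate the next object position from two actual
    positions and return an (x_min, y_min, x_max, y_max) search window.

    half_size controls the second area's extent, which is deliberately
    different from (typically smaller than) the full frame.
    """
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]  # per-frame velocity estimate
    px, py = p2[0] + vx, p2[1] + vy        # predicted next position
    return (px - half_size, py - half_size, px + half_size, py + half_size)
```

A real tracker would also clip the window to the frame bounds and grow it when the prediction error increases.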
  • Patent number: 8625933
    Abstract: An image processing apparatus can detect a predetermined target object from image data. The image processing apparatus includes an image zooming unit configured to generate a plurality of pieces of zoomed image data that are mutually different in magnification from the image data input by an image inputting unit, a detection unit configured to extract a partial area from the plurality of pieces of zoomed image data generated by the image zooming unit, and detect the predetermined target object by performing collation to determine whether the extracted partial area coincides with a detection pattern stored in a detected pattern storage unit, and a detected information storage unit configured to store detection information including magnification information of the zoomed image data from which the predetermined target object is detected by the detection unit.
    Type: Grant
    Filed: October 1, 2009
    Date of Patent: January 7, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventor: Yoshinobu Nagamasa
  • Patent number: 8625934
    Abstract: A method of detecting displacement with sub-pixel accuracy includes the steps of: capturing a first array image and a second array image; interpolating the first array image to form a reference image; interpolating the second array image to form a comparison image; comparing the reference image with the comparison image so as to obtain a displacement. The present invention also provides an apparatus for detecting displacement with sub-pixel accuracy.
    Type: Grant
    Filed: December 20, 2011
    Date of Patent: January 7, 2014
    Assignee: Pixart Imaging Inc.
    Inventors: Hsin Chia Chen, Ming Tsan Kao
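A simplified one-dimensional sketch of this scheme (the patent operates on two-dimensional array images): linearly interpolate both signals to double resolution, then search integer shifts on the fine grid, which yields half-pixel accuracy. The sum-of-absolute-differences matching criterion is an assumption made for illustration.

```python
def interpolate2x(sig):
    """Linear interpolation: insert the midpoint between adjacent samples."""
    out = []
    for a, b in zip(sig, sig[1:]):
        out.extend([a, (a + b) / 2.0])
    out.append(sig[-1])
    return out

def displacement(reference, comparison, max_shift=4):
    """Best shift (in original pixels) of comparison vs. reference,
    minimising mean absolute difference on the half-pixel grid."""
    ref, cmp_ = interpolate2x(reference), interpolate2x(comparison)
    best_shift, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):  # shifts on the fine grid
        pairs = [(ref[i], cmp_[i + s]) for i in range(len(ref))
                 if 0 <= i + s < len(cmp_)]
        err = sum(abs(a - b) for a, b in pairs) / len(pairs)
        if err < best_err:
            best_err, best_shift = err, s
    return best_shift / 2.0  # fine-grid units -> original pixels
```

Denser interpolation grids would give correspondingly finer sub-pixel resolution.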
  • Patent number: 8624932
    Abstract: A method of using stereo vision to interface with a computer is provided. The method includes capturing a stereo image, and processing the stereo image to determine position information of an object in the stereo image. The object is controlled by a user. The method also includes communicating the position information to the computer to allow the user to interact with a computer application.
    Type: Grant
    Filed: August 23, 2012
    Date of Patent: January 7, 2014
    Assignee: Qualcomm Incorporated
    Inventors: Evan Hildreth, Francis MacDougall
  • Publication number: 20140003740
    Abstract: Disclosed is a two-dimensional pattern comprising a plurality of R-planes, each comprising a tiling of a corresponding R-ary block, being a block of radix R integer values, where for each dimension of the pattern, the least common multiple of the sizes of the tiled blocks in that dimension is greater than the size of the tiling in that dimension, and any sub-block of a size less than the tiled blocks occurs on a regular grid with the same periodicity as the tiled block for that R-plane. The pattern may be used in determining the position of a location captured in an image by projecting the pattern onto a scene. An image is captured. The method determines from the captured image a sub-block associated with the location and constructs a unique integer value for each R-plane. The unique integer values from each R-plane are used to determine the location in the image.
    Type: Application
    Filed: December 13, 2011
    Publication date: January 2, 2014
    Applicant: CANON KABUSHIKI KAISHA
    Inventor: Donald James Bone
  • Patent number: 8620027
    Abstract: An augmented reality-based file transfer method and a related file transfer system integrated with cloud computing are provided. The file transfer method is applied to file transmission between a first device and a second device wirelessly connected to each other, wherein the first device includes a file, a display unit, and an input unit electronically connected to the display unit. The file transfer method includes the following steps: when an image stored in the first device is opened, displaying the file and the image on the display unit of the first device, wherein the image comprises a face image of a second user; when the file is dragged to the face image of the second user shown in the image via the input unit and is then released, generating a command; and transferring the file from the first device to the second device according to the command.
    Type: Grant
    Filed: February 1, 2012
    Date of Patent: December 31, 2013
    Assignee: Acer Incorporated
    Inventors: Yu-Chee Tseng, Lien-Wu Chen, Yu-Hao Peng
  • Patent number: 8620029
    Abstract: Systems and methods for identifying, tracking, and using objects in a video or similar electronic content, including methods for tracking one or more moving objects in a video. This can involve tracking one or more feature points within a video scene and separating those feature points into multiple layers based on motion paths. Each such motion layer can be further divided into different clusters, for example, based on distances between points. These clusters can then be used as an estimate to define the boundaries of the objects in video. Objects can also be compared with one another in cases in which identified objects should be combined and considered a single object. For example, if two objects in the first two frames have significantly overlapping areas, they may be considered the same object. Objects in each frame can further be compared to determine the life of the objects across the frames.
    Type: Grant
    Filed: July 23, 2012
    Date of Patent: December 31, 2013
    Assignee: Adobe Systems Incorporated
    Inventors: Anuj Dhawan, Abhinav Darbari, Ramesh P. B.
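The two-stage grouping described above, motion layers first and then distance-based clusters, can be sketched as below. The tolerance values and the greedy single-linkage clustering are illustrative assumptions, not the patented algorithm.

```python
def motion_layers(motions, tol=1.0):
    """Group feature-point indices whose motion vectors agree within tol."""
    layers = []
    for i, (mx, my) in enumerate(motions):
        for layer in layers:
            rx, ry = motions[layer[0]]  # compare against the layer's first point
            if abs(mx - rx) <= tol and abs(my - ry) <= tol:
                layer.append(i)
                break
        else:
            layers.append([i])
    return layers

def split_by_distance(points, indices, max_dist=5.0):
    """Greedy split of one motion layer into spatial clusters: a point joins
    the first cluster containing a point within max_dist (L1 distance)."""
    clusters = []
    for i in indices:
        for cl in clusters:
            if any(abs(points[i][0] - points[j][0]) +
                   abs(points[i][1] - points[j][1]) <= max_dist for j in cl):
                cl.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```

Each resulting cluster serves as an estimate of one object's boundary; clusters with heavily overlapping areas across frames would then be merged into a single object.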
  • Patent number: 8620025
    Abstract: In order to perform vehicle control, warning processes, and the like that do not give a driver an uncomfortable feeling when adjusting speed or issuing warnings for a road shape such as a curve, it is necessary to recognize not only a near road shape but also a far road shape with high accuracy.
    Type: Grant
    Filed: June 24, 2009
    Date of Patent: December 31, 2013
    Assignee: Hitachi, Ltd.
    Inventors: Mirai Higuchi, Ryo Ota, Takuya Naka, Morihiko Sakano, Shoji Muramatsu, Tatsuhiko Monji
  • Patent number: 8620079
    Abstract: Various embodiments of the invention provide systems and methods for extracting information from digital documents, including physical documents that have been converted to digital documents. For example, some embodiments are configured to extract information from a field in a digital document by identifying a block of tokens before (i.e., a prior block) and a block of tokens after (i.e., a post block) the field from which the information is to be extracted, where both the prior block and post block are known to be associated with the field type of the field (e.g., name, address, phone number, etc.).
    Type: Grant
    Filed: May 10, 2011
    Date of Patent: December 31, 2013
    Assignee: First American Data Tree LLC
    Inventors: Christopher Lawrence Rubio, Vladimir Sevastyanov
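The prior-block/post-block idea can be sketched as a token-window search: the field's value is whatever lies between the first occurrence of a block known to precede the field and the following occurrence of a block known to follow it. A minimal sketch, not the patented implementation; the block contents used in practice would come from training on documents of the same type.

```python
def extract_field(tokens, prior_block, post_block):
    """Return the tokens between the first occurrence of prior_block and
    the next occurrence of post_block, or None if either is absent."""
    n, m = len(prior_block), len(post_block)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == prior_block:
            start = i + n
            # Scan forward from the prior block for the post block.
            for j in range(start, len(tokens) - m + 1):
                if tokens[j:j + m] == post_block:
                    return tokens[start:j]
            return None
    return None
```

For a name field, for example, the prior block might be the label tokens that precede owner names in that document type, and the post block the label that follows them.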
  • Patent number: 8611702
    Abstract: An information interchange unit, a storage unit, and a display controller are configured such that, after an image selection unit selects a first image and a second image, the information interchange unit interchanges, automatically, first image information of the first image with second image information of the second image, or interchanges, automatically, first position information of the first image with second position information of the second image, the storage unit stores and correlates the first image information and the second position information, and stores and correlates the second image information and the first position information, and the display controller controls, automatically, a display to display the one image based on the first image information and the second position information, and the another image based on the second image information and the first position information.
    Type: Grant
    Filed: October 26, 2012
    Date of Patent: December 17, 2013
    Assignee: Brother Kogyo Kabushiki Kaisha
    Inventors: Takahiko Watari, Tatsuya Sato
  • Patent number: 8611596
    Abstract: A display device and a control method thereof are provided. The display device includes a camera obtaining an image and a controller obtaining the direction of a user included in the obtained image and correcting the image such that the direction of the user is synchronized with the photographing direction of the camera. Even when the direction of the user does not correspond to the photographing direction of the camera, an image of the user can be corrected to correctly recognize a user's gesture.
    Type: Grant
    Filed: January 3, 2011
    Date of Patent: December 17, 2013
    Assignee: LG Electronics Inc.
    Inventors: Sungun Kim, Soungmin Im
  • Patent number: 8611673
    Abstract: A web-based application provides more accurate and clearer methods of searching, sorting, and displaying a set of images stored in a database. A first aspect of the present invention is the method by which image data is stored. Typical content-based systems use color information, whereas the present invention uses an image-location tagging method. A second aspect of the present invention is the method by which the set of images is sorted by relevance. Tag data of the images allows for a new and fast method of searching through an entire set. A third aspect of the present invention is the method by which the sorted images are displayed to the user. Instead of the common method of just displaying the images in a rectangular array, where each image is the same size, the web-based application positions and sizes each image based on how relevant it is.
    Type: Grant
    Filed: September 14, 2006
    Date of Patent: December 17, 2013
    Inventors: Parham Aarabi, Ron Appel
  • Patent number: 8606044
    Abstract: An image processing apparatus includes a geometric position obtaining unit, an image retrieving unit, and an image rectifying unit. The geometric position obtaining unit receives a geometric transformation parameter, a block size, and a tile size, obtains a plurality of base-point coordinates of the geometric transformation parameter according to the block size and the tile size, and builds a base-point coordinate table according to the base-point coordinates. The image retrieving unit reads the base-point coordinate table, scans the base-point coordinate table according to a fixed block size to generate a plurality of reference image ranges, and respectively retrieves a plurality of partial image data of a to-be-processed image data according to each of the reference image ranges. The image rectifying unit rectifies each of the partial image data according to the geometric transformation parameter.
    Type: Grant
    Filed: May 22, 2011
    Date of Patent: December 10, 2013
    Assignee: Altek Corporation
    Inventors: Chih-Feng Liu, Po-Jung Lin, Da-Ming Chang
  • Patent number: 8600116
    Abstract: Multiple-object speed tracking apparatuses are disclosed, including a camera configured to capture a set of images of a monitored area (e.g., a roadway). The camera's longitudinal axis may be positioned at any viewing angle relative to a longitudinal axis of a roadway such that at least two moving objects moving on the roadway are included in a set of high or low resolution images. A computer system is configured to analyze the set of images to detect the two moving objects and substantially simultaneously determine a calculated rate of speed of at least one of the two moving objects. The computer system also provides an on-site speed calibration process for transforming locations of an image among the set of images into real-world coordinates by considering both perspective and scale of the image. An apparatus mount for at least one of either the camera or the computer system is also disclosed.
    Type: Grant
    Filed: May 22, 2012
    Date of Patent: December 3, 2013
    Assignee: American Traffic Solutions, Inc.
    Inventor: Jigang Wang
  • Patent number: 8600107
    Abstract: A method of determining locations of at least two pointers in a captured image frame comprises generating a vertical intensity profile (VIP) from the captured image frame, the VIP comprising peaks generally corresponding to the at least two pointers; determining if the peaks are closely spaced and, if the peaks are closely spaced, fitting a curve to the VIP; analyzing the fitted curve to determine peak locations of the fitted curve; and registering the peak locations as the pointer locations.
    Type: Grant
    Filed: March 31, 2011
    Date of Patent: December 3, 2013
    Assignee: SMART Technologies ULC
    Inventor: David Holmgren
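A condensed sketch of the VIP computation described above: sum pixel intensities per image column to form the profile, locate its local maxima, and refine each peak with a parabola through the three surrounding samples. The patent fits a curve only when the peaks are closely spaced; the per-peak parabola here is a simplifying stand-in for that curve fit.

```python
def vertical_intensity_profile(image):
    """One summed intensity value per image column (2-D list input)."""
    return [sum(col) for col in zip(*image)]

def peak_locations(vip):
    """Local maxima of the VIP, refined to sub-pixel positions by fitting
    a parabola through the three samples around each maximum."""
    peaks = []
    for x in range(1, len(vip) - 1):
        if vip[x] > vip[x - 1] and vip[x] >= vip[x + 1]:
            a, b, c = vip[x - 1], vip[x], vip[x + 1]
            denom = a - 2 * b + c
            # Vertex of the parabola through (x-1,a), (x,b), (x+1,c).
            offset = 0.5 * (a - c) / denom if denom else 0.0
            peaks.append(x + offset)
    return peaks
```

Registering the refined peak positions as pointer locations completes the pipeline for two (or more) pointers.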
  • Patent number: 8593523
    Abstract: A method and an apparatus for capturing facial expressions are provided, in which different facial expressions of a user are captured through a face recognition technique. In the method, a plurality of sequentially captured images containing human faces is received. Regional features of the human faces in the images are respectively captured to generate a target feature vector. The target feature vector is compared with a plurality of previously stored feature vectors to generate a parameter value. When the parameter value is higher than a threshold, one of the images is selected as a target image. Moreover, facial expression recognition and classification procedures can be further performed. For example, the target image is recognized to obtain a facial expression state, and the image is classified according to the facial expression state.
    Type: Grant
    Filed: March 24, 2011
    Date of Patent: November 26, 2013
    Assignee: Industrial Technology Research Institute
    Inventors: Shian Wan, Yuan-Shi Liao, Yea-Shuan Huang, Shun-Hsu Chuang
  • Patent number: 8588471
    Abstract: A mapping method is provided. The environment is scanned to obtain depth information of environmental obstacles. The image of the environment is captured to generate an image plane. The depth information of environmental obstacles is projected onto the image plane, so as to obtain projection positions. At least one feature vector is calculated from a predetermined range around each projection position. The environmental obstacle depth information and the environmental feature vector are merged to generate a sub-map at a certain time point. Sub-maps at all time points are combined to generate a map. In addition, a localization method using the map is also provided.
    Type: Grant
    Filed: February 4, 2010
    Date of Patent: November 19, 2013
    Assignee: Industrial Technology Research Institute
    Inventors: Hsiang-Wen Hsieh, Hung-Hsiu Yu, Yu-Kuen Tsai, Wei-Han Wang, Chin-Chia Wu
  • Patent number: 8577439
    Abstract: A system for positioning electrodes on a patient body includes an image capturing system, a memory device, a processing system and an indicator system. The image capturing system generates an actual image of the patient body. The memory device stores a reference image, the reference image including a reference body and a reference position on the reference body. The processing system compares the actual image and the reference image and determines an electrode position on the patient body by matching the reference position on the reference body. The indicator system indicates the electrode position on the patient body.
    Type: Grant
    Filed: September 3, 2007
    Date of Patent: November 5, 2013
    Assignee: Koninklijke Philips N.V.
    Inventors: Robert Pinter, Jens Muehlsteff, Guido Josef Muesch
  • Patent number: 8577175
    Abstract: A subject position determination method includes: generating a plurality of binarized images of a target image based upon color information or brightness information of the target image; calculating an evaluation value used to determine a subject position in the target image for each of the plurality of binarized images; and determining a subject position in the target image based upon the evaluation value.
    Type: Grant
    Filed: July 7, 2010
    Date of Patent: November 5, 2013
    Assignee: Nikon Corporation
    Inventor: Hiroyuki Abe
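One illustrative way to realise this scheme (the abstract does not specify the evaluation value, so the compactness score below is an assumption): binarize the image at several brightness thresholds, score each binary image by how compact its foreground is, and report the centroid of the best-scoring foreground as the subject position.

```python
def binarize(image, t):
    """Threshold a 2-D brightness image into a 0/1 binary image."""
    return [[1 if px > t else 0 for px in row] for row in image]

def compactness(binary):
    """(score, centroid): foreground pixel count divided by bounding-box
    area, plus the foreground centroid; (0.0, None) if no foreground."""
    pts = [(r, c) for r, row in enumerate(binary)
           for c, v in enumerate(row) if v]
    if not pts:
        return 0.0, None
    rows = [p[0] for p in pts]
    cols = [p[1] for p in pts]
    area = (max(rows) - min(rows) + 1) * (max(cols) - min(cols) + 1)
    centroid = (sum(rows) / len(pts), sum(cols) / len(pts))
    return len(pts) / area, centroid

def subject_position(image, thresholds=(64, 128, 192)):
    """Subject position from the binarization with the best evaluation value."""
    best = max((compactness(binarize(image, t)) for t in thresholds),
               key=lambda sc: sc[0])
    return best[1]
```

A compact foreground rewards thresholds that isolate a single coherent subject rather than scattered background clutter.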
  • Patent number: 8577176
    Abstract: A position and orientation measuring apparatus calculates a difference between an image feature of a two-dimensional image of an object and a projected image of a three-dimensional model in a stored position and orientation of the object projected on the two-dimensional image. The position and orientation measuring apparatus further calculates a difference between three-dimensional coordinate information and a three-dimensional model in the stored position and orientation of the object. The position and orientation measuring apparatus then converts a dimension of the first difference and/or the second difference to cause the first difference and the second difference to have an equivalent dimension and corrects the stored position and orientation.
    Type: Grant
    Filed: July 6, 2010
    Date of Patent: November 5, 2013
    Assignee: Canon Kabushiki Kaisha
    Inventors: Daisuke Kotake, Shinji Uchiyama
  • Patent number: 8577081
    Abstract: Mobile video-based therapy, using a portable therapy device that includes a camera, a therapy application database, a processor, and a display. The camera is configured to generate images of a user, and the therapy application database is configured to store therapy applications. The processor is configured to select, from the therapy application database, a therapy application appropriate for assisting in physical or cognitive rehabilitation or therapy of the user, to invoke the therapy application, to recognize a gesture of the user from the generated images, and to control the invoked therapy application based on the recognized gesture. The display is configured to display an output of the controlled therapy application.
    Type: Grant
    Filed: December 9, 2011
    Date of Patent: November 5, 2013
    Assignee: Qualcomm Incorporated
    Inventors: Ronald L. Kelusky, Scott Robinson
  • Patent number: 8565483
    Abstract: A sensor unit is installed on a target object and detects a given physical amount. A data acquisition unit acquires output data of the sensor unit over a period including a first period, for which the real value of m time integrals of the physical amount is known, and a second period, which is the target for motion analysis. An error time function estimating unit performs m time integrals of the output data of the sensor unit and estimates a time function of the error of the detected physical amount with respect to its real value, based on the difference between the value of m time integrals of the output data and the real value for the first period.
    Type: Grant
    Filed: October 31, 2011
    Date of Patent: October 22, 2013
    Assignee: Seiko Epson Corporation
    Inventor: Yasushi Nakaoka
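    The estimation step can be sketched for the simplest case, m = 1 with a constant sensor bias: integrate the output once over the first period, then solve for the bias that explains the gap between the integral and the known real value. The linear error model and all names here are illustrative assumptions, not the patent's method.

```python
import numpy as np

def estimate_bias(samples, dt, real_final, real_initial=0.0):
    # Integrate the sensor output once (m = 1) over the first period,
    # then solve for the constant bias that explains the difference
    # between the integral and the known real final value.
    t_end = len(samples) * dt
    integral = real_initial + float(np.sum(samples)) * dt
    return (integral - real_final) / t_end  # error model: e(t) = bias * t

# 10 s of 100 Hz data from a sensor reading 1.1 where the known real
# value at the end of the first period implies a true mean of 1.0:
bias = estimate_bias(np.full(1000, 1.1), dt=0.01, real_final=10.0)
```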
  • Patent number: 8565486
    Abstract: A classification system and method are provided. The classification system includes a memory device, a processor communicatively connected to the memory device, and an input communicatively connected to the processor. The input is configured to receive data comprising at least one object that is to be classified as either an object of interest (OOI) or a nuisance of interest (NOI) based upon at least one non-Boolean attribute of the object, and the processor is configured as a Bayesian classifier to classify the object based upon the non-Boolean attribute using a non-linear probability function.
    Type: Grant
    Filed: January 5, 2012
    Date of Patent: October 22, 2013
    Assignee: Gentex Corporation
    Inventors: David J. Wright, Gregory S. Bush, David M. Falb
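    A minimal sketch of a Bayesian classifier that applies a non-linear probability function to a continuous (non-Boolean) attribute, assuming a logistic likelihood; the function shape and parameters are hypothetical, not taken from the patent.

```python
import math

def p_ooi(x, prior_ooi=0.5, midpoint=0.5, steepness=8.0):
    # Non-linear (logistic) mapping from a continuous attribute x to a
    # class likelihood, followed by Bayes' rule against the NOI class.
    like_ooi = 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))
    like_noi = 1.0 - like_ooi
    num = like_ooi * prior_ooi
    return num / (num + like_noi * (1.0 - prior_ooi))
```

With equal priors the posterior reduces to the logistic likelihood itself, so attribute values well above the midpoint yield confident OOI classifications and values well below it confident NOI classifications.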
  • Patent number: 8564826
    Abstract: When an image is shifted to prevent it from overlapping with a finishing position, the amount of shift needed to avoid the overlap may be large, and the desired layout may not be obtained. Conversely, if the image is not shifted in order to preserve the desired layout, it may overlap with the finishing position, and toner or the like may come off. When it is determined that the position where the finishing process is to be executed overlaps with a content data placement area, an avoidance area in which printing is not performed is placed over the region of overlap, without changing the position and size of the content data placement area.
    Type: Grant
    Filed: August 4, 2010
    Date of Patent: October 22, 2013
    Assignee: Canon Kabushiki Kaisha
    Inventor: Hidekazu Morooka
  • Patent number: 8559725
    Abstract: A disclosed method for extracting a raster image of a page from a portable electronic document includes (a) acquiring commands and resources of the raster image of the page by analyzing a format of the portable electronic document, (b) extracting first and second candidate raster images by processing the commands and the resources of the raster image of the page, (c) integrating the first and second candidate raster images as an integrated candidate raster image provided that the first and second candidate raster images are linked together, and (d) removing a pseudo-raster image from the integrated candidate raster image.
    Type: Grant
    Filed: May 21, 2010
    Date of Patent: October 15, 2013
    Assignee: Ricoh Company, Ltd.
    Inventors: Cheng Du, Wenhui Xu, Fumihiro Hasegawa, Koichi Inoue
  • Patent number: 8559798
    Abstract: A rendering process for rendering an image frame and a postprocess for adapting the image frame to a display are separated. A rendering processing unit 42 generates an image frame sequence by performing rendering at a predetermined frame rate regardless of a condition that the image frame should meet for output to the display. A postprocessing unit 50 subjects the image frame sequence generated by the rendering processing unit to a merge process so as to generate and output an updated image frame sequence that meets the condition. Since the rendering process and the postprocess are separated, the image frame sequence can be generated regardless of the specification of the display such as resolution and frame rate of the display.
    Type: Grant
    Filed: May 19, 2005
    Date of Patent: October 15, 2013
    Assignees: Sony Corporation, Sony Computer Entertainment Inc.
    Inventors: Sachiyo Aoki, Akio Ohba, Masaaki Oka, Nobuo Sasaki
  • Patent number: 8553939
    Abstract: A method of tracking a target includes receiving from a source a depth image of a scene including a human subject. The depth image includes a depth value for each of a plurality of pixels. The method further includes identifying the pixels of the depth image that belong to the human subject and deriving from the identified pixels one or more machine-readable data structures representing the human subject as a body model including a plurality of shapes.
    Type: Grant
    Filed: February 29, 2012
    Date of Patent: October 8, 2013
    Assignee: Microsoft Corporation
    Inventors: Robert Matthew Craig, Tommer Leyvand, Craig Peeper, Momin M. Al-Ghosien, Matt Bronder, Oliver Williams, Ryan M. Geiss, Jamie Daniel Joseph Shotton, Johnny Lee, Mark Finocchio
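    The pixel-identification step can be illustrated with a simple depth-band segmentation, assuming the subject occupies a known depth range; this is a hypothetical simplification, not the classifier the patent actually describes.

```python
import numpy as np

def subject_mask(depth, near, far):
    # Keep only pixels whose depth falls inside the band where the
    # human subject is assumed to stand; background pixels drop out.
    return (depth >= near) & (depth <= far)

# A 2x2 depth image (metres): two subject pixels, two background pixels.
depth = np.array([[3.0, 1.2],
                  [1.1, 3.5]])
mask = subject_mask(depth, near=1.0, far=2.0)
```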
  • Patent number: 8553936
    Abstract: Many athletic endeavors require focus on a moving object and subsequent coordination of bodily movement, either in response to movement of the object or in an attempt to manipulate the object. Coordinated movement of the head and eyes can be improved through training when errors in gaze/head-movement coordination are identified. There exists a need for systems and methods capable of tracking and coordinating changes in position of, for example, the head relative to a participant's gaze, and of providing feedback on a participant's error in tracking moving objects. Exemplary embodiments of the present invention relate to the technology of gaze tracking, and more particularly to the application of gaze tracking technology to aid in the development of head-and-eye-movement coordination.
    Type: Grant
    Filed: March 3, 2010
    Date of Patent: October 8, 2013
    Assignee: The Ohio State University
    Inventor: Nicklaus F. Fogt
  • Patent number: 8548199
    Abstract: There is provided an image processing device including: a data storage unit that stores object identification data for identifying an object operable by a user and feature data indicating a feature of appearance of each object; an environment map storage unit that stores an environment map representing a position of one or more objects existing in a real space and generated based on an input image obtained by imaging the real space using an imaging device and the feature data stored in the data storage unit; and a selecting unit that selects at least one object recognized as being operable based on the object identification data, out of the objects included in the environment map stored in the environment map storage unit, as a candidate object being a possible operation target by a user.
    Type: Grant
    Filed: November 7, 2012
    Date of Patent: October 1, 2013
    Assignee: Sony Corporation
    Inventors: Masaki Fukuchi, Kouichi Matsuda, Yasuhiro Suto, Kenichiro Oi, Jingjing Guo
  • Patent number: 8542943
    Abstract: A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation and Lambertian and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. In addition, the object can be the head of the user. The position of the head of the user is dynamically tracked so that a three-dimensional model representative of the head is generated. Synthetic light is applied to a position on the model to form an illuminated model.
    Type: Grant
    Filed: November 23, 2011
    Date of Patent: September 24, 2013
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Andrea Basso, Eric Cosatto, David Crawford Gibbon, Hans Peter Graf, Shan Liu
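    The virtual illumination equation is described as combining light attenuation with Lambertian and specular reflection. A classic Phong-style sketch of such an equation, with illustrative coefficients not taken from the patent, might look like this.

```python
import numpy as np

def virtual_light(normal, light_dir, view_dir, dist,
                  kd=0.7, ks=0.3, shininess=16, atten_k=0.1):
    # The three terms the abstract names: distance attenuation,
    # Lambertian (diffuse) reflection, and specular reflection.
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    attenuation = 1.0 / (1.0 + atten_k * dist ** 2)
    diffuse = kd * max(0.0, float(n @ l))
    r = 2.0 * (n @ l) * n - l               # mirror reflection of l about n
    specular = ks * max(0.0, float(r @ v)) ** shininess
    return attenuation * (diffuse + specular)

# Light and viewer aligned with the surface normal, one unit away:
i = virtual_light(np.array([0., 0., 1.]), np.array([0., 0., 1.]),
                  np.array([0., 0., 1.]), dist=1.0)
```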
  • Patent number: 8542909
    Abstract: There is provided a method of measuring the 3D depth of a stereoscopic image, comprising providing left and right eye input images, applying an edge extraction filter to each of the left and right eye input images, and determining the 3D depth of the stereoscopic image using the edge-extracted left and right eye images. There is also provided an apparatus for carrying out the method of measuring 3D depth of a stereoscopic image.
    Type: Grant
    Filed: April 6, 2011
    Date of Patent: September 24, 2013
    Assignee: Tektronix, Inc.
    Inventors: Antoni Caceres, Martin Norman
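    The method can be sketched on a single image row: extract edges from each eye's row, then find the horizontal shift that best aligns the two edge maps. This toy version, using a simple gradient filter and a cross-correlation score of my own choosing, only illustrates the idea; a real measurement would aggregate over the whole frame.

```python
import numpy as np

def edge_filter(row):
    # Horizontal gradient magnitude as a minimal edge-extraction filter.
    return np.abs(np.diff(np.asarray(row, dtype=float)))

def estimate_disparity(left_row, right_row, max_d=8):
    # Score each candidate shift d by correlating the right eye's edge
    # map, shifted left by d, against the left eye's edge map (here the
    # right eye's edges sit d pixels further right than the left eye's).
    el, er = edge_filter(left_row), edge_filter(right_row)
    scores = [float(np.sum(er[d:] * el[:el.size - d])) for d in range(max_d)]
    return int(np.argmax(scores))

# A bright bar shifted right by 3 pixels between the two eyes:
left = np.array([0, 0, 0, 0, 0, 9, 9, 9, 9, 9, 9, 9, 0, 0, 0, 0])
right = np.roll(left, 3)
d = estimate_disparity(left, right)
```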
  • Patent number: 8542906
    Abstract: A method is provided to implement augmented reality using markers. An image is captured of an environment. An image of a marker is detected in the image of the environment. A virtual image is displayed overlaid on the image of the environment at an offset from the marker, wherein the virtual image is based on the marker.
    Type: Grant
    Filed: May 21, 2008
    Date of Patent: September 24, 2013
    Assignee: Sprint Communications Company L.P.
    Inventors: Carl J. Persson, Thomas H. Wilson
  • Patent number: 8537409
    Abstract: An automated system and a method for extracting a region of interest from a digital image are disclosed. The method includes identifying a subset of training images from a larger set of training images, each training image in the set having a respective identified region of interest. The subset is identified based on a measure of similarity between the digital image and the images in the set of training images. At least one region of interest is extracted from the digital image based on an analysis of the identified regions of interest in the subset of training images.
    Type: Grant
    Filed: October 13, 2008
    Date of Patent: September 17, 2013
    Assignee: Xerox Corporation
    Inventors: Luca Marchesotti, Claudio Cifarelli
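    The subset-identification step can be sketched as nearest-neighbour retrieval in an image feature space; the Euclidean similarity measure used here is an assumption, since the abstract does not specify one.

```python
import numpy as np

def select_training_subset(query_feat, train_feats, k=3):
    # Rank training images by Euclidean distance to the query in
    # feature space and keep the k most similar ones; their annotated
    # regions of interest would then drive the extraction step.
    d = np.linalg.norm(np.asarray(train_feats, dtype=float)
                       - np.asarray(query_feat, dtype=float), axis=1)
    return [int(i) for i in np.argsort(d)[:k]]

# Four training images described by 2-D feature vectors; the query
# is closest to images 0 and 3.
feats = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [0.1, 0.2]]
subset = select_training_subset([0.0, 0.1], feats, k=2)
```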
  • Patent number: 8538082
    Abstract: The present invention provides a system and method for detecting and tracking a moving object. First, robust change detection is applied to find initial candidate regions in consecutive frames. These initial detections are then stacked to produce space-time bands, which are extracted by a Hough transform and an entropy-minimization-based band detection algorithm.
    Type: Grant
    Filed: April 19, 2012
    Date of Patent: September 17, 2013
    Assignee: SRI International
    Inventors: Tao Zhao, Manoj Aggarwal, Changjiang Yang
  • Patent number: 8515119
    Abstract: The present invention discloses a control unit, a video device employing the control unit, and a control method thereof. The control unit includes: a recognition device which recognizes the target in the target image; and a calculation device which calculates and transforms the position coordinates of the imaging target recognized by the recognition device and outputs the position of the operation unit.
    Type: Grant
    Filed: February 19, 2008
    Date of Patent: August 20, 2013
    Assignees: Hisense Beijing Electric Co., Ltd., Hisense Group Co., Ltd., Hisense Electric Co., Ltd.
    Inventors: Liang Ma, Weidong Liu
  • Patent number: 8509485
    Abstract: In an information processing apparatus, an external-information acquisition unit acquires external information such as an image, a sound, textual information, and numerical information from an input apparatus. A field-image generation unit generates, as an image, a “field” that acts on a particle for a predetermined time step based on the external information. An intermediate-image memory unit stores an intermediate image that is generated in the process of generating a field image by the field-image generation unit. A field-image memory unit stores the field image generated by the field-image generation unit. A particle-image generation unit generates data of a particle image to be output finally by using the field image stored in the field-image memory unit.
    Type: Grant
    Filed: June 7, 2011
    Date of Patent: August 13, 2013
    Assignees: Sony Corporation, Sony Computer Entertainment Inc.
    Inventors: Noriaki Shinoyama, Takashi Nozaki
  • Patent number: 8508710
    Abstract: In accordance with one embodiment of the present disclosure, a difference is detected between a first image and a second image. The second image can include at least a portion of the first image reflected from a display panel and light from an object passing through the display panel.
    Type: Grant
    Filed: December 2, 2004
    Date of Patent: August 13, 2013
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Wyatt A. Huddleston, Michael M. Blythe, Gregory W. Blythe
  • Patent number: 8503727
    Abstract: A monitoring camera terminal has an imaging portion for imaging a monitoring target area allocated to an own-terminal, an object extraction portion for processing a frame image imaged by the imaging portion to extract an imaged object, an ID addition portion for adding an ID to the object extracted by the object extraction portion, an object map creation portion for creating, for each object extracted by the object extraction portion, an object map associating the ID added to the object with a coordinate position in the frame image, and a tracing portion for tracing an object in the monitoring target area allocated to the own-terminal using the object maps created by the object map creation portion.
    Type: Grant
    Filed: April 5, 2010
    Date of Patent: August 6, 2013
    Assignees: OMRON Corporation, The University of Tokyo
    Inventors: Takeshi Naito, Shunsuke Kamijo, Kaichi Fujimura
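    The object map described here, which associates each extracted object's ID with its coordinate in the frame, can be sketched along with a simple tracing rule; the nearest-neighbour matching, the distance threshold, and the ID scheme are illustrative assumptions, not the patent's design.

```python
import itertools

class ObjectMap:
    # Per-frame map from an object ID to its (x, y) frame coordinate.
    _ids = itertools.count(1)

    def __init__(self):
        self.entries = {}

    def add(self, xy, prev_map=None, max_dist=20.0):
        # Tracing: reuse the ID of the nearest object in the previous
        # frame's map when it is close enough, else issue a fresh ID.
        if prev_map is not None and prev_map.entries:
            oid, (px, py) = min(prev_map.entries.items(),
                                key=lambda kv: (kv[1][0] - xy[0]) ** 2
                                             + (kv[1][1] - xy[1]) ** 2)
            if (px - xy[0]) ** 2 + (py - xy[1]) ** 2 <= max_dist ** 2:
                self.entries[oid] = xy
                return oid
        oid = next(ObjectMap._ids)
        self.entries[oid] = xy
        return oid

# Frame 1 sees one object; frame 2 sees it slightly moved plus a newcomer.
m1 = ObjectMap(); a = m1.add((100.0, 50.0))
m2 = ObjectMap(); b = m2.add((104.0, 52.0), prev_map=m1)
c = m2.add((300.0, 300.0), prev_map=m1)
```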