Multiple Cameras On Baseline (e.g., Range Finder, Etc.) Patents (Class 348/139)
-
Patent number: 8970708
Abstract: An alignment suite includes first and second targeting devices and an optical coupler. The first targeting device is configured to perform a positional determination regarding a downrange target. The first targeting device includes an image processor. The second targeting device is configured to perform a targeting function relative to the downrange target and is affixable to the first targeting device. The optical coupler enables the image processor to capture an image of a reference object at the second targeting device responsive to the first and second targeting devices being affixed together. The image processor employs processing circuitry that determines pose information indicative of an alignment relationship between the first and second targeting devices relative to the downrange target based on the image captured.
Type: Grant. Filed: May 11, 2012. Date of Patent: March 3, 2015. Assignee: The Johns Hopkins University. Inventors: Scott B. Goldblatt, Ryan P. DiNello-Fass, Jeffery W. Warren, Steven J. Conard
-
Patent number: 8964027
Abstract: The present disclosure provides a global calibration method based on a rigid bar for a multi-sensor vision measurement system, comprising: step 1, executing the following procedure at least nine times: placing, in front of two vision sensors to be calibrated, a rigid bar fastened with two targets respectively corresponding to the vision sensors; capturing images of the respective targets with their corresponding vision sensors; extracting coordinates of feature points of the respective targets in their corresponding images; and computing 3D coordinates of each feature point of the respective targets under their corresponding vision sensor coordinate frames; and step 2, computing the transformation matrix between the two vision sensors under the constraint of the fixed position relationship between the two targets. The present disclosure also provides a global calibration apparatus based on a rigid bar for a multi-sensor vision measurement system.
Type: Grant. Filed: August 9, 2011. Date of Patent: February 24, 2015. Assignee: Beihang University. Inventors: Guangjun Zhang, Zhen Liu, Zhenzhong Wei, Junhua Sun, Meng Xie
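Step 2 above reduces, in essence, to a rigid point-set alignment: corresponding 3D feature points are known in both sensor coordinate frames, and the fixed bar constrains their relative position. Below is a minimal pure-Python sketch of the planar (2D) special case of that fit; the function name and the closed-form 2D solution are illustrative simplifications, not the patent's actual 3D algorithm:

```python
import math

def rigid_transform_2d(pts_a, pts_b):
    """Estimate the rotation angle and translation mapping pts_a onto pts_b.

    pts_a, pts_b: lists of corresponding (x, y) points observed in two
    sensor frames. Closed-form least-squares fit (the 2D analogue of the
    Kabsch algorithm): rotate about the centroids, then translate.
    """
    n = len(pts_a)
    cax = sum(p[0] for p in pts_a) / n
    cay = sum(p[1] for p in pts_a) / n
    cbx = sum(p[0] for p in pts_b) / n
    cby = sum(p[1] for p in pts_b) / n
    # Accumulate the 2x2 cross-covariance terms of the centred points.
    sxx = sxy = syx = syy = 0.0
    for (ax, ay), (bx, by) in zip(pts_a, pts_b):
        ax, ay = ax - cax, ay - cay
        bx, by = bx - cbx, by - cby
        sxx += ax * bx
        sxy += ax * by
        syx += ay * bx
        syy += ay * by
    theta = math.atan2(sxy - syx, sxx + syy)  # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated centroid of A onto the centroid of B.
    tx = cbx - (c * cax - s * cay)
    ty = cby - (s * cax + c * cay)
    return theta, (tx, ty)
```

With points rotated by 90 degrees and shifted by (2, 3), the fit recovers exactly that transform; in 3D the same idea requires an SVD of the 3x3 cross-covariance matrix.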
-
Publication number: 20150049188
Abstract: A portable vehicle alignment system is provided having two base tower assemblies, each having a pedestal, a columnar tower removably attachable to the top of the pedestal, and a camera pod movable along a length of the tower; and a data processor with a wireless communication device for processing image data from the camera pods. Each camera pod includes a camera for capturing image data of a target mounted on a vehicle, and a communication device for wirelessly communicating with the data processor. One pod has a calibration target and the other pod has a calibration camera for capturing images of the calibration target. The pedestals each have a manually-operated clamp for removably fixedly attaching the tower to the pedestal in one of a plurality of positions such that the orientation of the camera pod to the pedestal is angularly adjustable, allowing horizontal rotation of the camera pod.
Type: Application. Filed: August 19, 2014. Publication date: February 19, 2015. Inventors: Rodney HARRELL, Brian K. GRAY, David A. JACKSON, Ronald D. SWAYNE, Darwin Y. CHEN, Bryan C. MINOR
-
Patent number: 8953039
Abstract: An auto-commissioning system provides automatic parameter selection for an intelligent video system based on target video provided by the intelligent video system. The auto-commissioning system extracts visual feature descriptors from the target video and provides the one or more visual feature descriptors associated with the received target video to a parameter database comprising a plurality of entries, each entry including a set of one or more stored visual feature descriptors and associated parameters tailored for the set of stored visual feature descriptors. A search of the parameter database locates one or more best matches between the extracted visual feature descriptors and the stored visual feature descriptors. The parameters associated with the best matches are returned as part of the search and used to commission the intelligent video system.
Type: Grant. Filed: July 1, 2011. Date of Patent: February 10, 2015. Assignee: UTC Fire & Security Corporation. Inventors: Zhen Jia, Jianwei Zhao, Penghe Geng, Ziyou Xiong, Jie Xi, Zhengwei Jiang, Alan Matthew Finn
-
Patent number: 8953036
Abstract: Provided is an information processing apparatus including an image acquisition unit for acquiring a real space image including an image of another apparatus, a coordinate system generation unit for generating a spatial coordinate system of the real space image acquired by the image acquisition unit, and a transmission unit for transmitting spatial information constituting the spatial coordinate system generated by the coordinate system generation unit to the other apparatus sharing the spatial coordinate system.
Type: Grant. Filed: July 3, 2013. Date of Patent: February 10, 2015. Assignee: Sony Corporation. Inventors: Akira Miyashita, Kazuhiro Suzuki, Hiroyuki Ishige
-
Patent number: 8947262
Abstract: A system and method to capture a plurality of images and store the captured images. The system has multiple camera systems capable of transmitting image data. At least one camera system is equipped with an apparatus for determining location coordinates such as GPS. A computer system monitors location coordinates, retrieves image data from the camera systems, and stores the image data into a file. A contiguous array of location coordinates is entered and the computer system locates camera systems within the contiguous array of location coordinates; retrieves image data from the located camera systems; and files the image data taken from each of the camera systems to obtain a file of image data. The system provides the ability to serially interleave frames or video captured from multiple sources.
Type: Grant. Filed: June 17, 2014. Date of Patent: February 3, 2015. Assignee: AT&T Intellectual Property II, L.P. Inventors: Frank Rauscher, Carl E. Werner
-
Patent number: 8896689
Abstract: An apparatus capable of improving the estimation accuracy of information on a subject, including the distance to the subject, is provided. According to an environment recognition apparatus 1 of the present invention, a first cost function is defined as a decreasing function of an object point distance Z. Thus, the longer the object point distance Z, the lower the first cost evaluated for the pixel concerned. This reduces the contribution to the total cost C of the first cost of a pixel highly likely to have a large measurement or estimation error in the object point distance Z. Thereby, the estimation accuracy of a plane parameter q̂ representing the surface position and posture of the subject is improved.
Type: Grant. Filed: December 4, 2012. Date of Patent: November 25, 2014. Assignees: Honda Motor Co., Ltd., Tokyo Institute of Technology. Inventors: Minami Asatani, Masatoshi Okutomi, Shigeki Sugimoto
-
Patent number: 8866901
Abstract: A motion calculation device includes an image-capturing unit configured to capture an image of a range including a plane and output the captured image; an extraction unit configured to extract a region of the plane from the image; a detection unit configured to detect feature points, and motion vectors of the feature points, from a plurality of images captured by the image-capturing unit at a predetermined time interval; and a calculation unit configured to calculate the motion of the host device based on both an epipolar constraint relating to the feature points and a homography relating to the region.
Type: Grant. Filed: January 14, 2011. Date of Patent: October 21, 2014. Assignee: Honda Elesys Co., Ltd. Inventors: Takahiro Azuma, Masatoshi Okutomi, Shigeki Sugimoto
-
Patent number: 8866943
Abstract: A digital camera system including a first video capture unit for capturing a first digital video sequence of a scene and a second video capture unit that simultaneously captures a second digital video sequence that includes the photographer. A data processor automatically analyzes the first digital video sequence to determine a low-interest spatial image region. A facial video sequence including the photographer's face is extracted from the second digital video sequence and inserted into the low-interest spatial image region in the first digital video sequence to form the composite video sequence.
Type: Grant. Filed: March 9, 2012. Date of Patent: October 21, 2014. Assignee: Apple Inc. Inventors: Minwoo Park, Amit Singhal
-
Patent number: 8854457
Abstract: An autonomous computer-based method and system is described for personalized production of videos, such as team sport videos (for example, basketball videos), from multi-sensored data under limited display resolution. Embodiments of the present invention relate to the selection of a view to display from among the multiple video streams captured by the camera network. Technical solutions are provided for perceptual comfort as well as efficient integration of contextual information, implemented, for example, by smoothing generated viewpoint/camera sequences to alleviate flickering visual artifacts and discontinuous story-telling artifacts. A design and implementation of the viewpoint selection process is disclosed and has been verified by experiments, which show that the method and system of the present invention efficiently distribute the processing load across cameras and effectively select viewpoints that cover the team action at hand while avoiding major perceptual artifacts.
Type: Grant. Filed: May 7, 2010. Date of Patent: October 7, 2014. Assignee: Universite Catholique de Louvain. Inventors: Christophe De Vleeschouwer, Fan Chen
-
Publication number: 20140267703
Abstract: A method and apparatus for determining the location and orientation of landmarks in a coordinate space by identifying the landmarks and determining the location, size, and orientation of landmarks within the field of view of one or more cameras. The three-dimensional coordinate location and orientation of the one or more cameras are measured for each camera frame, and the landmark locations and orientations are transformed into actual coordinates of the coordinate space. An identity, location, and orientation of each landmark is determined for each camera frame, and the multiple data values are stored in a database in a computer memory. Final landmark pose data are resolved by mathematically reducing the multiple location and orientation values for each landmark to single values. Landmark pose data are made available to position determination systems, navigation systems, or item tracking systems.
Type: Application. Filed: March 15, 2013. Publication date: September 18, 2014. Inventors: ROBERT M. TAYLOR, ROBERT S. KUNZIG, LEONARD J. MAXWELL
-
Patent number: 8836781
Abstract: Technology is provided for a system and method of providing surrounding information of a vehicle that accurately calculates positions of obstacles around the vehicle. The system includes a plurality of image acquisition units installed in the vehicle at a preset interval; an image acquisition unit selector, which selects at least two image acquisition units of the plurality of image acquisition units and receives image data from the selected image acquisition units; and a control unit, which recognizes an obstacle from the image data received from the image acquisition units, calculates a position of the obstacle, and controls the image acquisition unit selector to select the at least two image acquisition units of the plurality of image acquisition units according to vehicle speed information.
Type: Grant. Filed: December 12, 2011. Date of Patent: September 16, 2014. Assignee: Hyundai Motor Company. Inventors: Jae Pil Hwang, Sung Bo Sim, Eui Yoon Chung
-
Publication number: 20140247345
Abstract: The present invention relates to a system and method for photographing a moving subject by means of multiple cameras, and acquiring the actual movement trajectory of the subject on the basis of the photographed image. One embodiment of the present invention provides a method for acquiring the movement trajectory of a subject, the method comprising: a step for photographing a moving subject by means of multiple cameras; a step for collecting, from each of the cameras, information on multiple images of the subject and the positions of the images on the relevant camera image frames; and a step for acquiring the movement trajectory of the subject on the basis of the information collected.
Type: Application. Filed: September 24, 2012. Publication date: September 4, 2014. Applicant: Creatz Inc. Inventor: Yong Ho Suk
-
Publication number: 20140247344
Abstract: A corresponding point search device includes an acquiring unit, a search unit, and a determination unit. The acquiring unit acquires a first image obtained by imaging a subject with a first imaging apparatus in a focused state, in which the first imaging apparatus is focused on the subject by moving a movable portion; a second image obtained by imaging the subject with a second imaging apparatus; and position information of the movable portion when the first imaging apparatus is in the focused state. The search unit searches, for one image of the first and second images, the other image for a corresponding point that corresponds to a baseline point in the one image. The determination unit determines, based on the position information, the search range in the other image within which the search unit searches for the corresponding point.
Type: Application. Filed: May 10, 2012. Publication date: September 4, 2014. Applicant: KONICA MINOLTA, INC. Inventor: Koji Fujiwara
-
Patent number: 8817099
Abstract: Embodiments described herein comprise a system and method for improving visibility of a roadway using an improved visibility system. The method comprises receiving data from a plurality of fog detectors located proximate a roadway and determining, based on the data from the plurality of fog detectors, that fog is present about the roadway. The method further comprises obtaining, after the determining that fog is present about the roadway, a plurality of images of the roadway by activating a plurality of cameras located proximate the roadway. The method further comprises creating a composite image by combining two or more of the plurality of images, wherein the composite image depicts the roadway unobstructed by fog, and transmitting the composite image to a display device located in a vehicle traveling along the roadway.
Type: Grant. Filed: January 28, 2013. Date of Patent: August 26, 2014. Assignee: International Business Machines Corporation. Inventor: Giuseppe Longobardi
-
Patent number: 8810628
Abstract: An image processing apparatus includes a receiving unit configured to receive an encoded stream, an image capture type, and image capturing order information, the encoded stream being produced by encoding image data of multi-viewpoint images including images from multiple viewpoints that form a stereoscopic image, the image capture type indicating that the multi-viewpoint images have been captured at different timings, the image capturing order information indicating an image capturing order in which the multi-viewpoint images have been captured; a decoding unit configured to decode the encoded stream received by the receiving unit to generate image data; and a control unit configured to control a display apparatus to display multi-viewpoint images corresponding to the image data generated by the decoding unit in the same order as the image capturing order in accordance with the image capture type and image capturing order information received by the receiving unit.
Type: Grant. Filed: September 27, 2010. Date of Patent: August 19, 2014. Assignee: Sony Corporation. Inventors: Teruhiko Suzuki, Yoshitomo Takahashi, Takuya Kitamura
-
Patent number: 8786700
Abstract: A position and orientation measurement apparatus comprises: a distance information obtaining unit adapted to obtain distance information of a target object captured by a capturing unit; a grayscale image obtaining unit adapted to obtain a grayscale image of the target object; a first position and orientation estimation unit adapted to estimate a position and orientation of the target object based on the information of a three-dimensional shape model and the distance information; a second position and orientation estimation unit adapted to estimate a position and orientation of the target object based on a geometric feature of the grayscale image and projection information obtained by projecting, on the grayscale image, the information of the three-dimensional shape model; and a determination unit adapted to determine whether a parameter of the capturing unit needs to be calibrated, based on both the first and second estimated values.
Type: Grant. Filed: August 1, 2011. Date of Patent: July 22, 2014. Assignee: Canon Kabushiki Kaisha. Inventors: Kazuhiko Kobayashi, Shinji Uchiyama
-
Patent number: 8780197
Abstract: A face detection apparatus and method are provided. The apparatus may acquire a distance difference image through a stereo camera and create an object mask using the distance difference image to detect a face candidate area. The apparatus may also determine a size of a search window using the distance difference image and detect a facial area in the face candidate area. Accordingly, an operation speed for face detection can be improved.
Type: Grant. Filed: August 3, 2010. Date of Patent: July 15, 2014. Assignee: Samsung Electronics Co., Ltd. Inventors: Dong-ryeol Park, Yeon-ho Kim
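The window-sizing idea can be illustrated with the standard stereo relations: depth Z = f·B/d and projected face width w = f·W/Z, which combine so that the focal length cancels and the window can be sized directly from disparity. A minimal sketch; the helper name and the assumed face width are mine, not the patent's:

```python
def face_window_size(disparity_px, baseline_m, face_width_m=0.16):
    """Rough pixel width of a face search window from stereo disparity.

    Depth from stereo: Z = f * B / d.  Projected face width: w = f * W / Z.
    Substituting Z cancels the focal length, leaving w = W * d / B, so
    larger disparities (nearer faces) get proportionally larger windows.
    face_width_m is an assumed typical head width, not a patent value.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return face_width_m * disparity_px / baseline_m
```

For example, with a 0.1 m baseline, a 20-pixel disparity suggests roughly a 32-pixel-wide search window.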
-
Patent number: 8773529
Abstract: A projector (1) for imaging an original image on a projection area (5) includes projection optics (2) having an adjustable image width (L) for imaging the original image on the projection area (5). A camera device (6) is used for imaging the projection area (5) and generating a projection area image. A comparison device (12) compares the original image with the projection area image and adjusts the image width (L) of the projection optics (2) as a function of the comparison such that the comparison device (12) is configured to determine the position of the original image in the projection area image. The position of the original image in the projection area image is then used as a measure for determining the image width of the projection optics (2).
Type: Grant. Filed: June 4, 2010. Date of Patent: July 8, 2014. Assignee: Sypro Optics GmbH. Inventor: Axel Kießhauer
-
Patent number: 8760519
Abstract: A method is provided for detecting a threat in a distributed multi-camera surveillance system. The method includes: monitoring movement of an object in a field of view of a first camera using software installed at the first camera; detecting a suspicious object at the first camera when movement of the object does not conform with a motion flow model residing at the first camera; sending a tracking request from the first camera to a second camera upon detecting the suspicious object at the first camera; monitoring movement of the object in a field of view of the second camera using software installed at the second camera; assigning threat scores at the second camera when the movement of the object does not conform with a motion flow model residing at the second camera; and generating an alarm based in part on the threat scores detected at the first camera and the second camera.
Type: Grant. Filed: February 16, 2007. Date of Patent: June 24, 2014. Assignee: Panasonic Corporation. Inventors: Hasan Timucin Ozdemir, Kuo Chu Lee
-
Patent number: 8754785
Abstract: A system and method to capture a plurality of images and store the captured images. The system has multiple camera systems capable of transmitting image data. At least one camera system is equipped with an apparatus for determining location coordinates such as GPS. A computer system monitors location coordinates, retrieves image data from the camera systems, and stores the image data into a file. A contiguous array of location coordinates is entered and the computer system locates camera systems within the contiguous array of location coordinates; retrieves image data from the located camera systems; and files the image data taken from each of the camera systems to obtain a file of image data. The system provides the ability to serially interleave frames or video captured from multiple sources.
Type: Grant. Filed: December 8, 2010. Date of Patent: June 17, 2014. Assignee: AT&T Intellectual Property II, L.P. Inventors: Frank Rauscher, Carl E. Werner
-
Patent number: 8723950
Abstract: An apparatus for evaluating the fit of a modular window assembly into a simulated vehicle body opening includes a base member, a vehicle body opening/sheet metal simulator mounted to the base member, one or more light sources disposed in the vehicle body opening/sheet metal simulator, and one or more devices for securing the vehicle window to the vehicle body opening/sheet metal simulator. A method of utilizing the apparatus is also a part of the invention.
Type: Grant. Filed: June 3, 2011. Date of Patent: May 13, 2014. Assignee: Pilkington Group Limited. Inventor: Brian Hertel
-
Patent number: 8717422
Abstract: Times of receipt of start-of-frame indications associated with frames received from multiple image sensors at a video controller are monitored. Time differences between the times of receipt of the frames are calculated. One or more frame period determining parameter values associated with the image sensors are altered if the time differences equal or exceed frame synchronization hysteresis threshold values. Parameter values are adjusted positively and/or negatively to decrease the time differences. The parameter values may be reset at each image sensor when the time differences become less than the frame synchronization hysteresis threshold value as additional frames are received at the video controller.
Type: Grant. Filed: December 22, 2010. Date of Patent: May 6, 2014. Assignee: Texas Instruments Incorporated. Inventors: Gregory Robert Hewes, Fred William Ware, Jr.
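A hedged sketch of one such hysteresis-gated adjustment step follows; the names and the fixed step size are invented for illustration and are not from the patent:

```python
def adjust_frame_period(period_ms, dt_ms, hysteresis_ms, step_ms):
    """One control step for nudging a sensor's frame period.

    dt_ms is the measured start-of-frame time difference between this
    sensor and the reference sensor (positive means this sensor lags).
    The period is altered only while |dt| meets the hysteresis
    threshold; inside the band the nominal period is kept (or reset).
    """
    if abs(dt_ms) < hysteresis_ms:
        return period_ms            # in sync: leave the period alone
    if dt_ms > 0:
        return period_ms - step_ms  # lagging: shorten period to catch up
    return period_ms + step_ms      # leading: lengthen period to fall back
```

Called once per received frame, repeated small steps walk the sensors toward synchrony without oscillating inside the hysteresis band.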
-
Patent number: 8675047
Abstract: A detection device of a planar area is provided. The detection device includes an image obtaining section for obtaining a left image and a right image; a planar area aligning section for setting given regions of interest in the obtained left image and right image and, through use of a geometric transform function that matches the region of interest of the right image with the region of interest of the left image, performing geometric transform to generate a geometric transform image; and a planar area detecting section for detecting a planar area based on the geometric transform image and the region of interest of the right image. The planar area aligning section sets the planar area detected by the planar area detecting section as a given region of interest.
Type: Grant. Filed: April 26, 2011. Date of Patent: March 18, 2014. Assignee: Hitachi, Ltd. Inventors: Morihiko Sakano, Mirai Higuchi, Takeshi Shima, Shoji Muramatsu
-
Patent number: 8675072
Abstract: A multi-view video camera system (25) for filming a windsurfing sailor during sailing, comprising two video cameras in waterproof compartments (7), with lenses (8) and screw caps (29), attached to a clamp (10) with a tightening screw (9) and wing nut (23) for attachment to the mast (11) of a windsurfing sail rig (22). The compartments (7) are oriented to aim the contained video cameras (24) along each side of the windsurfing sail rig (22) at corresponding locations a windsurfing sailor would occupy while sailing the windsurfing sail rig. Video streams resulting from the contained video cameras could then be combined into a single composite video for analysis.
Type: Grant. Filed: September 7, 2010. Date of Patent: March 18, 2014. Inventor: Sergey G Menshikov
-
Patent number: 8665333
Abstract: The present invention is a method and system for optimizing the observation and annotation of complex human behavior from video sources. Predefined events are automatically detected based on the behavior of people in a first video stream from a first means for capturing images in a physical space; a synchronized second video stream, from a second means for capturing images positioned to observe the people more closely, is accessed using the timestamps associated with the events detected in the first video stream; and an annotator is enabled to annotate each of the events with additional labels using a tool. The present invention captures a plurality of input images of the persons by a plurality of means for capturing images and processes the plurality of input images in order to detect the predefined events based on the behavior in an exemplary embodiment. The processes are based on a novel usage of a plurality of computer vision technologies to analyze the human behavior from the plurality of input images.
Type: Grant. Filed: January 25, 2008. Date of Patent: March 4, 2014. Assignee: VideoMining Corporation. Inventors: Rajeev Sharma, Satish Mummareddy, Emilio Schapira, Namsoon Jung
-
Patent number: 8654196
Abstract: There is provided an image pickup apparatus for use in measuring a distance from an observer to a target. The image pickup apparatus includes a case and a plurality of cameras, fixed in the case, configured to capture images of the target. In the image pickup apparatus, the distance from the observer to the target is measured based on the images of the target captured by altering baseline lengths for parallax computation obtained by combining any two of the cameras.
Type: Grant. Filed: February 11, 2011. Date of Patent: February 18, 2014. Assignee: Ricoh Company, Ltd. Inventor: Soichiro Yokota
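The underlying range equation for any one camera pair is the standard parallax relation Z = f·B/d. A minimal sketch with hypothetical parameter names:

```python
def distance_from_parallax(focal_px, baseline_m, disparity_px):
    """Stereo range from one camera pair: Z = f * B / d.

    focal_px: focal length expressed in pixels; baseline_m: separation
    between the two cameras; disparity_px: parallax of the target
    between the two images.  A longer baseline produces a larger
    disparity at the same range, so combining different camera pairs
    (different baselines) trades near-field coverage against far-field
    precision.
    """
    if disparity_px <= 0:
        raise ValueError("target at infinity or disparity not measurable")
    return focal_px * baseline_m / disparity_px
```

For instance, a 700-pixel focal length, 0.3 m baseline, and 7-pixel disparity give a range of 30 m; halving the baseline halves the disparity for the same target, so the quantization error in range doubles.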
-
Patent number: 8654178
Abstract: A video recording apparatus is configured to record, onto a recording medium, a layered data structure containing data sets. Each of the data sets includes a plurality of channels of data elements. Each of the data elements includes video data and/or the like. An input unit is configured to receive a plurality of channels of video data. The plurality of channels correspond to each other. An information generation unit is configured to generate additional information to be added to each of the data elements in at least two layers of the data structure. An adding unit is configured to add the additional information to each of the data elements so that the data elements included in each of the data sets are associated with each other. A recording unit is configured to record the data structure in which the additional information has been added to each of the data elements onto the recording medium.
Type: Grant. Filed: February 1, 2011. Date of Patent: February 18, 2014. Assignee: Panasonic Corporation. Inventor: Hisataka Ueda
-
Publication number: 20140028838
Abstract: An example embodiment includes an apparatus for monitoring launch parameters of an object. The apparatus includes a transmitter optical subassembly (TOSA), a receiver optical subassembly (ROSA), a processing unit, and a camera. The TOSA includes at least one laser source configured to transmit a laser sheet along an expected flight path of an object. The ROSA is configured to receive light reflected from the object. The processing unit is configured to estimate a velocity of the object based at least partially on the received light. The camera is configured to capture one or more images of the object at a time in which the object passes through a field of view of the camera according to the estimated velocity.
Type: Application. Filed: October 3, 2013. Publication date: January 30, 2014. Applicant: Rapsodo Pte. Ltd. Inventors: Batuhan Okur, Lodiya Radhakrishnan Viyayanand, Kelvin Yeo Soon Keat, Nyan Myo Naing
-
Publication number: 20130342686
Abstract: A method for calibrating a measuring system for measuring a vehicle, including a measuring plane for accommodating a vehicle to be measured and two measuring sensors, each of the measuring sensors having at least two camera systems, includes: positioning at least four measuring panels on the measuring plane; orienting the measuring sensors such that at least one measuring panel is in view of each camera system, and each measuring panel is in view of at least one camera system; carrying out a first measuring step of recording images of the measuring panels; interchanging the two measuring sensors; carrying out a second measuring step of recording images of the measuring panels; determining and comparing positions of the measuring panels recorded in the first and second measuring steps; and calculating at least one correction value from the difference between the positions recorded in the first and second measuring steps.
Type: Application. Filed: July 21, 2011. Publication date: December 26, 2013. Inventors: Christian Wagmann, Volker Uffenkamp
-
Patent number: 8531519
Abstract: Implementations relate to a computer-implemented method and a device for determining a relative pose between devices. The method can include receiving data representing first keypoint features from a first image captured by a camera of a second mobile computing device; capturing, by a camera of a first mobile computing device, a second image, wherein the first image and the second image comprise a substantially common scene having an area of overlap; computing, by the first mobile computing device, data representing second keypoint features from the second image; determining, by the first mobile computing device, based at least in part on the data representing first keypoint features and the data representing second keypoint features, a relative pose of the first mobile computing device and the second mobile computing device; and communicating the relative pose to the second mobile computing device.
Type: Grant. Filed: September 6, 2012. Date of Patent: September 10, 2013. Assignee: Google Inc. Inventors: Yifan Peng, Wei Hua, Hrishikesh Aradhye, Rodrigo Carceroni
-
Patent number: 8525879
Abstract: A depth detection method includes the following steps. First, first and second video data are shot. Next, the first and second video data are compared to obtain initial similarity data including r×c×d initial similarity elements, wherein r, c and d are natural numbers greater than 1. Then, an accumulation operation is performed, with each similarity element serving as a center, according to a reference mask to obtain an iteration parameter. Next, n times of iteration update operations are performed on the initial similarity data according to the iteration parameter to generate updated similarity data. Then, it is judged whether the updated similarity data satisfy a character verification condition. If yes, the updated similarity data is converted into depth distribution data.
Type: Grant. Filed: July 23, 2010. Date of Patent: September 3, 2013. Assignee: Industrial Technology Research Institute. Inventors: Chih-Pin Liao, Yao-Yang Tsai, Jay Huang, Ko-Shyang Wang
-
Patent number: 8520067
Abstract: A method for calibrating a measuring system uses at least one camera for determining the position of an object in a reference three-dimensional coordinate system. The external and internal parameters of the camera are calibrated in various steps, and the position of the camera is determined with the aid of external measuring means in three steps. In the first step, the internal camera parameters are ascertained and fixedly assigned to the internal camera. In the second step, the position of the internal camera in the measuring system is determined. In the third step, the orientation of the internal camera is ascertained in the reference three-dimensional coordinate system by evaluating camera images.
Type: Grant. Filed: February 8, 2006. Date of Patent: August 27, 2013. Assignee: Isra Vision AG. Inventor: Enis Ersue
-
Publication number: 20130182103Abstract: There is disclosed herein a system for automatic configuration of cameras in a Building Information Model (BIM) comprising a programmed BIM processing system. A plurality of input ports are distributed at select locations in the building. A database stores a network location map identifying locations of the input ports. The BIM processing system is operated to detect a camera connected to one of the input ports and read camera image data; determine possible building areas in the camera's field of view based on the location of that input port; extract features from the stored building models for the determined possible building areas; and establish a mapping between the camera image data and the extracted features to determine the actual location of the camera in the building and store camera location data in the database.Type: ApplicationFiled: January 13, 2012Publication date: July 18, 2013Inventors: Mi Suen LEE, Paul Popowski
-
Patent number: 8471722Abstract: A direction indicator system includes: an electromagnetic drive actuator that has a moving part that can slide back and forth, side to side, and diagonally; and a drive controlling unit that controls the sliding direction of the moving part, based on direction indicating information that is supplied from the outside.Type: GrantFiled: January 25, 2008Date of Patent: June 25, 2013Assignee: Fujitsu Component LimitedInventors: Masahiro Kaneko, Satoshi Sakurai, Nobuo Yatsu, Takuya Uchiyama, Yuriko Segawa
-
Patent number: 8442383Abstract: An image capturing apparatus selects one of image capturing conditions to be used for capturing images as a reference condition when a total of image capturing time of one frame in each image capturing condition to be used for capturing images is longer than one frame period at a predetermined frame rate, and captures images at the predetermined frame rate under the reference condition, and captures images at a frame rate lower than the predetermined frame rate under the other image capturing conditions. A playback apparatus detects a motion between frames of a moving image captured under the reference condition when the image capturing condition of the playback moving image is not the reference condition, and generates an interpolation frame for interpolating between frames of the playback moving image based on the detected motion.Type: GrantFiled: August 10, 2011Date of Patent: May 14, 2013Assignee: Canon Kabushiki KaishaInventor: Yoshihisa Furumoto
-
Patent number: 8432443Abstract: There is provided a method of localizing an object comprising projecting an object located on an object plane and a reference point corresponding thereto on a virtual viewable plane and an actual camera plane; estimating coordinates of the reference point; and prescribing a relationship between a location of the object and the coordinates of the reference point.Type: GrantFiled: April 3, 2008Date of Patent: April 30, 2013Assignee: Ajou University Industry Cooperation FoundationInventors: Yun Young Nam, Kyoung Su Park, Jin Seok Lee, Sang Jin Hong, We Duke Cho
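One common way to relate an image point to an object on a known plane, in the spirit of the projection described above, is to intersect the pixel's viewing ray with the ground plane. The sketch below assumes a simple pinhole camera at a known height, pitched down by a known angle; the function and parameter names are hypothetical, not taken from the patent.

```python
import math

def localize_on_ground(u, v, fx, fy, cx, cy, cam_h, tilt):
    """Intersect the viewing ray of pixel (u, v) with a flat ground plane.

    Assumes a pinhole camera at height cam_h (metres), pitched down by
    `tilt` radians, with intrinsics fx, fy, cx, cy.
    Returns (X, Z): lateral offset and forward distance on the ground.
    """
    # Normalized ray direction in the camera frame (z forward, y down).
    x = (u - cx) / fx
    y = (v - cy) / fy
    # Rotate the ray by the camera pitch into the world frame.
    dy = y * math.cos(tilt) + math.sin(tilt)   # downward component
    dz = -y * math.sin(tilt) + math.cos(tilt)  # forward component
    if dy <= 0:
        raise ValueError("ray does not hit the ground")
    t = cam_h / dy                             # ray parameter at ground level
    return x * t, dz * t
```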
-
Patent number: 8427632Abstract: A method of image-based positioning is provided. The method comprises: (A) providing an image-capturing device integrated with a focused-radiation source and a processor; the image-capturing device further comprises an image sensor; (B) capturing an image of an object located in a field of view (FOV) of the image-capturing device by using the image sensor; (C) directing a focused ray of radiation generated by the focused-radiation source to the object located in the FOV of the image-capturing device; (D) detecting at least one return signal generated by reflection of the focused ray of radiation from the object located in the FOV of the image-capturing device by using the image sensor; (E) characterizing the object located in the FOV of the image-capturing device by using each return signal; and (F) processing each return signal in order to determine a distance from the object located in the FOV to the image-capturing device.Type: GrantFiled: December 23, 2009Date of Patent: April 23, 2013Assignee: Trimble Navigation Ltd.Inventors: Phillip T. Nash, Gregory C. Best
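Step (F) can be illustrated with the classic active-triangulation geometry (an assumption for illustration; the patent does not disclose its exact computation): with the beam parallel to the optical axis at a known lateral offset, the spot's pixel displacement from the principal point gives the range by similar triangles.

```python
def triangulation_range(pixel_offset, focal_px, baseline):
    """Distance to a laser spot by simple active triangulation.

    Assumes the beam is parallel to the optical axis at lateral offset
    `baseline` (metres) and the spot images `pixel_offset` pixels from
    the principal point, with focal length `focal_px` in pixels.
    By similar triangles:  D = f * b / p.
    """
    if pixel_offset <= 0:
        raise ValueError("spot at or beyond infinity")
    return focal_px * baseline / pixel_offset
```

Note the characteristic non-linearity: range resolution degrades quadratically with distance, since a one-pixel change corresponds to a larger depth step the farther the spot is.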
-
Patent number: 8421873Abstract: A system composed of a housing and an arm coupled to the housing. The arm supports a first lamp, a second lamp, and an optical sensor.Type: GrantFiled: July 30, 2010Date of Patent: April 16, 2013Assignee: Hewlett-Packard Development Company, L.P.Inventors: Peter Majewicz, Kurt E. Spears, Jennifer L. Melin
-
Patent number: 8421863Abstract: Provided is an in-vehicle image display device capable of providing, from among images of the peripheral area of a vehicle that can change in accordance with the driving state, an image of a part needed by the driver at an appropriate timing so that the driver can recognize the positional relationship between the vehicle and the peripheral area of the vehicle. Images captured with in-vehicle cameras are acquired, and a vehicle periphery image is generated from such images. Then, a collision-warned part of the vehicle that has a possibility of hitting a nearby object is selected based on the vehicle driving state, and the acquired image is processed to generate an enlarged image of the peripheral area of the collision-warned part of the vehicle selected by the collision-warned part selection part. Then, a composite display image, in which the positions of the enlarged image and the vehicle periphery image are displayed in a correlated manner, is generated and displayed.Type: GrantFiled: July 27, 2010Date of Patent: April 16, 2013Assignee: Hitachi, Ltd.Inventors: Ryo Yumiba, Masahiro Kiyohara, Tatsuhiko Monji, Kota Irie
-
Publication number: 20130070048Abstract: The present disclosure uses at least three cameras to monitor even a large-scale area. Displacement and strain are measured in a fast, convenient and effective way. The present disclosure offers the advantages of whole-field measurement, long working distance, and convenience.Type: ApplicationFiled: September 21, 2011Publication date: March 21, 2013Applicant: NATIONAL APPLIED RESEARCH LABORATORIESInventors: Chi-Hung Huang, Yung-Hsiang Chen, Wei-Chung Wang, Tai-Shan Liao
-
Patent number: 8395663Abstract: A positioning system and a method thereof are provided. In the positioning method, first and second pose information of a moving device are obtained by a first positioning device and a second positioning device respectively, wherein the first pose information corresponds to the second pose information. In addition, a plurality of first candidate pose information is generated in an error range of the first pose information. Furthermore, a plurality of second candidate pose information is generated according to the first candidate pose information respectively. The second candidate pose information having the smallest error relative to the second pose information is selected for updating the pose information of the first positioning device and parameter information of the second positioning device. Thereby, pose information of the moving device is updated and parameter information of the second positioning device is calibrated simultaneously.Type: GrantFiled: February 17, 2009Date of Patent: March 12, 2013Assignee: Industrial Technology Research InstituteInventors: Hsiang-Wen Hsieh, Jwu-Sheng Hu, Shyh-Haur Su, Chin-Chia Wu
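The candidate-selection step can be sketched as follows, assuming a simple (x, y, heading) pose and plain Euclidean error as a stand-in for whatever error model and candidate-generation rule the system actually uses; all names are hypothetical.

```python
import random

def pose_error(a, b):
    """Euclidean distance between two (x, y, heading) pose tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def refine_pose(first_pose, second_pose, error_range, n_candidates=200, seed=0):
    """Scatter candidates inside the first sensor's error range and keep
    the one closest to the second sensor's estimate.

    Returns a pose at least as close to `second_pose` as `first_pose` is.
    """
    rng = random.Random(seed)
    best, best_err = first_pose, pose_error(first_pose, second_pose)
    for _ in range(n_candidates):
        cand = tuple(p + rng.uniform(-error_range, error_range)
                     for p in first_pose)
        e = pose_error(cand, second_pose)
        if e < best_err:
            best, best_err = cand, e
    return best
```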
-
Publication number: 20130057682Abstract: A precision motion platform carrying an imaging device under a large-field-coverage lens enables capture of high resolution imagery over the full field in an instantaneous telephoto mode and wide-angle coverage through temporal integration. The device permits automated tracking and scanning without movement of a camera body or lens. Coupled use of two or more devices enables automated range computation without the need for subsequent epipolar rectification. The imager motion enables sample integration for resolution enhancement. The control methods for imager positioning enable decreasing the blur caused by both the motion of the moving imager or the motion of an object's image that the imager is intended to capture.Type: ApplicationFiled: September 14, 2012Publication date: March 7, 2013Applicant: Interval Licensing LLCInventors: Henry H. Baker, John I. Woodfill, Pierre St. Hilaire, Nicholas R. Kalayjian
-
Publication number: 20130050476Abstract: A structured-light measuring method includes a matching process and a computing process. In the matching process, the number and the low-precision depth of a laser point are obtained from the imaging position of the laser point on a first camera (21) according to a first corresponding relationship in a calibration database; the imaging position of the laser point on a second camera (22) is searched according to the number and the low-precision depth of the laser point so as to acquire candidate matching points; and the matching process is completed according to the imaging position on the first camera (21) and its candidate matching points on the second camera (22) so that a matching result is achieved. In the computing process, the imaging position on the second camera (22) matching the imaging position on the first camera (21) is obtained according to the matching result, and the precise position of the laser point is then determined by a second corresponding relationship.Type: ApplicationFiled: May 7, 2010Publication date: February 28, 2013Applicant: Shenzhen Taishan Online Technology, Co., Ltd.Inventors: Danwei Shi, Di Wu, Wenchuang Zhao, Qi Xie
-
Publication number: 20130038723Abstract: An image acquisition apparatus according to the present invention includes a first image capturing unit group including a plurality of image capturing units in which at least part of in-focus distance ranges overlap each other and a second image capturing unit group including a plurality of image capturing units in which at least part of in-focus distance ranges overlap each other. The second image capturing unit group is different from the first image capturing unit group. A first object distance to an object is obtained from image data acquired by the first image capturing unit group and a second object distance different from the first object distance is obtained from image data acquired by the second image capturing unit group.Type: ApplicationFiled: August 6, 2012Publication date: February 14, 2013Applicant: CANON KABUSHIKI KAISHAInventor: Shohei Tsutsumi
-
Publication number: 20130038722Abstract: An image processing apparatus and method includes a light source that beams light toward a subject, a first camera that is spaced apart from the light source by more than a predetermined distance and senses light reflected from the subject, and a calculation unit that generates depth information based on reflected light sensed by the first camera, and corrects distortion of the depth information based on at least one of an angle of view of the first camera, a distance between the light source and the first camera, and a distance between the light source and the subject. When the camera generating the depth information and the light source are spaced apart from each other by a predetermined distance, distorted information caused by a distance difference between the light source and the camera thereby is corrected.Type: ApplicationFiled: November 14, 2011Publication date: February 14, 2013Inventors: Joo Young HA, Hae Jin Jeon, In Taek Song
-
Patent number: 8368759Abstract: There are provided a landmark for recognizing a position of a mobile robot moving in an indoor space and an apparatus and method for recognizing the position of the mobile robot by using the landmark. The landmark includes a position recognition part formed of a mark in any position and at least two marks on an X axis and Y axis centered on the mark and further includes an area recognition part formed of a combination of a plurality of marks to distinguish an individual landmark from others. The apparatus may obtain an image of the landmark by an infrared camera, detect the marks forming the landmark, and detect precise position and area information of the mobile robot from the marks.Type: GrantFiled: March 13, 2007Date of Patent: February 5, 2013Assignee: Research Institute of Industrial Science & TechnologyInventors: Ki-Sung Yoo, Chin-Tae Choi, Hee-Don Jeong
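A sketch of how two detected marks can yield a planar position and heading, under the simplifying assumption of an overhead-style view with a uniform metres-per-pixel scale; the patented landmark uses more marks (and an infrared camera) to also encode area identity, and all names below are hypothetical.

```python
import math

def pose_from_landmark(center_px, xaxis_px, landmark_world, scale):
    """Recover a planar pose from two detected landmark marks.

    Model (an assumed convention): a floor point P relates to its image
    point p by  P = O + R(heading) * (scale * p),  where O is the image
    origin's floor position.  `center_px`/`xaxis_px` are the detected
    centre mark and X-axis mark; `landmark_world` is the centre mark's
    known floor position.  Returns (x, y, heading) of the image origin.
    """
    # Heading: the landmark's X axis appears rotated by -heading in the image.
    dx = xaxis_px[0] - center_px[0]
    dy = xaxis_px[1] - center_px[1]
    heading = -math.atan2(dy, dx)
    # Origin: subtract the rotated, scaled centre-mark offset from its
    # known floor position.
    cx, cy = center_px[0] * scale, center_px[1] * scale
    cos_h, sin_h = math.cos(heading), math.sin(heading)
    ox = landmark_world[0] - (cos_h * cx - sin_h * cy)
    oy = landmark_world[1] - (sin_h * cx + cos_h * cy)
    return ox, oy, heading
```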
-
Publication number: 20130021471Abstract: The subject matter of this specification can be embodied in, among other things, a system that includes a reflective surface configured to reflect light to a target plane and three or more view ports that are optically connected to at least one camera, the view ports arranged in the target plane. A computing system is coupled to the camera and configured to receive image information captured by the view ports. The computing system, based on the image information and a relationship between intensity of light reflected by the reflective surface as captured by a particular view port and a distance of the particular view port to a point on the target plane that is a function of the light reflected from the reflective surface incident on or passing through the target plane, estimates a location on the target plane of the point.Type: ApplicationFiled: July 21, 2011Publication date: January 24, 2013Applicant: GOOGLE INC.Inventors: Tamsyn Peronel Waterhouse, Ross Koningstein
-
Publication number: 20130010081Abstract: Systems and methods are disclosed that determine a mapping between a first camera system's coordinate system and a second camera system's coordinate system; or determine a transformation between a robot's coordinate system and a camera system's coordinate system, and/or locate, in a robot's coordinate system, a tool extending from an arm of the robot based on the tool location in the camera's coordinate system. The disclosed systems and methods may use transformations derived from coordinates of features found in one or more images. The transformations may be used to interrelate various coordinate systems, facilitating calibration of camera systems, including in robotic systems, such as an image-guided robotic systems for hair harvesting and/or implantation.Type: ApplicationFiled: July 8, 2011Publication date: January 10, 2013Inventors: John A. Tenney, Erik R. Burd, Hui Zhang, Robert F. Biro
-
Patent number: 8345096Abstract: The disclosed subject matter relates to a sensor and apparatus for vehicle height and slant measurement which can include a light source, and two cameras with respective lenses. The light source can be configured to emit light towards a road, and both cameras can be configured to receive the image of the road that is illuminated by the light from the light source and to thereby create image data. The apparatus can include a control circuit that can geometrically measure a vehicle height in accordance with the image data. The sensor can also receive image data from different points and from two light sources, and the apparatus can detect a vehicular lean using the different vehicle heights. Thus, because the sensors of the disclosed subject matter do not necessarily include a moving part as in conventional sensors, they can be easily attached to a vehicle body and can also be used for vehicular lean detection.Type: GrantFiled: May 7, 2009Date of Patent: January 1, 2013Assignee: Stanley Electric Co., Ltd.Inventors: Yutaka Ishiyama, Takuya Kushimoto
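The geometric measurement can be illustrated with the standard rectified-stereo range equation and a two-point lean computation (an illustration of the principle, not the patented circuit's exact method; names are hypothetical).

```python
import math

def stereo_distance(disparity_px, focal_px, baseline):
    """Range to a point from a rectified two-camera pair:  Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("point at or beyond infinity")
    return focal_px * baseline / disparity_px

def vehicle_lean(height_a, height_b, separation):
    """Lean angle (radians) from two body-to-road height measurements
    taken `separation` metres apart along the vehicle body."""
    return math.atan2(height_a - height_b, separation)
```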