Placing Generated Data In Real Scene Patents (Class 345/632)
-
Publication number: 20100171758. Abstract: Some embodiments consistent with the present disclosure provide a method for providing customized augmented reality data. The method includes receiving geo-registered sensor data including data captured by a sensor and metadata describing a position of the sensor at the time the data was captured, and receiving geospatial overlay data including computer-generated objects having a predefined geospatial position. The method also includes receiving a selection designating at least one portion of the geo-registered sensor data, said at least one portion of the geo-registered sensor data including some or all of the geo-registered sensor data, and receiving a selection designating at least one portion of the geospatial overlay data, said at least one portion of the geospatial overlay data including some or all of the geospatial overlay data. Type: Application. Filed: March 12, 2010. Publication date: July 8, 2010. Inventors: Paul W. Maassel, Justin Thomas
-
Patent number: 7750901. Abstract: A telestrator system is disclosed that allows a broadcaster to annotate video during or after an event. For example, while televising a sporting event, an announcer (or other user) can use the present invention to draw over the video of the event to highlight one or more actions, features, etc. In one embodiment, when the announcer draws over the video, it appears that the announcer is drawing on the field or location of the event. Such an appearance can be achieved by mapping the pixel locations from the user's drawing to three-dimensional locations at the event. Other embodiments include drawing on the video without obscuring persons and/or other specified objects, and/or smoothing the drawings in real time. Type: Grant. Filed: January 23, 2009. Date of Patent: July 6, 2010. Assignee: Sportvision, Inc. Inventors: Kevin R Meier, Walter Hsiao, James R Gloudemans, Marvin S White, Richard H Cavallaro, Stanley K Honey
-
Patent number: 7747074. Abstract: A system and method for adding decorative images to a plurality of input images allows the decorative images to be easily selected. An image processing device includes a decorative image storage unit configured to store a plurality of decorative images, and a representative color acquisition unit configured to acquire a representative color for each of the input images by analyzing the input images. A decorative image selecting unit selects a decorative image to be added to each of the input images based on the representative color of the input image. An output image generating unit generates an output image by individually synthesizing the decorative images with a same pattern and the input images. Type: Grant. Filed: October 1, 2004. Date of Patent: June 29, 2010. Assignee: Seiko Epson Corporation. Inventors: Hitoshi Yamakado, Yu Gu
-
Publication number: 20100142776. Abstract: A method for displaying colonography images includes presenting a series of oblique images of the colon at sequential locations along the colon centerline. Each image is generally centered on the centerline, presents a field of view generally perpendicular to the centerline, and is oriented with the bottom of the colon down. Type: Application. Filed: January 22, 2008. Publication date: June 10, 2010. Applicant: MAYO FOUNDATION FOR MEDICAL EDUCATION AND RESEARCH. Inventors: C. Daniel Johnson, Michael J. Carston, Robert J. Wentz, Armando Manduca
-
Publication number: 20100134516. Abstract: Video signals which represent a scene as viewed by a camera are processed to combine a computer generated object with the video signals with the effect that the computer generated object appears within the scene when the video signals are displayed. The scene includes a first object. The process includes mapping a virtual model of the first object to a position of the first object within the scene so that the virtual model substantially corresponds with the real object. The virtual model has a degree of transparency such that the virtual model can be rendered as a substantially transparent virtual object within the scene. The process further includes detecting occluded regions of the virtual model. The occluded regions correspond to regions of the virtual model which are hidden from a virtual light source by the computer generated object. Type: Application. Filed: September 29, 2009. Publication date: June 3, 2010. Applicant: Sony Corporation. Inventor: Richard Jared COOPER
-
Patent number: 7728852. Abstract: A virtual object and stylus model as images to be composited to a real space image are rendered by changing background transparencies according to a mutual positional relationship. When the virtual object image is composited to the real space image, the image of the stylus included in the real space image is observed while reflecting the positional relationship with the virtual object. In this way, in an MR image, the depth ordering between real and virtual objects can be correctly and easily expressed. Type: Grant. Filed: March 24, 2005. Date of Patent: June 1, 2010. Assignee: Canon Kabushiki Kaisha. Inventors: Masahiro Suzuki, Kenji Morita
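The depth ordering that the abstract above describes can be illustrated with a minimal per-pixel sketch: make each virtual pixel transparent wherever the real scene (e.g. the stylus) is closer to the camera. This is an illustrative NumPy sketch, not the patent's actual implementation; the function name and depth-buffer inputs are assumptions.

```python
import numpy as np

def composite_with_depth(real_rgb, virtual_rgb, virtual_alpha,
                         real_depth, virtual_depth):
    """Composite a virtual layer over a real image, suppressing virtual
    pixels wherever the real object is nearer to the camera, so the
    real/virtual depth ordering is preserved (hypothetical sketch)."""
    # A virtual pixel is visible only where it is closer than the real scene.
    visible = (virtual_depth < real_depth) & (virtual_alpha > 0)
    out = real_rgb.astype(float).copy()
    a = np.where(visible, virtual_alpha, 0.0)[..., None]  # per-pixel opacity
    out = (1.0 - a) * out + a * virtual_rgb.astype(float)
    return out.astype(np.uint8)
```

With smaller depth meaning "closer", a real stylus held in front of a virtual object punches through it, and behind it the virtual object covers the stylus.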
-
Publication number: 20100091096. Abstract: An index extraction unit detects indices from a sensed image sensed by a sensing unit which senses an image of a physical space on which a plurality of indices is laid out. A convergence arithmetic unit calculates position and orientation information of the sensing unit based on the detected indices. A CG rendering unit generates a virtual space image based on the position and orientation information. A sensed image clipping unit extracts, as a display image, an image in a display target region from the sensed image. An image composition unit generates a composite image by compositing the extracted display image and the generated virtual space image. A display unit displays the composite image. Type: Application. Filed: September 30, 2009. Publication date: April 15, 2010. Applicant: CANON KABUSHIKI KAISHA. Inventors: Makoto Oikawa, Takuya Tsujimoto
-
Publication number: 20100088014. Abstract: A computer process and tool (I_Sys), or information system, are described which permit electronically archiving information related to archaeological discoveries, in order to allow interested parties to easily consult such information and to allow safer, efficient and systematic preservation of such information. In particular, an efficient archaeological information system is described for the analysis, reconstruction, archiving and knowledge of landscapes, structures, and objects which are representations of antiquity. Type: Application. Filed: October 1, 2007. Publication date: April 8, 2010. Inventors: Andrea Carandini, Paolo Carafa
-
Patent number: 7690975. Abstract: The apparatus provides a function 188 which stores, in a first memory for difference, pickup image data from CCD camera 42 based on a predetermined timing; a function 190 which stores, in a second memory for difference, pickup image data from CCD camera 42 based on another timing; a function 192 which obtains a difference between the pickup image data stored in the first memory for difference 24 and the pickup image data stored in the second memory for difference 26; a function 194 which specifies an image having moved based on the difference data; a function 196 which determines whether or not the image having moved is touching the character image; and a function 200 which increases the value of parameters such as experience value, physical energy, and offensive power when it is determined that the image having moved comes into contact with the character image. This makes it possible to expand the range of a card game, which used to be played only in a real space, into a virtual space, and to offer a new game which merges the card game with the virtual space. Type: Grant. Filed: October 4, 2005. Date of Patent: April 6, 2010. Assignee: Sony Computer Entertainment Inc. Inventors: Yusuke Watanabe, Satoru Miyaki, Tukasa Yoshimura
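The difference-and-contact logic above (functions 192, 194, and 196) amounts to frame differencing followed by a bounding-box test. A minimal NumPy sketch, with an illustrative function name and threshold that are assumptions rather than the patent's values:

```python
import numpy as np

def moving_region_touches(frame_a, frame_b, char_box, thresh=30):
    """Difference two grayscale frames captured at different timings,
    keep pixels that changed by more than `thresh` (the moved image),
    and report whether any changed pixel falls inside the character
    image's bounding box (x0, y0, x1, y1)."""
    diff = np.abs(frame_a.astype(int) - frame_b.astype(int))
    moved = diff > thresh                 # pixels belonging to the moved image
    x0, y0, x1, y1 = char_box
    return bool(moved[y0:y1, x0:x1].any())
```

A game loop would call this per frame pair and, on contact, bump the character's parameters (experience, energy, offense).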
-
Patent number: 7692647. Abstract: Real-time rendering of realistic rain is described. In one aspect, image samples of real rain and associated information are automatically modeled in real-time to generate synthetic rain particles in view of respective scene radiances of target video content frames. The synthetic rain particles are rendered in real-time using pre-computed radiance transfer with uniform random distribution across respective frames of the target video content. Type: Grant. Filed: September 14, 2006. Date of Patent: April 6, 2010. Assignee: Microsoft Corporation. Inventors: Zhouchen Lin, Lifeng Wang, Tian Fang, Xu Yang, Xuan Yu, Jian Wang, Xiaoou Tang
-
Publication number: 20100073396. Abstract: A method for producing a photo album includes sorting images according to a primary predetermined criterion, separating the sorted images into a first page group and a second page group using one or more secondary criteria, and automatically selecting a first page layout from a library of page layouts. The first page layout includes a same number of one or more image receiving areas as the number of one or more images in the first page group. The one or more images in the first page group are automatically placed into the one or more image receiving areas in the first page layout. Type: Application. Filed: March 18, 2009. Publication date: March 25, 2010. Inventors: Wiley H. Wang, Russ Ennio Muzzolini, Jennifer Marie Dean, Eugene Chen, Trynne Anne Miller, Su Mien Quek
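The layout-selection step above (a layout whose receiving-area count matches the page group's image count) can be sketched in a few lines. The data shapes and names here are illustrative assumptions, not the patent's:

```python
def pick_layout(layout_library, page_group):
    """Pick the first layout whose number of image-receiving areas
    equals the number of images in the page group, then pair each
    image with an area in order (hypothetical sketch)."""
    n = len(page_group)
    for layout in layout_library:
        if len(layout["areas"]) == n:
            return {"layout": layout["name"],
                    "placements": list(zip(layout["areas"], page_group))}
    return None  # no layout with a matching area count
```

A real system would also rank candidate layouts rather than taking the first match.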
-
Patent number: 7671875. Abstract: In a case where a position and/or orientation of a shooting viewpoint is calculated by using information about image coordinates of markers placed in a scene, the present invention enables a user to easily determine positions of the markers so that the position and orientation can be calculated more accurately. Information about markers placed in a physical space is obtained, and area information about mixing accuracy between a physical space image and a virtual space image is obtained based on the information about the markers, so that a virtual space image is generated in accordance with the obtained area information. Type: Grant. Filed: September 26, 2005. Date of Patent: March 2, 2010. Assignee: Canon Kabushiki Kaisha. Inventors: Mahoro Anabuki, Daisuke Kotake, Shinji Uchiyama
-
Patent number: 7668403. Abstract: A method for producing images is provided. The method involves acquiring images, acquiring data corresponding to the location of the acquired images, and transferring the images and data to a frame grabber. The method also involves combining the images and data within the frame grabber to provide a plurality of imagery products. Type: Grant. Filed: June 28, 2005. Date of Patent: February 23, 2010. Assignee: Lockheed Martin Corporation. Inventor: Scott M. Kanowitz
-
Publication number: 20100026901. Abstract: A computer-implemented method of defining and presenting a presentation including a plurality of scenes and a presenter appearing in the scenes. The method includes the steps of defining a first scene including a geographic background, receiving geographic location information and remote video and/or data information from a remote site, and defining within the first scene a launch area at a location within the geographic background corresponding to the geographic location information. The method further includes associating a destination scene with the defined launch area, obtaining a video image of the presenter, combining the video image of the presenter with the first scene such that the presenter appears in the first scene, and tracking a location of a pointing element controlled by the presenter in the first scene. Type: Application. Filed: February 26, 2009. Publication date: February 4, 2010. Inventors: John S. Moore, Victor W. Marsh, Benjamin T. Zimmerman
-
Publication number: 20100017722. Abstract: Methods of interacting with a mixed reality are presented. A mobile device captures an image of a real-world object where the image has content information that can be used to control a mixed reality object through an offered command set. The mixed reality object can be real, virtual, or a mixture of both real and virtual. Type: Application. Filed: July 20, 2009. Publication date: January 21, 2010. Inventor: Ronald Cohen
-
Publication number: 20090322788. Abstract: An imaging apparatus comprises position information obtaining means, decoration image selection means, and composite image generation means. The position information obtaining means obtains position information indicative of a position where the imaging apparatus is present. The decoration image selection means selects a predetermined decoration image from predetermined storage means based on the position information. The composite image generation means composites the predetermined decoration image selected by the decoration image selection means and a taken image to generate a composite image. Type: Application. Filed: September 15, 2008. Publication date: December 31, 2009. Inventor: Takao Sawano
-
Publication number: 20090315915. Abstract: A device and method of background substitution are disclosed. One or more cameras in a mobile device obtain a depth image. A processor in or external to the device segments the foreground from the background of the image. The original background is removed and a stored background image or video is substituted in place of the original background. The substituted background is altered dependent on the attitude and motion of the device, which is sensed by one or more sensors in the device. A portion of the stored background selected as the substitute background varies in correspondence with the device movement. Type: Application. Filed: June 19, 2008. Publication date: December 24, 2009. Applicant: MOTOROLA, INC. Inventors: GREGORY J. DUNN, TERANCE B. BLAKE
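The pipeline above (depth-based segmentation, then a motion-dependent window into a stored background) can be sketched with NumPy. The threshold-based segmentation and the horizontal-pan model are simplifying assumptions; the real device presumably uses richer motion sensing:

```python
import numpy as np

def substitute_background(rgb, depth, stored_bg, fg_max_depth, offset_x=0):
    """Keep pixels whose depth marks them as foreground (the subject);
    replace the rest with a window into a wider stored background,
    shifted by `offset_x` pixels to follow sensed device motion."""
    h, w = depth.shape
    window = stored_bg[:, offset_x:offset_x + w]   # pan with the device
    fg = depth <= fg_max_depth                     # near pixels = subject
    return np.where(fg[..., None], rgb, window)
```

As the device pans, `offset_x` slides the window so the substituted background appears stationary behind the subject.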
-
Publication number: 20090303251. Abstract: Systems, apparatuses and methods for displaying geo-located imagery are described. The systems may utilize multiple independent quadtree data structures to organize and display geo-located imagery from a variety of sources. A user of the systems described herein may select which geo-located imagery quadtree sets are to be displayed. The user may further select the priority order for the multiple geo-located imagery quadtree sets. The systems may include remote tile servers in communication over the Internet with a local tile server and client at the user's location. The multiple geo-located imagery quadtree sets may include imagery information organized by time of capture, method of capture, and/or source. The imagery may, for example, include photographic imagery, map imagery, and/or charts. The imagery may be supplied by a variety of sources including users and third party imagery providers. Type: Application. Filed: June 10, 2008. Publication date: December 10, 2009. Inventors: Andras Balogh, Tom Blind, Tom Churchill, Vincent Fiano
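Quadtree tile addressing of the kind described above is commonly implemented with "quadkeys": one base-4 digit per zoom level, so a tile's key is also its path from the quadtree root (the convention sketched here follows the widely used Web-Mercator tiling scheme; it is an illustration, not necessarily this publication's encoding):

```python
def tile_to_quadkey(tx, ty, zoom):
    """Encode tile coordinates (tx, ty) at a zoom level as a quadtree
    key: at each level the x bit contributes 1 and the y bit 2, giving
    one digit in {0,1,2,3} per level, most significant level first."""
    digits = []
    for level in range(zoom, 0, -1):
        mask = 1 << (level - 1)
        digit = (1 if tx & mask else 0) + (2 if ty & mask else 0)
        digits.append(str(digit))
    return "".join(digits)
```

A key prefix names an ancestor tile, which is what makes quadkeys convenient for organizing and prioritizing multiple independent imagery sets.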
-
Publication number: 20090304245. Abstract: A method is described for structurally individualized simulation of the introduction of a wall support element into a section of a tubular structure. To this end, image data of the interior of the section of the tubular structure are provided. A start point and an end point of the section of the tubular structure are then determined, and a lumen and a profile line of the tubular structure are determined between the start point and the end point. Furthermore, an individual elastic structure model for the section of the tubular structure is identified by adapting a tubular elastic initial model to the section of the tubular structure on the basis of the identified lumen and the profile line, and a tubular elastic wall support element model which is positioned inside the individual structure model is provided. Type: Application. Filed: June 5, 2009. Publication date: December 10, 2009. Inventors: Jan Egger, Bernd Freisleben, Stefan Grosskopf, Carlos Leber
-
Publication number: 20090303024. Abstract: An image processing apparatus includes: a guide superimposition portion that obtains a photographed image taken by an imaging device from the imaging device incorporated in a vehicle and superimposes a guide on the photographed image; and a specific region detection portion that detects a specific region which is able to be included in the photographed image. If the specific region is detected by the specific region detection portion, the guide superimposition portion does not superimpose the guide on the specific region. Type: Application. Filed: June 4, 2009. Publication date: December 10, 2009. Applicant: Sanyo Electric Co., Ltd. Inventor: Keisuke ASARI
-
Patent number: 7626596. Abstract: In an image reproducing method capable of displaying annotations in virtual space constituted by a group of real panoramic images, if an object for which an annotation is to be displayed falls within the field of view at the position of the observer, the object is adopted as the display position of the annotation. If the object is not within the field of view, then the direction of forward travel of the observer is adopted as the annotation display position. If the object is in a state of transition, then the annotation display position is decided by interpolation. Type: Grant. Filed: November 16, 2004. Date of Patent: December 1, 2009. Assignee: Canon Kabushiki Kaisha. Inventors: Daisuke Kotake, Akihiro Katayama, Yukio Sakagawa, Takaaki Endo, Masahiro Suzuki
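The placement rule above (annotate the object if it is in view, otherwise fall back to the travel direction) reduces to a field-of-view test on bearings. A minimal 2D sketch, with an assumed planar geometry and illustrative names; the interpolation for the transition case is omitted:

```python
import math

def annotation_direction(observer_pos, travel_heading, obj_pos, fov_deg=60.0):
    """Return the bearing (degrees) at which to draw an annotation:
    toward the object if it lies within the observer's field of view
    around the travel heading, otherwise along the travel direction."""
    dx = obj_pos[0] - observer_pos[0]
    dy = obj_pos[1] - observer_pos[1]
    bearing = math.degrees(math.atan2(dy, dx)) % 360.0
    # signed angular offset between bearing and heading, in (-180, 180]
    off = (bearing - travel_heading + 180.0) % 360.0 - 180.0
    if abs(off) <= fov_deg / 2.0:
        return bearing            # object visible: anchor on the object
    return travel_heading         # fall back to the direction of travel
```

For the transition case, the two candidate bearings would be blended over a few frames instead of switched abruptly.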
-
Publication number: 20090284552. Abstract: Methods and systems for operating an avionics system are provided. A predefined set of movements of a headset is detected. In response to the detection of the set of movements, one or more various functions are performed. Type: Application. Filed: May 19, 2008. Publication date: November 19, 2009. Applicant: Honeywell International Inc. Inventors: Brent D. Larson, John G. Suddreth, Frank Cupero
-
Patent number: 7619626. Abstract: The present invention provides systems and methods that provide images of an environment to the viewpoint of a display. The systems and methods define a mapping surface, at a distance from the image source and display, that approximates the environment within the field of view of the image source. The systems and methods define a model that relates the different geometries of the image source, display, and mapping surface to each other. Using the model and the mapping surface, the systems and methods tile images from the image source, correlate the images to the display, and display the images. In instances where two image sources have overlapping fields of view on the mapping surface, the systems and methods overlap and stitch the images to form a mosaic image. If two overlapping image sources each have images with unique characteristics, the systems and methods fuse the images into a composite image. Type: Grant. Filed: March 1, 2003. Date of Patent: November 17, 2009. Assignee: The Boeing Company. Inventor: Kenneth L. Bernier
-
Patent number: 7609847. Abstract: Systems and methods according to the present invention provide techniques to automatically generate an object layout. Various candidate placement positions are evaluated by computing values associated with placing the object at the placement positions. Cost functions associated with contrast, saliency and/or sharpness can be used to evaluate the desirability of each candidate placement position. Type: Grant. Filed: November 23, 2004. Date of Patent: October 27, 2009. Assignee: Hewlett-Packard Development Company, L.P. Inventors: Simon Widdowson, Xiaofan Un
-
Patent number: 7605961. Abstract: Hologram production techniques can combine source data representing realistic information describing an environment with source data providing representational information describing a feature of the environment and/or some object interacting with the environment. The combined data is used to produce holograms, and particularly holographic stereograms including features that enhance the visualization of the environment depicted in the hologram. A haptic interface can be used in conjunction with such holograms to further aid use of the hologram, and to provide an interface to secondary information provided by a computer and related to the images depicted in the hologram. Type: Grant. Filed: March 13, 2007. Date of Patent: October 20, 2009. Assignee: Zebra Imaging, Inc. Inventors: Michael A. Klug, Mark E. Holzbach, Craig Newswanger
-
Patent number: 7605826. Abstract: There is provided a method for augmented reality guided instrument positioning. At least one graphics proximity marker is determined for indicating a proximity of a predetermined portion of an instrument to a target. The at least one graphics proximity marker is rendered such that the proximity of the predetermined portion of the instrument to the target is ascertainable based on a position of a marker on the instrument with respect to the at least one graphics proximity marker. Type: Grant. Filed: March 27, 2001. Date of Patent: October 20, 2009. Assignee: Siemens Corporate Research, Inc. Inventor: Frank Sauer
-
Publication number: 20090256860. Abstract: System and method for presenting a visual image of a work site for an earthmoving machine. In one embodiment, target design data for the work site may be received. A spatial location and orientation for an earthmoving machine operating in relation to the work site may also be received. A visual image of at least a portion of the work site may be received from an imaging device mounted to the earthmoving machine. A visual image of the portion of the work site may be displayed with a subset of the design data overlaying the visual image, wherein the subset of the design data relates to the portion of the work site. Type: Application. Filed: April 11, 2008. Publication date: October 15, 2009. Applicant: Caterpillar Trimble Control Technologies LLC. Inventor: Mark Edward Nichols
-
Publication number: 20090251460. Abstract: Described are systems and methods for incorporating the reflection of a user and surrounding environment into a graphical user interface ("GUI"). The resulting reflective user interface helps merge the real world of the user with the artificial world of the computer GUI. A video of the user and the surrounding environment is taken using a video capture device such as a web camera, and the video images are manipulated to create a reflective effect that is incorporated into elements of the GUI to create a reflective user interface. The reflective effect is customized for each different element of the GUI and varies by the size, shape and material depicted in the element. The reflective effect also includes incorporation of shadows and highlights into the reflective user interface, including shadows that are responsive to simulated or actual light sources in the surrounding environment. Type: Application. Filed: April 4, 2008. Publication date: October 8, 2009. Applicant: FUJI XEROX CO., LTD. Inventor: Anthony Dunnigan
-
Patent number: 7589747. Abstract: A mixed reality space image generation method for generating a mixed reality space image formed by superimposing a virtual space image onto a real space image obtained by capturing a real space, includes a first image superimposing step of superimposing a first virtual space image onto a real space image based on an occlusion by a real object, wherein the first virtual space image is an outer appearance of the real object, and obtaining information of location and orientation of the real object. In addition, a second virtual image is set based on the information of location and orientation of the real object, and a second image superimposing step superimposes the second virtual space image onto the superimposed image generated in the first image superimposing step without considering the occlusion, wherein the second virtual space image is an annotation associated with the real object. Type: Grant. Filed: September 29, 2004. Date of Patent: September 15, 2009. Assignee: Canon Kabushiki Kaisha. Inventors: Taichi Matsui, Masahiro Suzuki
-
Publication number: 20090201311. Abstract: Methods and apparatus for generating a searchable electronic record of a locate operation in which one or more physical locate marks are applied by a technician to identify a presence or an absence of at least one underground facility within a dig area. A digital image of a geographic area comprising the dig area is electronically received, and at least a portion of the received digital image is displayed on a display device. One or more digital representations of the physical locate mark(s) applied by the locate technician are added to the displayed digital image so as to generate a marked-up digital image. Information relating to the marked-up digital image is electronically transmitted and/or electronically stored so as to generate the searchable electronic record of the locate operation. Type: Application. Filed: January 30, 2009. Publication date: August 13, 2009. Inventors: Steven Nielsen, Curtis Chambers
-
Patent number: 7574070. Abstract: An image combining method for combining an image obtained by image sensing real space with a computer-generated image and displaying the combined image. Mask area color information is determined based on a first real image including an object as the subject of mask area and a second real image not including the object, and the color information is registered. The mask area is extracted from the real image by using the registered mask area color information, and the real image and the computer-generated image are combined by using the mask area. Type: Grant. Filed: September 24, 2004. Date of Patent: August 11, 2009. Assignee: Canon Kabushiki Kaisha. Inventors: Kaname Tanimura, Toshikazu Ohshima
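The two-frame color registration above can be sketched directly: pixels that differ between the frame with the object and the frame without it yield a color range, and that range later masks where the CG image must not be drawn. A simplified NumPy illustration (a per-channel min/max range with a margin is an assumption; the patent's color model may differ):

```python
import numpy as np

def register_mask_color(with_object, without_object, margin=10):
    """Derive a per-channel color range for the mask area from a frame
    containing the object and one without it: pixels that changed
    between the two frames belong to the object."""
    changed = np.any(with_object != without_object, axis=-1)
    colors = with_object[changed].astype(int)
    return colors.min(axis=0) - margin, colors.max(axis=0) + margin

def composite_with_mask(real, cg, lo, hi):
    """Draw the CG image only where the real frame is NOT inside the
    registered mask-color range, so the real object stays in front."""
    in_range = np.all((real >= lo) & (real <= hi), axis=-1)
    return np.where(in_range[..., None], real, cg)
```

Registration runs once offline; compositing then runs per frame using only the stored (lo, hi) range.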
-
Publication number: 20090195557. Abstract: An index detection unit (1040) detects the coordinate values of indices in a sensed image sensed by an image sensing device (1020) attached with an orientation sensor (1010). A contribution degree calculation unit (1070) acquires contribution degrees according to a frequency at which the image sensing device (1020) is located to have an orientation indicated by orientation information included in the position and orientation information of the image sensing device (1020) at the time of image sensing. A data management unit (1060) generates sets of the coordinate values and orientation information measured by the orientation sensor (1010) at the time of sensing of the sensed image for respective indices. A calibration information calculation unit (1090) calculates an orientation of the orientation sensor (1010) with respect to the image sensing device (1020) using the position and orientation information, parameter values, and the sets generated for the respective indices. Type: Application. Filed: January 27, 2009. Publication date: August 6, 2009. Applicant: CANON KABUSHIKI KAISHA. Inventor: Naohito Nakamura
-
Patent number: 7567262. Abstract: A computer in the form of a client comprises a graphical user interface and a memory, the memory including a first data object for storing the feature data in a first format, and a second data object for storing the image data in a second format. The client computer is configured to send at least one query; receive at least one of the feature data and the image data in response to the at least one query; and display the feature data in the first format and the image data in the second format in the graphical user interface. Type: Grant. Filed: February 25, 2005. Date of Patent: July 28, 2009. Assignee: IDV Solutions LLC. Inventors: Ian Clemens, Riyaz Prasla, Justin Hoffman
-
Patent number: 7567261. Abstract: Systems and methods that provide graphics using a graphical engine are provided. In one example, a system may provide layered graphics in a video environment. The system may include a bus, a graphical engine and a graphical pipeline. The graphical engine may be coupled to the bus and may be adapted to composite a plurality of graphical layers into a composite graphical layer. The graphical engine may include a memory that stores the composite graphical layer. The graphical pipeline may be coupled to the bus and may be adapted to transport the composite graphical layer. Type: Grant. Filed: November 7, 2007. Date of Patent: July 28, 2009. Assignee: Broadcom Corporation. Inventors: David A. Baer, Darren Neuman
-
Patent number: 7564469. Abstract: Methods of interacting with a mixed reality are presented. A mobile device captures an image of a real-world object where the image has content information that can be used to control a mixed reality object through an offered command set. The mixed reality object can be real, virtual, or a mixture of both real and virtual. Type: Grant. Filed: January 10, 2008. Date of Patent: July 21, 2009. Assignee: Evryx Technologies, Inc. Inventor: Ronald Cohen
-
Publication number: 20090153587. Abstract: A mixed reality system includes a camera for providing captured image information in an arbitrary work environment; a sensing unit for providing sensed information based on operation of the camera; a process simulation unit for performing simulation on part/facility/process data of the arbitrary work environment, which is stored in a process information database (DB); a process allocation unit for handling allocation status between the data and simulation information; a mixed reality visualization unit for receiving the captured information and the sensed information, determining a location of the process allocation unit, combining the captured information and sensed information with the simulation information, and then outputting resulting information; and a display-based input/output unit for displaying mixed reality output information from the mixed reality visualization unit and inputting information requested by a user. Further, there is provided a method of implementing the same. Type: Application. Filed: December 12, 2008. Publication date: June 18, 2009. Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE. Inventors: Hyun KANG, Gun Lee, Wookho Son
-
Patent number: 7540866. Abstract: An interface for remotely controlling a medical device in a patient's body provides a two dimensional display of a three dimensional rendering of the operating region, and allows the user to select the orientation or location of the distal end of the medical device on the display and then operate a navigation system to cause the distal end of the medical device to approximately assume the selected orientation or location. Type: Grant. Filed: June 6, 2005. Date of Patent: June 2, 2009. Assignee: Stereotaxis, Inc. Inventors: Raju R. Viswanathan, Walter M. Blume, Jeffrey M. Garibaldi, John Rauch
-
Patent number: 7538782. Abstract: A three-dimensional CG manipulation apparatus has a real object, a position and orientation sensor provided to the real object, a support base, a position and orientation sensor provided to the support base, a head-mounted display, a position and orientation sensor provided to the head-mounted display, a sensor information acquisition unit for acquiring information from the sensors, a state determination unit for determining the current state of the real object and support base, a CG data management unit for managing CG data to be displayed, and an image generation unit for generating an image to be displayed on the head-mounted display. In a mixed reality system that presents an image formed by superimposing a virtual CG image on a real object to the user, a manipulation such as replacement or the like can be easily made for the virtual CG image without interrupting the user's operation. Type: Grant. Filed: October 1, 2004. Date of Patent: May 26, 2009. Assignee: Canon Kabushiki Kaisha. Inventors: Tsuyoshi Kuroki, Taichi Matsui
-
Patent number: 7511721. Abstract: In a radiographic image connection processing method of connecting plural partial radiographic images, the partial radiographic images are selected, and if appropriate the operation mode is changed, e.g., from an image position adjustment mode, for adjusting a position of the selected partial radiographic image, to an image measurement mode, for measuring the selected partial radiographic image, or vice versa. The selected partial radiographic images are connected. Thus, it is possible to achieve image position adjustment and image measurement during radiographic image connection, and it is also possible to achieve smooth performance of the operations for image position adjustment and image measurement. Type: Grant. Filed: August 31, 2004. Date of Patent: March 31, 2009. Assignee: Canon Kabushiki Kaisha. Inventor: Koji Takekoshi
-
Patent number: 7509570Abstract: The object of the invention is to easily and simply output a document including an image in a desired layout. A script generation device 10 generates a script used to control a layout of a printer 20, and transfers the generated script to the printer 20. The script includes multiple drawing control commands that individually adjust output positions with regard to a plurality of images to be output in a preset output range. The sequence of arrangement of the multiple drawing control commands specifies the overlapping state of the plurality of images. The script of this construction enables the user to readily define and change the overlapping state and thereby attain a desired layout.Type: GrantFiled: October 2, 2002Date of Patent: March 24, 2009Assignee: Seiko Epson CorporationInventor: Hideyuki Narusawa
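The ordering idea in the abstract above (the sequence of drawing commands in the script determines which image ends up on top) is essentially the painter's algorithm. A minimal sketch, with an invented command format not taken from the patent:

```python
# Painter's-algorithm sketch: later commands in the script overwrite
# earlier ones wherever their output regions overlap. The command tuple
# format (x, y, width, height, label) is illustrative, not from the patent.

def render_script(commands, width, height):
    """Render drawing commands onto a blank canvas in script order."""
    canvas = [[None] * width for _ in range(height)]
    for (x0, y0, w, h, label) in commands:  # sequence position decides stacking
        for y in range(y0, min(y0 + h, height)):
            for x in range(x0, min(x0 + w, width)):
                canvas[y][x] = label  # a later command overwrites an earlier one
    return canvas

canvas = render_script(
    [(0, 0, 3, 3, "A"),   # drawn first, ends up underneath
     (1, 1, 3, 3, "B")],  # drawn second, covers A where they overlap
    width=4, height=4)
```

Reordering the two commands in the list flips which image is visible in the overlap, which is exactly how the script "readily defines and changes the overlapping state".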
-
Patent number: 7505620Abstract: In a method for the monitoring of a monitored zone next to and/or in an apparatus having at least one driven movable part, video images are used which were detected in time sequence by at least two video cameras whose fields of view overlap at least partly in an overlap region in the monitored zone. On the basis of the first video images detected by a first one of the video cameras, a first recognition and/or tracking of virtual objects is carried out and, on the basis of the second video images detected by a second one of the video cameras, a second recognition and/or tracking of virtual objects is carried out, the virtual objects corresponding to real objects at least in the overlap region.Type: GrantFiled: April 12, 2005Date of Patent: March 17, 2009Assignee: Sick AGInventors: Ingolf Braune, Jörg Grabinger
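One way to use the two independent recognitions the abstract describes is to cross-check them inside the shared overlap region. A minimal sketch, assuming world-coordinate 2D detections and an invented distance threshold:

```python
# Sketch of fusing object detections from two overlapping camera views:
# detections produced independently by each camera are matched by
# proximity inside the overlap region. Thresholds are assumptions.

import math

def fuse_tracks(tracks_a, tracks_b, overlap, max_dist=1.0):
    """Pair detections from the two cameras that fall inside the overlap
    rectangle (xmin, xmax, ymin, ymax) and lie within max_dist of each other."""
    xmin, xmax, ymin, ymax = overlap
    def inside(p):
        return xmin <= p[0] <= xmax and ymin <= p[1] <= ymax
    fused = []
    for a in tracks_a:
        if not inside(a):
            continue
        for b in tracks_b:
            if inside(b) and math.dist(a, b) <= max_dist:
                # average the two estimates for a more reliable position
                fused.append(((a[0] + b[0]) / 2, (a[1] + b[1]) / 2))
                break
    return fused

pairs = fuse_tracks([(2.0, 2.0), (9.0, 9.0)],  # camera A detections
                    [(2.2, 2.1)],              # camera B detections
                    overlap=(0, 5, 0, 5))
```

A detection seen by only one camera (here the one at (9, 9), outside the overlap) is not fused, which is one way such a monitoring system can distinguish confirmed objects from single-view observations.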
-
Publication number: 20090066725Abstract: An information-processing apparatus determines whether a stimulation generation unit and a background virtual object contact each other based on position and orientation information about the stimulation generation unit and position and orientation information about the background virtual object. If it is determined that the stimulation generation unit and the background virtual object contact each other, the information-processing apparatus determines whether the stimulation generation unit is included within an attention range. The information-processing apparatus generates operation setting information for controlling an operation of the stimulation generation unit according to a result of the determination and outputs the generated operation setting information to the stimulation generation unit.Type: ApplicationFiled: September 9, 2008Publication date: March 12, 2009Applicant: CANON KABUSHIKI KAISHAInventors: Atsushi Nogami, Naoki Nishimura
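The two determinations in the abstract above (contact with the background virtual object, then membership in the attention range) can be sketched as a distance test plus a gaze-cone test. All thresholds and the cone formulation are assumptions for illustration:

```python
# Illustrative sketch of the two checks the abstract describes:
# (1) does the stimulation generation unit contact the background virtual
# object, and (2) is the unit inside the user's attention range?
# contact_radius and the cone half-angle are invented values.

import math

def contacts(unit_pos, object_pos, contact_radius=0.05):
    """Contact test: positions closer than contact_radius count as touching."""
    return math.dist(unit_pos, object_pos) <= contact_radius

def in_attention_range(unit_pos, gaze_origin, gaze_dir, half_angle_deg=20.0):
    """Is the unit within a cone around the gaze direction (gaze_dir unit-length)?"""
    v = [u - g for u, g in zip(unit_pos, gaze_origin)]
    norm = math.sqrt(sum(c * c for c in v))
    if norm == 0:
        return True
    cos_angle = sum(a * b for a, b in zip(v, gaze_dir)) / norm
    return cos_angle >= math.cos(math.radians(half_angle_deg))

def stimulation_setting(unit_pos, object_pos, gaze_origin, gaze_dir):
    """Emit a stronger setting when contact happens inside the attention cone."""
    if not contacts(unit_pos, object_pos):
        return "off"
    return "strong" if in_attention_range(unit_pos, gaze_origin, gaze_dir) else "weak"
```

The returned string stands in for the "operation setting information" the apparatus would output to the stimulation generation unit.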
-
Publication number: 20090058878Abstract: A multi-view imaging system which allows efficient and accurate adjustment of optical axes and the like of imaging units is disclosed. Multiple images acquired by imaging a subject with multiple cameras are subjected to live view image processing to generate multiple live view images. The generated live view images are displayed in a superimposed manner on a display unit, and a vertical guideline extending in a vertical direction of the display unit and a horizontal guideline extending in a horizontal direction of the display unit are displayed on the display unit.Type: ApplicationFiled: August 29, 2008Publication date: March 5, 2009Applicant: FUJIFILM CorporationInventor: Mikio SASAGAWA
-
Publication number: 20090051682Abstract: Methods and apparatus for production of composite images, videos, or films which exhibit virtual objects. More particularly, methods and apparatus for rendering, scaling, and/or locating virtual objects within composite images, videos, or films employing marker objects as reference objects.Type: ApplicationFiled: May 27, 2008Publication date: February 26, 2009Inventor: Werner Gerhard Lonsing
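The core of marker-as-reference scaling is that a marker of known physical size fixes the pixels-per-metre ratio at its depth in the scene. A minimal sketch under that assumption (the function names and numbers are invented):

```python
# Sketch of marker-based scaling: a marker of known physical size is
# detected in the image, and the ratio of its apparent (pixel) size to
# its physical size gives the scale at which a virtual object placed at
# the same depth should be rendered. Illustrative only.

def marker_scale(marker_px, marker_physical_m):
    """Pixels per metre at the marker's depth in the scene."""
    return marker_px / marker_physical_m

def virtual_object_px(object_physical_m, marker_px, marker_physical_m):
    """Render width in pixels for a virtual object located at marker depth."""
    return object_physical_m * marker_scale(marker_px, marker_physical_m)

# A 0.10 m marker that appears 50 px wide implies 500 px/m, so a 0.30 m
# virtual object at the same depth should be drawn about 150 px wide.
size = virtual_object_px(0.30, marker_px=50, marker_physical_m=0.10)
```

Locating works the same way in reverse: the marker's image position plus this scale converts between pixel offsets and physical offsets when placing the virtual object.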
-
Patent number: 7479967Abstract: The present invention relates to a method and an apparatus for combining virtual reality and real-time environment. The present invention provides a system that combines captured real-time video data and real-time 3D environment rendering to create a fused (combined) environment. The system captures video imagery and processes it to determine which areas should be made transparent (or have other color modifications made), based on sensed cultural features and/or sensor line-of-sight. Sensed features can include electromagnetic radiation characteristics (i.e. color, infra-red, ultra-violet light). Cultural features can include patterns of these characteristics (i.e. object recognition using edge detection). This processed image is then overlaid on a 3D environment to combine the two data sources into a single scene. This creates an effect where a user can look through ‘windows’ in the video image into a 3D simulated world, and/or see other enhanced or reprocessed features of the captured image.Type: GrantFiled: April 11, 2005Date of Patent: January 20, 2009Assignee: Systems Technology Inc.Inventors: Edward N. Bachelder, Noah Brickman
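The "windows in the video" effect above reduces, per pixel, to choosing between the captured frame and the simulated background. A minimal sketch using a keyed color as the sensed characteristic (the key color and frame layout are assumptions, not from the patent):

```python
# Minimal sketch of compositing a captured video frame over a rendered
# 3D scene: pixels matching a sensed characteristic (here, a key color)
# are treated as transparent so the simulated world shows through.

def composite(video_frame, rendered_3d, key_color):
    """Per-pixel composite: keyed video pixels reveal the 3D render."""
    out = []
    for vid_row, sim_row in zip(video_frame, rendered_3d):
        out.append([sim if vid == key_color else vid
                    for vid, sim in zip(vid_row, sim_row)])
    return out

KEY = (0, 255, 0)  # hypothetical "transparent" key color
frame = [[(10, 10, 10), KEY],
         [KEY, (20, 20, 20)]]
world = [[(1, 1, 1), (2, 2, 2)],
         [(3, 3, 3), (4, 4, 4)]]
merged = composite(frame, world, KEY)
```

In the patent's terms the key test would be replaced by the sensed cultural features or line-of-sight analysis; the per-pixel selection that follows is the same.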
-
Patent number: 7474318Abstract: An interactive system for interacting with a device in a mixed reality environment, the system comprising an object having at least two surfaces, each surface having a marker, an image capturing device to capture images of the object in a first scene, and a microprocessor configured to track the position and orientation of the object in the first scene by identifying a marker. In addition, the microprocessor is configured to respond to manipulation of the object by causing the device to perform an associated operation.Type: GrantFiled: May 28, 2004Date of Patent: January 6, 2009Assignee: National University of SingaporeInventors: Zhi Ying Zhou, Adrian David Cheok, Jiun Horng Pan
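Since each surface of the tangible object carries its own marker, showing a different face to the camera selects a different operation. A sketch of that marker-to-operation dispatch, with marker IDs and operations invented for illustration:

```python
# Sketch of mapping a detected marker (i.e., the object face currently
# visible to the camera) to a device operation. Marker IDs, operations,
# and the device-state dict are all invented for illustration.

MARKER_TO_OPERATION = {
    "face_1": "power_on",
    "face_2": "power_off",
    "face_3": "volume_up",
}

def handle_detection(marker_id, device_state):
    """Respond to the visible marker by applying the associated operation."""
    op = MARKER_TO_OPERATION.get(marker_id)
    if op == "power_on":
        device_state["on"] = True
    elif op == "power_off":
        device_state["on"] = False
    elif op == "volume_up" and device_state.get("on"):
        device_state["volume"] = device_state.get("volume", 0) + 1
    return device_state

state = handle_detection("face_1", {"on": False, "volume": 0})  # show face 1
state = handle_detection("face_3", state)                       # then face 3
```

An unknown marker simply leaves the state unchanged, so spurious detections do not trigger operations.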
-
Publication number: 20090002394Abstract: Methods and systems are provided for augmenting image data (e.g., still image data or video image data) utilizing image context data to generate panoramic images. In accordance with embodiments hereof, a position and orientation of received image data is utilized to identify image context data (e.g., three-dimensional model data, two-dimensional image data, and/or 360° image data from another source) rendered based upon the same position or a nearby position relative to the image data and the image data is augmented utilizing the identified context data to create a panoramic image. The panoramic image may then be displayed (e.g., shown on a LCD/CRT screen or projected) to create a user experience that is more immersive than the original image data could create.Type: ApplicationFiled: June 29, 2007Publication date: January 1, 2009Applicant: MICROSOFT CORPORATIONInventors: Billy P. Chen, Eyal Ofek
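The selection step (identify context data rendered from the same or a nearby position) is at heart a nearest-position lookup. A minimal sketch, assuming an invented list-of-tuples store and distance cutoff:

```python
# Sketch of selecting image context data by position: given the captured
# image's position, pick the stored context item rendered from the same
# or nearest position. The data layout and max_dist are assumptions.

import math

def nearest_context(image_pos, context_items, max_dist=100.0):
    """Return the context item whose capture position is nearest to
    image_pos, provided it lies within max_dist (same units as positions)."""
    best, best_d = None, max_dist
    for pos, data in context_items:
        d = math.dist(image_pos, pos)
        if d <= best_d:
            best, best_d = data, d
    return best

ctx = nearest_context(
    (100.0, 200.0),
    [((500.0, 500.0), "panorama_far"),
     ((101.0, 199.0), "panorama_near")])
```

A real system would also filter by orientation so the context imagery faces the same way as the captured frame; that check is omitted here for brevity.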
-
Patent number: 7471301Abstract: The invention concerns a method and a system for: (i) producing in a computer processing unit a flow of synthetic images, and (ii) tracing a scene by creating visual interactions between the synthetic image flow and at least one video image flow. The computer processing unit comprises: a motherboard, a graphics board for scene rendering and display, including a processor for accelerating 2D/3D processing, a work buffer and a texture memory, an acquisition means for acquiring in real time video images, in a video buffer. The specific rendering of the scene is carried out by recopying the video buffer into a memory zone of the graphics board, then tracing the synthetic images in the work buffer.Type: GrantFiled: July 21, 2003Date of Patent: December 30, 2008Assignee: Total ImmersionInventor: Valentin Lefevre
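The rendering order described (video buffer copied in first, synthetic images traced afterwards) is what puts the CG in front of the live video. A minimal sketch with lists standing in for the buffers (the buffer representation is an assumption):

```python
# Sketch of the rendering order the abstract describes: the video frame
# is recopied into the work buffer first, then synthetic imagery is
# traced over it, so the synthetic content ends up in front of the video.

def render_frame(video_buffer, synthetic_draws):
    """Copy the video frame into a work buffer, then apply synthetic
    draw operations (x, y, value) on top of it."""
    work = [row[:] for row in video_buffer]  # recopy video into work buffer
    for (x, y, value) in synthetic_draws:    # trace synthetic images after
        work[y][x] = value
    return work

video = [[0, 0], [0, 0]]
frame = render_frame(video, [(1, 0, 9)])  # one synthetic pixel at (x=1, y=0)
```

Copying rather than drawing into the video buffer directly also leaves the captured frame intact for the next compositing pass.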
-
Publication number: 20080316203Abstract: An information processing method includes measuring a line-of-sight direction of a user as a first direction, specifying a first point based on the first direction, calculating a three-dimensional position of the first point, setting a first plane based on the three-dimensional position of the first point, measuring a line-of-sight direction of the user after the setting of the first plane as a second direction, specifying a second point included in the first plane based on the second direction, and calculating a three-dimensional position of the second point.Type: ApplicationFiled: May 22, 2008Publication date: December 25, 2008Applicant: CANON KABUSHIKI KAISHAInventors: Christian Sandor, Naoki Nishimura
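The final step (finding the second point on the first plane from the second gaze direction) is a ray-plane intersection. A worked sketch, assuming the gaze is modeled as a ray from the viewpoint; the helper names are invented:

```python
# Worked sketch of the second measurement step: once a plane has been
# set through the first point, a later gaze ray is intersected with that
# plane to obtain the second point's three-dimensional position.

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Intersect the ray origin + t*direction (t >= 0) with the plane
    through plane_point with normal plane_normal; returns a 3D point,
    or None if the ray is parallel to or points away from the plane."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-12:
        return None  # gaze is parallel to the plane
    t = dot([p - o for p, o in zip(plane_point, origin)], plane_normal) / denom
    if t < 0:
        return None  # plane lies behind the viewer
    return tuple(o + t * d for o, d in zip(origin, direction))

# Plane z = 2 set through the first point; a gaze ray from the origin
# along (0.5, 0, 1) meets it at the second point (1, 0, 2).
second_point = ray_plane_intersection(
    (0.0, 0.0, 0.0), (0.5, 0.0, 1.0), (0.0, 0.0, 2.0), (0.0, 0.0, 1.0))
```

Because the second point is constrained to the first plane, a single gaze direction suffices to pin down its full 3D position.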
-
Publication number: 20080291219Abstract: Position and orientation information indicating the relative position and orientation relationship between the viewpoint of the observer and a physical space object on a physical space is acquired. A virtual space image is generated based on the acquired position and orientation information, and is rendered on a memory. A physical space image of the physical space object is acquired. By rendering the acquired physical space image on the memory on which the virtual space image has already been rendered, the physical space image and the virtual space image are combined. The obtained composite image is output.Type: ApplicationFiled: May 2, 2008Publication date: November 27, 2008Applicant: CANON KABUSHIKI KAISHAInventors: Kenji Morita, Tomohiko Shimoyama