Placing Generated Data In Real Scene Patents (Class 345/632)
-
Patent number: 8291324
Abstract: A network management system allows a network administrator to intuitively manage all components of a heterogeneous networked computer system using views of any component or any set of components. These views are generated in a multi-dimensional, virtual reality environment. Navigation tools are provided that allow an operator to travel through the network hierarchy's representation in the virtual environment using an automatic flight mode. Automatic flight mode determines a reasonable trajectory to a network component that avoids collisions with intervening objects in the virtual environment. Since the system is capable of managing a world-wide network, city, building, subnet, segment, and computer, a view may also display internal hardware, firmware, and software of any network component. Views of network components may be filtered so only components pertaining to a specific business or other interest are displayed.
Type: Grant
Filed: September 7, 2001
Date of Patent: October 16, 2012
Assignee: CA, Inc.
Inventors: Reuven Battat, Michael Her, Chandrasekha Sundaresh, Anders Vinberg, Sidney Wang
-
Patent number: 8289367
Abstract: A system comprises a stage area and an audience area with a line of sight view of the stage area. The system also includes a first display that reproduces a first video feed of a first perspective of a remote talent. The first video feed may appear, from the perspective of the audience area, to be within a first region of the stage area. The system further includes a first camera directed at the audience area and aligned so that its field of view corresponds to a line of sight from the first region to the audience area. The system additionally includes a second display viewable from a second region of the stage area and hidden from view of the audience area. The second display reproduces a second video feed of a second perspective, different from the first perspective, of the remote talent. The system also includes a second camera directed at the second region of the stage area and aligned so that its field of view corresponds to a line of sight from the second display to the second region.
Type: Grant
Filed: March 17, 2008
Date of Patent: October 16, 2012
Assignee: Cisco Technology, Inc.
Inventors: Philip R. Graham, Michael H. Paget
-
Patent number: 8285313
Abstract: A messaging method using a mobile user terminal, the method including the steps of: creating, at the mobile user terminal, at least one graphical messaging symbol adapted to convey a meaning to a message recipient; preparing message content including at least one of the created graphical messaging symbols using the mobile terminal; and sending the message to a recipient via a communication network.
Type: Grant
Filed: June 16, 2009
Date of Patent: October 9, 2012
Assignee: Aristocrat Technologies Australia Pty Limited
Inventor: Oliver Boyd Errington
-
Patent number: 8280405
Abstract: A wireless networked device incorporating a display, a video camera and a geo-location system receives geo-located data messages from a server system. Messages can be viewed by panning the device, revealing the message's real world location as icons and text overlaid on top of the camera input on the display. The user can reply to the message from her location, add data to an existing message at its original location, send new messages to other users of the system or place a message at a location for other users. World Wide Web geo-located data can be explored using the system's user interface as a browser. The server system uses the physical location of the receiving device to limit messages and data sent to each device according to range and filtering criteria, and can determine line of sight between the device and each actual message to simulate occlusion effects.
Type: Grant
Filed: December 29, 2006
Date of Patent: October 2, 2012
Assignee: Aechelon Technology, Inc.
Inventors: Ignacio Sanz-Pastor, David L. Morgan, III, Javier Castellar
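The server-side range culling described in this abstract can be sketched as a great-circle distance filter. This is an illustrative Python sketch, not the patented implementation; the message dictionary format and the `haversine_km` / `messages_in_range` names are assumptions for the example.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def messages_in_range(device_pos, messages, max_km):
    """Server-side cull: keep only messages within max_km of the device."""
    lat, lon = device_pos
    return [m for m in messages
            if haversine_km(lat, lon, m["lat"], m["lon"]) <= max_km]
```

A real server would add the per-user filtering criteria and line-of-sight tests the abstract mentions on top of this range cut.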
-
Publication number: 20120229506
Abstract: A method whereby an actual image of a TV viewer as captured by a camera housed in the TV, or an emoticon selected by the processor of the TV through the use of a facial recognition method, can be displayed on the viewer's display or a friend's display along with the title of the video, audio captured by a microphone also housed in the viewer's TV, and text inputted by the viewer.
Type: Application
Filed: March 9, 2011
Publication date: September 13, 2012
Inventor: Yuko Nishikawa
-
Publication number: 20120229507
Abstract: An attitude of an object arranged in a virtual world is controlled based on attitude data outputted from a portable display device. Further, the object is caused to move in the virtual world, based on data based on a load applied to a load detection device. Then, a first image showing the virtual world including at least the object is displayed on the portable display device.
Type: Application
Filed: November 2, 2011
Publication date: September 13, 2012
Applicant: NINTENDO CO., LTD.
Inventors: Yugo HAYASHI, Kazuya SUMAKI
-
Publication number: 20120212507
Abstract: In a view, e.g. of scenery, of a shopping or museum display, or of a meeting or conference, automated processing can be used to annotate objects which are visible from a viewer position. Annotation can be of objects selected by the viewer, and can be displayed visually, for example, with or without an image of the view.
Type: Application
Filed: March 28, 2012
Publication date: August 23, 2012
Inventors: Martin Vetterli, Serge Ayer
-
Patent number: 8250157
Abstract: Systems, methods, and associated software for detecting presence are described with respect to a number of embodiments of the present disclosure. More particularly, presence information can be displayed on a floor plan, according to the teachings herein. In one implementation, a method for monitoring the presence of a person is described in which the name of a person of interest is received from a requestor. The requestor and the person of interest are both associated with an organization. The method also includes retrieving information regarding a first work area associated with the person of interest and reproducing a section of a floor plan containing at least the first work area. Furthermore, the method includes sending the section of the floor plan to the requestor. The method also includes retrieving information regarding the person of interest associated with the first work area. A name field, which includes the name and the presence status of the person of interest, is sent to the requestor.
Type: Grant
Filed: June 20, 2008
Date of Patent: August 21, 2012
Assignee: Oracle International Corporation
Inventors: Martin Millmore, Dinesh Arora, Michael Rossi, Aaron Green, Paul Brimble
-
Patent number: 8245150
Abstract: A parts catalog system is provided. The system may include a processor and a computer-readable medium operatively coupled to the processor and including a memory in which is stored a database configured to catalog collections of data associated with and identifying hardware items. The system may also include a graphical user interface (GUI) configured to display at least some of the data associated with and identifying a hardware item. The displayed data may include a graphical representation of the hardware item. The displayed data may also include a data field integrated with the graphical representation illustrating the physical significance of the data within the data field with regard to the hardware item illustrated by the graphical representation. The data field may also be configured to accept input to designate a desired value for the data within the data field.
Type: Grant
Filed: October 6, 2005
Date of Patent: August 14, 2012
Assignee: Caterpillar Inc.
Inventors: James G. Katter, Jr., Dennis L. Faux, David H. Bigelow, William C. Hurt, II
-
Patent number: 8243099
Abstract: The present invention relates to a method and system for haptic interaction in augmented reality that can effectively remove noise from real images captured by an image capturing device and minimize discontinuity of force generated in the haptic interaction for stable and smooth haptic interaction in the augmented reality. The augmented reality system comprises: a marker detecting unit that detects markers in images; a marker position extracting unit that extracts the positions of the detected markers; a noise removing unit that removes noise from positional information of the markers; a visual rendering unit that augments virtual objects; a motion estimating unit that estimates the motion of the markers over time; a collision detecting unit that detects collision between the virtual objects and an end point of the haptic device; and a haptic rendering unit that calculates reaction force to be provided through the haptic device.
Type: Grant
Filed: February 3, 2009
Date of Patent: August 14, 2012
Assignee: Gwangju Institute of Science and Technology
Inventors: Jeha Ryu, Beom-Chan Lee, Sun-Uk Hwang, Hyeshin Park, Yong-Gu Lee
-
Publication number: 20120203460
Abstract: A method and apparatus for providing point of interest (POI) information of a mobile terminal. The method and apparatus extract POI information, where the POI information and/or associated road information is included in an image captured by a camera. Location information of the image capture place and image capture direction information are read from the digital photo image, the POI information corresponding to the location and image capture direction information is extracted from map data, and the extracted POI information is thereafter displayed on the digital photo image.
Type: Application
Filed: April 18, 2012
Publication date: August 9, 2012
Inventors: Chae-Guk CHO, Ki-Hyung Lee
-
Patent number: 8237703
Abstract: A method for generating 3D visualization of a large-scale environment comprising the steps of: acquiring a 3D model of the large-scale environment; dividing the 3D model into a near-field part and a far-field part; rendering an array of images of the far-field part; creating a 3D visualization of the near-field part combined with the array of rendered images of the far-field part and displaying the combined rendered images.
Type: Grant
Filed: December 20, 2007
Date of Patent: August 7, 2012
Assignee: Geosim Systems Ltd.
Inventors: Victor Shenkar, Yigal Eilam
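The first dividing step of this method can be illustrated as a distance-based partition of scene objects around the viewpoint. This is a minimal Python sketch under assumed data structures (objects as dicts with a `"pos"` coordinate tuple), not the patented implementation; the far-field set would subsequently be pre-rendered to an image array.

```python
def split_near_far(objects, viewpoint, threshold):
    """Partition scene objects into a near-field part (kept as full 3D
    geometry) and a far-field part (to be pre-rendered as images),
    by Euclidean distance from the viewpoint."""
    def dist(obj):
        return sum((a - b) ** 2 for a, b in zip(obj["pos"], viewpoint)) ** 0.5
    near = [o for o in objects if dist(o) <= threshold]
    far = [o for o in objects if dist(o) > threshold]
    return near, far
```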
-
Publication number: 20120194547
Abstract: An approach is provided for generating a perspective display. A display manager receives a request to generate a perspective display of one or more items of a location-based user interface, the request specifying first location information associated with a viewing location. The display manager determines to define a surface with respect to the first location information, wherein the surface is divided into an array of cells; receives an input, from the device, for selecting a group of the points of interest on the mapping display; and captures an image of the mapping display based on the input. The display manager then processes and/or facilitates a processing of second location information associated with the one or more items to map one or more representations of the one or more items onto one or more of the cells.
Type: Application
Filed: March 17, 2011
Publication date: August 2, 2012
Applicant: Nokia Corporation
Inventors: Matthew Johnson, Mark Fulks, Venkata Ayyagari, Kenneth Walker, Jerry Drake, Srikanth Challa, Christophe Marle, Rav Singh
-
Patent number: 8233530
Abstract: A method, system and computer program provide a mechanism for smoothing the transition back from a virtual (computer generated) scene to a related video stream. An event such as a user input or timeout is received, triggering a return to display of the video stream from a virtual scene related to content of the video stream. A number of time points and/or camera angles are either presented to the user or are automatically searched for the best match. The list may be presented in order according to an automatically detected matching criterion. The virtual scene may be a scene constructed locally within a computer or digital video recorder (DVR), with the matching performed locally based on angle and time information provided from a content provider such as a server, or the virtual scene generation and matching may be performed at a remote location such as the content server.
Type: Grant
Filed: October 28, 2008
Date of Patent: July 31, 2012
Assignee: International Business Machines Corporation
Inventors: Jeffrey D. Amsterdam, Gregory J. Boss, Rick A. Hamilton, II, Kulvir S. Bhogal, Brian M. O'Connell, Keith R. Walker
-
Patent number: 8228325
Abstract: The present invention is directed to a method of integrating information, including real-time information, into a virtual thematic environment using a computer system, including accessing the stored information from a database or downloading the real-time information from a source external to the thematic environment; inserting the real-time information into the thematic environment; and displaying the information to a user within the thematic environment. In one embodiment, the computer system is connected to a holographic projection system such that the images from the thematic environment can be projected as holographic projections.
Type: Grant
Filed: March 12, 2008
Date of Patent: July 24, 2012
Inventor: Frances Barbaro Altieri
-
Publication number: 20120154438
Abstract: Systems and methods of interacting with a virtual space, in which a mobile device is used to electronically capture image data of a real-world object, the image data is used to identify information related to the real-world object, and the information is used to interact with software to control at least one of: (a) an aspect of an electronic game; and (b) a second device local to the mobile device. Contemplated systems and methods can be used for gaming, in which the image data can be used to identify a name of the real-world object, to classify the real-world object, to identify the real-world object as a player in the game, or to identify the real-world object as a goal object or as having some other value in the game.
Type: Application
Filed: February 28, 2012
Publication date: June 21, 2012
Applicant: NANT HOLDINGS IP, LLC
Inventor: Ronald H. Cohen
-
Publication number: 20120147038
Abstract: A method for overlaying first and second images in a common focal plane of a viewer comprises forming the first image and guiding the first and second images along an axis to a pupil of the viewer. The method further comprises adjustably diverging the first and second images at an adaptive diverging optic to bring the first image into focus at the common focal plane, and adjustably converging the second image at an adaptive converging optic to bring the second image into focus at the common focal plane.
Type: Application
Filed: December 8, 2010
Publication date: June 14, 2012
Applicant: MICROSOFT CORPORATION
Inventors: Kathryn Stone Perez, Alex Aben-Athar Kipman, Andrew Fuller, Philip Greenhalgh, David Hayes, John Tardif
-
Patent number: 8189864
Abstract: A plurality of items of shot image data obtained by temporally continuous shooting are analyzed. Marking data indicating that replaced graphic data is to be combined is added to image data corresponding to an actor and the resulting data is displayed. When a preset gesture (motion) is detected, marking data indicating that replaced graphic data is to be combined is added to image data corresponding to another actor and the resulting data is displayed. After shooting, the individual items of image data to which marking data have been added are replaced with the respective replaced graphic data. Replaced graphic data are created as moving images which capture the motions of the actors.
Type: Grant
Filed: August 28, 2008
Date of Patent: May 29, 2012
Assignee: Casio Computer Co., Ltd.
Inventor: Takashi Kojo
-
Publication number: 20120120100
Abstract: In one aspect, the system and method provides a modified image in response to a request for a street level image at a particular location, wherein the previously captured image is modified to illustrate the current conditions at the requested location. By way of example only, the system and method may use local weather, time of day, traffic or other information to update street level images.
Type: Application
Filed: January 25, 2012
Publication date: May 17, 2012
Inventor: Stephen Chau
-
Patent number: 8174541
Abstract: The invention provides a method and system for three-dimensional virtual world pattern positioning. The method includes creating a three-dimensional pattern for a virtual world environment, sub-dividing the pattern into a plurality of sub-divisions each having a vector relative to a center of the pattern, creating a transform including a description of the pattern and shape information for each sub-division, creating a portion of a virtual world environment by positioning the pattern and sub-divisions, and storing the transform for reusing the pattern and sub-divisions in another virtual world environment.
Type: Grant
Filed: January 19, 2009
Date of Patent: May 8, 2012
Assignee: International Business Machines Corporation
Inventors: Richard Greene, Conor P. Beverland, Florence Hirondel, Ailun Yi, Tim Kock
-
Publication number: 20120092367
Abstract: A real world image captured by a real camera such as an outside right imaging unit 23b is acquired. A synthesized image is generated by synthesizing the acquired real world image and a virtual image depicting a first virtual object such as an enemy object EO, in such a manner that the first virtual object appears to be present behind the real world image. The synthesized image thus generated is displayed on a display device.
Type: Application
Filed: September 13, 2011
Publication date: April 19, 2012
Applicants: HAL LABORATORY, INC., NINTENDO CO., LTD.
Inventors: Toshiaki SUZUKI, Shigefumi KAWASE
-
Patent number: 8154548
Abstract: A measured value of a physical quantity measured using a measuring device within a predetermined region on a real space, and a position where the measuring device performs measurement, are acquired (S1001 to S1003). An analytic value of the physical quantity at that position in the predetermined region is calculated (S1004). A first object representing the measured value and a second object representing the analytic value are arranged at a place corresponding to this position on a virtual space having a coordinate system which matches the coordinate system of the real space (S1005, S1006). An image which is seen upon viewing the virtual space on which the first and second objects are arranged from a viewpoint is generated, and the generated image is output (S1008, S1009).
Type: Grant
Filed: September 26, 2006
Date of Patent: April 10, 2012
Assignee: Canon Kabushiki Kaisha
Inventor: Tsuyoshi Kuroki
-
Publication number: 20120069050
Abstract: According to an embodiment of the present invention, a method for detecting an object includes receiving, on a transparent display, object selection information, determining an eye direction associated with the received object selection information, selecting at least one object displayed within a region of the transparent display defined by the received object selection information based on the determined eye direction, acquiring information on the selected object, and displaying the acquired information on the transparent display.
Type: Application
Filed: September 14, 2011
Publication date: March 22, 2012
Inventors: Heeyeon PARK, Yeonjoo Joo, Sunju Park
-
Publication number: 20120062594
Abstract: Aspects of the present invention relate to methods and systems for capturing, sharing and recording the information on a collaborative writing surface. According to a first aspect of the present invention, currently persistent collaborative-writing-surface content may be imaged during periods of occlusion by an occluder. According to a second aspect of the present invention, the occluder may be imaged as transparent, allowing the visibility of the occluded currently persistent collaborative-writing-surface content. According to a third aspect of the present invention, the occluder may be imaged as a silhouette, allowing visibility of the occluded currently persistent collaborative-writing-surface content.
Type: Application
Filed: September 15, 2010
Publication date: March 15, 2012
Inventor: Richard John Campbell
-
Patent number: 8130242
Abstract: Systems and methods of interacting with a virtual space, in which a mobile device is used to electronically capture image data of a real-world object, the image data is used to identify information related to the real-world object, and the information is used to interact with software to control at least one of: (a) an aspect of an electronic game; and (b) a second device local to the mobile device. Contemplated systems and methods can be used for gaming, in which the image data can be used to identify a name of the real-world object, to classify the real-world object, to identify the real-world object as a player in the game, or to identify the real-world object as a goal object or as having some other value in the game.
Type: Grant
Filed: August 25, 2006
Date of Patent: March 6, 2012
Assignee: Nant Holdings IP, LLC
Inventor: Ronald H. Cohen
-
Patent number: 8130243
Abstract: A degree of overlapping between separately obtained regions of respective parts of an image is determined. When the degree of overlapping is greater than a defined value, only one of the obtained regions, or both the regions, or a region including both the regions is selected. When the degree of overlapping is less than the defined value, the separately obtained regions of respective parts of the image are separately selected. The image is displayed while reflecting a result of the selection.
Type: Grant
Filed: July 2, 2008
Date of Patent: March 6, 2012
Assignee: Canon Kabushiki Kaisha
Inventors: Koji Sudo, Chiyumi Niwa, Nobukazu Yoshida
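The overlap test and selection logic in this abstract can be sketched with axis-aligned boxes. This is an illustrative Python sketch, not the patented method; the box representation `(x1, y1, x2, y2)`, the choice of "intersection over the smaller region" as the degree of overlapping, and the function names are assumptions.

```python
def overlap_ratio(a, b):
    """Degree of overlapping between two axis-aligned boxes (x1, y1, x2, y2),
    taken here as intersection area over the smaller box's area."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    smaller = min((a[2] - a[0]) * (a[3] - a[1]),
                  (b[2] - b[0]) * (b[3] - b[1]))
    return inter / smaller if smaller else 0.0

def select_regions(a, b, threshold=0.5):
    """Above the threshold, select one region including both;
    below it, select the two regions separately."""
    if overlap_ratio(a, b) > threshold:
        return [(min(a[0], b[0]), min(a[1], b[1]),
                 max(a[2], b[2]), max(a[3], b[3]))]
    return [a, b]
```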
-
Publication number: 20120050323
Abstract: A solution for managing a videoconference is provided. Multiple virtual backgrounds can be stored, and a virtual background can be selected to be used for a first participant when he/she is conducting a videoconference with a second participant. The virtual background can be selected based on one or more attributes of the first and/or second participant, one or more attributes of the videoconference, and/or the like. The virtual backgrounds can be utilized, for example, to provide a desired perception, message, and/or the like, of a business entity to individuals outside of the business entity that are interacting with its personnel via videoconferencing.
Type: Application
Filed: August 25, 2010
Publication date: March 1, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Rudolph C. Baron, JR., Andrew R. Jones, Michael L. Massimi, Kevin C. McConnell
-
Publication number: 20120038667
Abstract: Embodiments of the invention generally relate to replicating changes between corresponding real objects and virtual objects in a virtual world. Embodiments of the invention may include receiving a request to generate a virtual item in a virtual world based on a real-world object, generating the virtual item, synchronizing the virtual item and real-world object, and sharing the virtual item with a second avatar in the virtual world.
Type: Application
Filed: August 11, 2010
Publication date: February 16, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael J. Branson, Gregory R. Hintermeister
-
Patent number: 8103125
Abstract: In an embodiment, a request is received that includes a specification of a static location and a dynamic location. A static image is created that includes a map that represents an area centered around the static location. A dynamic image is created asynchronously from the creation of the static image. An amalgamated image is generated that includes the static image and the dynamic image, which is over a portion of the static image. In this way, spatial data may be drawn in a manner that increases performance.
Type: Grant
Filed: March 13, 2007
Date of Patent: January 24, 2012
Assignee: International Business Machines Corporation
Inventor: Maykel Martin
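The amalgamation step, where the dynamic image is placed over a portion of the static image, can be illustrated as a simple grid overlay. This is a minimal Python sketch under assumed representations (images as nested lists of pixel values, an `(row, col)` offset), not the patented implementation.

```python
def amalgamate(static_img, dynamic_img, offset):
    """Overlay the (smaller) dynamic image onto the static map image
    at the given (row, col) offset, returning a new composited grid;
    the static image itself is left unmodified."""
    out = [row[:] for row in static_img]  # copy so the static image is reusable
    r0, c0 = offset
    for r, row in enumerate(dynamic_img):
        for c, px in enumerate(row):
            out[r0 + r][c0 + c] = px
    return out
```

Because the static map can be cached and reused while only the dynamic layer is regenerated, this composition order is what lets the two images be created asynchronously.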
-
Patent number: 8099462
Abstract: To allow for greater interactivity in video chat environments, displaying multiple effects from a sender of a first electronic device to a chat client of a second electronic device on a video chat region starts with the sender selecting an effect. If the selected effect has a predetermined mapping interactive effect, the interactive effect is displayed on the chat client. Then, the sender applies the effect or the interactive effect to an application region of the video chat region of the chat client. The application region is a partial or an entire region of the video chat region.
Type: Grant
Filed: April 28, 2008
Date of Patent: January 17, 2012
Assignee: CyberLink Corp.
Inventors: Dueg-Uei Sheng, Teng-Yuan Hsiao
-
Patent number: 8085990
Abstract: The claimed subject matter relates to a computer-implemented architecture that can generate a map. The map can be a hybrid between an orthographic projection map and street-side images, thus including useful aspects from both types of representations. For example, an orthographic projection map is very effective at presenting global relationships among the features of the map but not effective at presenting local detail. In contrast, street-side images show excellent detail but do not convey the global information of an orthographic projection map. The hybrid map can thus provide a richer set of information than conventional maps and can also display objects/features of the hybrid map in multiple perspectives simultaneously on a single representation that is printable.
Type: Grant
Filed: October 8, 2010
Date of Patent: December 27, 2011
Assignee: Microsoft Corporation
Inventor: Eyal Ofek
-
Patent number: 8085267
Abstract: According to embodiments of the invention, rays may be stochastically culled before they are issued into the three-dimensional scene. Stochastically culling rays may reduce the number of rays which need to be traced by the image processing system. Furthermore, by stochastically culling rays before they are issued into the three-dimensional scene, minor imperfections may be added to the final rendered image, thereby improving the realism of the rendered image. Therefore, stochastic culling of rays may improve the performance of the image processing system by reducing workload imposed on the image processing system and improving the realism of the images rendered by the image processing system. According to another embodiment of the invention, the realism of images rendered by the image processing system may also be improved by stochastically adding secondary rays after ray-primitive intersections have occurred.
Type: Grant
Filed: January 30, 2007
Date of Patent: December 27, 2011
Assignee: International Business Machines Corporation
Inventors: Jeffrey Douglas Brown, Russell Dean Hoover, Eric Oliver Mejdrich
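The core idea of stochastic ray culling can be sketched in a few lines: before tracing, each candidate ray survives only with some probability. This is an illustrative Python sketch, not the patented technique; the `keep_prob` parameter and the representation of rays as opaque list items are assumptions.

```python
import random

def stochastic_cull(rays, keep_prob, seed=None):
    """Randomly drop a fraction of rays before they are issued into the
    scene; the survivors are a cheaper approximation of the full set,
    and the slight noise can read as natural imperfection in the image."""
    rng = random.Random(seed)
    return [r for r in rays if rng.random() < keep_prob]
```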
-
Publication number: 20110304646
Abstract: A first image processing apparatus displays markers on a monitor, thereby making a second image processing apparatus perform display control of a second object on a captured image on an LCD; the second image processing apparatus transmits a marker recognizing signal when the display control based on the markers is performed, thereby making the first image processing apparatus perform display control of a first object on the monitor.
Type: Application
Filed: August 27, 2010
Publication date: December 15, 2011
Applicant: NINTENDO CO., LTD.
Inventor: Shunsaku KATO
-
Publication number: 20110298823
Abstract: An information processing section of a game apparatus executes a program including: a step of obtaining an image captured by an outer camera; a step of calculating, when detection of a marker is completed, a position and an orientation of a virtual camera based on a result of the marker detection; a step of obtaining hand-drawn data; a step of capturing, with the virtual camera, a fundamental polygon to which a texture is applied to generate a hand-drawn image, and displaying, on an upper LCD, an image in which the hand-drawn image is superimposed on the camera image; and a step of displaying a hand-drawn input image on a lower LCD.
Type: Application
Filed: August 24, 2010
Publication date: December 8, 2011
Applicant: NINTENDO CO., LTD.
Inventor: Shinji KITAHARA
-
Patent number: 8072470
Abstract: An invention is provided for affording a real-time three-dimensional interactive environment using a three-dimensional camera. The invention includes obtaining two-dimensional data values for a plurality of pixels representing a physical scene, and obtaining a depth value for each pixel of the plurality of pixels using a depth sensing device. Each depth value indicates a distance from a physical object in the physical scene to the depth sensing device. At least one computer-generated virtual object is inserted into the scene, and an interaction between a physical object in the scene and the virtual object is detected based on coordinates of the virtual object and the obtained depth values.
Type: Grant
Filed: May 29, 2003
Date of Patent: December 6, 2011
Assignee: Sony Computer Entertainment Inc.
Inventor: Richard Marks
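The interaction-detection step described here, comparing the virtual object's coordinates against camera depth values, can be sketched as a per-pixel depth test. This is an illustrative Python sketch under assumed representations (a depth map as a nested list, the virtual object as a screen-space box at a single depth, a tolerance `tol`), not the patented method.

```python
def touches_virtual_object(depth_map, obj_box, obj_depth, tol=0.05):
    """Return True if any physical pixel inside the virtual object's
    screen-space box (x0, y0, x1, y1) lies at roughly the object's depth,
    i.e. a physical object is intersecting the virtual one."""
    x0, y0, x1, y1 = obj_box
    for y in range(y0, y1):
        for x in range(x0, x1):
            if abs(depth_map[y][x] - obj_depth) <= tol:
                return True
    return False
```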
-
Publication number: 20110292076
Abstract: An apparatus for enabling provision of a localized virtual reality environment may include at least one processor and at least one memory including computer program code. The at least one memory and the computer program code may be configured, with the processor, to cause the apparatus to perform at least receiving information indicative of a current location of a mobile terminal, receiving information indicative of an orientation of the mobile terminal with respect to the current location, causing a stored image including a panoramic view of the current location to be displayed at the mobile terminal based on the orientation, and enabling provision of a virtual object on the panoramic view. A corresponding method and computer program product are also provided.
Type: Application
Filed: May 28, 2010
Publication date: December 1, 2011
Inventors: Jason Robert Wither, Ronald Azuma
-
Publication number: 20110292078
Abstract: A handheld display device for displaying an image of a physical page relative to which the device is positioned. The device includes: an image sensor for capturing an image of the physical page; a transceiver for receiving a page description corresponding to a page identity of the physical page; a processor configured for: rendering a page image based on the received page description; estimating a first pose of the device relative to the physical page; estimating a second pose of the device relative to a user's viewpoint; and determining a projected page image using the rendered page image, the first pose and the second pose; and a display screen for displaying the projected page image. The display screen provides a virtual transparent viewport onto the physical page irrespective of a position and orientation of said device relative to said physical page.
Type: Application
Filed: March 18, 2011
Publication date: December 1, 2011
Inventors: Paul Lapstun, Kia Silverbrook, Robert Dugald Gates
-
Publication number: 20110292079
Abstract: A parking assist apparatus includes a display portion mounted at a vehicle and displaying a parking assist image in which an estimated course line generated in association with an operation of a steering wheel is superimposed on a captured image of a surrounding of the vehicle, and an auxiliary image controlling portion displaying an auxiliary image in place of the parking assist image in a state where the parking assist image is displayed on the display portion, the auxiliary image including a virtual vehicle representing the vehicle and a virtual estimated course line corresponding to the estimated course line for the virtual vehicle.
Type: Application
Filed: March 28, 2011
Publication date: December 1, 2011
Applicant: AISIN SEIKI KABUSHIKI KAISHA
Inventors: Hiroyasu HOSOI, Noboru NAGAMINE, Koichiro HONDA, Keigo IKEDA
-
Publication number: 20110292077Abstract: A method of displaying an image of a physical page relative to which a handheld display device is positioned. The method includes the steps of: capturing an image of the physical page using an image sensor of the device; determining a page identity for the physical page; retrieving a page description corresponding to the page identity; rendering a page image based on the retrieved page description; estimating a first pose of the device relative to the physical page; estimating a second pose of the device relative to a user's viewpoint; determining a projected page image for display by the device; and displaying said projected page image on a display screen of said device. The display screen provides a virtual transparent viewport onto the physical page irrespective of a position and orientation of the device relative to the physical page.Type: ApplicationFiled: March 18, 2011Publication date: December 1, 2011Inventors: Paul Lapstun, Kia Silverbrook, Robert Dugald Gates
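The "virtual transparent viewport" in the two Lapstun et al. publications above amounts to finding, for each screen pixel, the point on the page seen along the ray from the user's eye through that pixel. A minimal geometric sketch, assuming the page lies in the plane z = 0 and both poses have already been resolved into 3-D positions (the function name and coordinate convention are invented for illustration):

```python
def page_point(eye, pixel):
    """Intersect the ray from the user's eye through a screen pixel
    with the page plane (z = 0), giving the page point that pixel
    should display. eye and pixel are (x, y, z) tuples; the first
    pose places the pixel, the second pose places the eye."""
    ex, ey, ez = eye
    px, py, pz = pixel
    if ez == pz:
        raise ValueError("ray parallel to page plane")
    t = ez / (ez - pz)  # ray parameter where z reaches 0
    return (ex + t * (px - ex), ey + t * (py - ey))
```

Sampling the rendered page image at the returned coordinates for every pixel produces the projected page image, which is why the display tracks the page regardless of the device's position and orientation.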
-
Patent number: 8063915Abstract: A 3D surface wound, injury, and personal protective equipment (PPE) data entry system provides an easily usable graphical user interface through which an examiner can objectively record data relating to surface wounds and injuries sustained by a subject human, as well as PPE used when the wounds/injuries were sustained. The system includes a 3D human model onto which the examiner draws the surface wound(s) and/or damage to the PPE. The subject human's record is stored in a database of similar records. The database records comprise quantifiable, objective data that is easily compared and analyzed. An analysis tool can aggregate a selected population of human subjects within the database to create wound density information that can be statistically analyzed and/or displayed on a standard 3D human model. Such objective wound density information may facilitate improved medical and/or tactical training, and improved PPE design.Type: GrantFiled: June 1, 2007Date of Patent: November 22, 2011Assignee: Simquest LLCInventors: Howard Champion, Paul Sherman, Mary M. Lawnick, Paul M. Cashman, Harald Scheirich, Timothy Patrick Kelliher
-
Patent number: 8045825Abstract: A left-eye color discrimination unit (1101) and right-eye color discrimination unit (1102) generate mask images from virtual space images. If an error part exists at the boundary between a chroma key region and non-chroma key region in the mask image, each of a left-eye mask correction unit (1108) and right-eye mask correction unit (1110) corrects the error part using another mask image generated based on the other virtual space image in addition to the virtual space image.Type: GrantFiled: April 24, 2008Date of Patent: October 25, 2011Assignee: Canon Kabushiki KaishaInventors: Tomohiko Shimoyama, Takuya Tsujimoto, Tomohiko Takayama
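The stereo correction idea above — repairing one eye's chroma-key mask using the mask derived from the other eye's virtual space image — can be sketched as follows. This is a simplified illustration: the error-detection step is assumed to have already produced a set of suspect boundary pixels, and the function name and data layout are invented.

```python
def correct_mask(mask, other_mask, error_pixels):
    """Correct suspect pixels at the chroma-key boundary of one eye's
    mask by consulting the other eye's mask. mask and other_mask are
    lists of rows of 0/1 values; error_pixels is a set of (row, col)
    indices flagged as unreliable."""
    fixed = [row[:] for row in mask]
    for r, c in error_pixels:
        fixed[r][c] = other_mask[r][c]
    return fixed
```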
-
Patent number: 8033914Abstract: A game apparatus includes an LCD and a touch panel provided in relation to the LCD. The LCD displays a game screen for making a player character hit a ball. For example, when touch-on is performed on the LCD (touch panel), the stance and shot power of the player character are decided according to the coordinates of the touch-on position. Based on this, the path of the ball is decided to be a straight ball, draw ball, or fade ball, and the carry of the ball is decided according to the shot power. Following the touch-on, when a slide operation is performed, an impact is decided according to the slide operation. For example, the path of the ball is changed by the decided impact.Type: GrantFiled: September 28, 2005Date of Patent: October 11, 2011Assignee: Nintendo Co., Ltd.Inventors: Yasuo Yoshikawa, Takahiro Harada, Toyokazu Nonaka
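The touch-to-shot mapping described above can be sketched as a function from touch coordinates to shot parameters. The particular zone boundaries, screen dimensions, and the horizontal-picks-path / vertical-picks-power split are assumptions for illustration, not the patented layout.

```python
def shot_from_touch(x, y, screen_w=256, screen_h=192):
    """Map a touch-on position to shot parameters: the horizontal
    third picks the ball path (draw / straight / fade), the vertical
    position sets shot power in 0..1 (stronger toward the bottom)."""
    third = screen_w / 3
    if x < third:
        path = "draw"
    elif x < 2 * third:
        path = "straight"
    else:
        path = "fade"
    power = y / screen_h
    return path, power
```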
-
Patent number: 8031210Abstract: A method of rendering a computer generated 3D scene integrated with a base image, the method comprising loading a base image, such as a photograph, and a computer generated 3D scene or model. The base image is displayed on a monitor, and in one embodiment the calibration of the camera which generated the base image is determined. The 3D model is rendered as an overlay of the base image responsive thereto. In another embodiment, the camera calibration of the 3D scene is made consonant with the base image by selecting corresponding points. The base image is then additionally displayed at a predetermined transparency as an overlay of the 3D model. A user then selects pixels of the base image overlay for placement in the foreground. The selected pixels are then displayed without transparency and the balance of the base image overlay is removed, rendering the integrated image.Type: GrantFiled: September 30, 2007Date of Patent: October 4, 2011Assignee: RDV Systems Ltd.Inventors: Nathan Elsberg, Alex Hazanov
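The final compositing step above — rendered model pixels everywhere except where the user marked base-image pixels as foreground — reduces to a per-pixel mask selection. A minimal sketch, with invented names and a list-of-rows image representation:

```python
def composite(base, render, foreground):
    """Integrate a rendered 3D overlay with a base photograph:
    rendered pixels replace the photo except where the foreground
    mask (0/1, same size) keeps base-image pixels opaque in front
    of the model."""
    return [
        [b if f else r for b, r, f in zip(brow, rrow, frow)]
        for brow, rrow, frow in zip(base, render, foreground)
    ]
```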
-
Publication number: 20110234631Abstract: Apparatuses and techniques relating to an augmented reality (AR) device are provided. The device for augmenting a real-world image includes a light source information generating unit that generates light source information for a real-world image captured by a real-world image capturing device based on the location, the time, and the date the real-world image was captured. The light source information includes information on the position of a real-world light source for the real-world image. The device further includes a shadow image registration unit that receives the light source information generated from the light source information generating unit. The shadow image registration unit generates a shadow image of a virtual object overlaid onto the real-world image based on the light source information generated from the light source information generating unit.Type: ApplicationFiled: March 25, 2010Publication date: September 29, 2011Applicant: BIZMODELINE CO., LTD.Inventors: Jae-Hyung KIM, Jong-Cheol HONG, Jong-Min YOON, Ho-Jong JUNG
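Generating shadow geometry from the capture time, as in the abstract above, requires a solar-position estimate. The sketch below uses a deliberately crude elevation model (sunrise 6:00, solar noon 12:00, sunset 18:00, no latitude or date correction) just to show how time of day could drive shadow length; a real system would use a proper solar ephemeris.

```python
import math

def shadow_length(object_height_m, hour):
    """Estimate the length of a virtual object's cast shadow from the
    hour of day using a crude solar-elevation model:
    elevation = 90 * sin(pi * (hour - 6) / 12) degrees."""
    if not 6 < hour < 18:
        return float("inf")  # sun below horizon: no cast shadow
    elevation = math.radians(90 * math.sin(math.pi * (hour - 6) / 12))
    return object_height_m / math.tan(elevation)
```

The shadow image would then be drawn extending away from the estimated sun azimuth and registered onto the real-world image.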
-
Patent number: 8026931Abstract: Digital video effects are described. In one aspect, a foreground object in a video stream is identified. The video stream comprises multiple image frames. The foreground object is modified by rendering a 3-dimensional (3-D) visual feature over the foreground object for presentation to a user in a modified video stream. Pose of the foreground object is tracked in 3-D space across respective ones of the image frames to identify when the foreground object changes position in respective ones of the image frames. Based on this pose tracking, aspect ratio of the 3-D visual feature is adaptively modified and rendered over the foreground object in corresponding image frames for presentation to the user in the modified video stream.Type: GrantFiled: August 28, 2006Date of Patent: September 27, 2011Assignee: Microsoft CorporationInventors: Jian Sun, Qiang Wang, Weiwei Zhang, Xiaoou Tang, Heung-Yeung Shum
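Adapting the aspect ratio of an overlaid 3-D feature to the tracked pose, as described above, can be illustrated by foreshortening the feature's width as the foreground object turns. This is a simplified stand-in for the patented tracking pipeline; the cos-of-yaw rule and the function name are assumptions.

```python
import math

def feature_size(base_w, base_h, yaw_deg):
    """Adapt the rendered width of a 3-D visual feature (e.g. a hat
    drawn over a tracked face) as the object yaws: foreshorten the
    width by |cos(yaw)| while the height stays fixed, so the drawn
    aspect ratio follows the pose."""
    w = base_w * abs(math.cos(math.radians(yaw_deg)))
    return w, base_h
```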
-
Publication number: 20110227944Abstract: A vehicle display system displays enhanced vision (EV) images and synthetic vision (SV) images to an operator of a vehicle. The display system includes an EV vision system for generating EV images, an SV database containing information regarding terrain and objects of interest for a travel path of a vehicle, an SV system for generating SV images based on travel of the vehicle and information from the SV database, a processor for filtering the EV images and merging the filtered EV image with the SV image, and a display for displaying the merged SV and filtered EV images.Type: ApplicationFiled: March 16, 2010Publication date: September 22, 2011Applicant: HONEYWELL INTERNATIONAL INC.Inventors: Thea L. Feyereisen, John G. Suddreth, Troy Nichols
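The filter-then-merge step above can be sketched as a per-pixel rule: EV pixels that survive the filter (here, an assumed intensity threshold standing in for whatever filter the system applies) overwrite the corresponding SV pixels, and the synthetic terrain shows through everywhere else. Names and threshold are invented for illustration.

```python
def merge_images(sv, ev, threshold=128):
    """Merge a synthetic vision frame with a filtered enhanced vision
    frame: EV intensities above the threshold (e.g. hot objects from
    an infrared sensor) replace the SV pixel; otherwise the SV pixel
    is kept. Frames are lists of rows of 0-255 intensities."""
    return [
        [e if e > threshold else s for s, e in zip(srow, erow)]
        for srow, erow in zip(sv, ev)
    ]
```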
-
Patent number: 8022967Abstract: An image processing method includes the steps of acquiring an image of a physical space, acquiring a position and orientation of a viewpoint of the image, generating an image of a virtual object, detecting an area which consists of pixels each having a predetermined pixel value, and superimposing the image of the virtual object on the image of the physical space. The superimposition step includes calculating a distance between a position of the virtual object and a position of the viewpoint, acquiring an instruction indicating whether or not the virtual object is emphasis-displayed, and setting a flag indicating whether or not the image of the virtual object is to be set as a masked target. The image of the virtual object is then superimposed on the image of the physical space with or without the masking process, depending on whether the image of the virtual object is set as the masked target.Type: GrantFiled: June 1, 2005Date of Patent: September 20, 2011Assignee: Canon Kabushiki KaishaInventors: Yasuhiro Okuno, Toshikazu Ohshima, Kaname Tanimura
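The flag-setting logic above can be sketched as a small decision function. The abstract only says the flag depends on the computed distance and the emphasis instruction; the specific rule below (emphasized objects are never masked, very near objects are drawn unmasked) and all names are assumptions for illustration.

```python
import math

def mask_flag(virtual_pos, viewpoint_pos, emphasized, near_limit=1.0):
    """Decide whether a virtual object should be treated as a masked
    target: an emphasis-displayed object is never masked, and an
    object closer to the viewpoint than near_limit is drawn unmasked
    so it stays fully visible."""
    if emphasized:
        return False
    distance = math.dist(virtual_pos, viewpoint_pos)
    return distance >= near_limit
```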
-
Patent number: 8018471Abstract: Various technologies and techniques are disclosed that merge components on a design surface. The system receives input from a user to add components or clauses to a design surface and analyzes the components to determine if some of the components can be merged. If the system identifies components that can be merged, then the system merges the identified components to reduce the number of components present on the design surface. The system determines that some components can be merged if the components meet the same criteria, such as having components that are equivalent and that have the same number of incoming paths or the same number of outgoing paths. The system provides a visual indicator on the design surface to indicate that components are being merged. The system provides an undo feature to allow the user to undo the component merging when desired.Type: GrantFiled: May 15, 2006Date of Patent: September 13, 2011Assignee: Microsoft CorporationInventors: Nagalinga Durga Prasad Sripathi Panditharadhya, John Edward Churchill, Udaya Kumar Bhaskara
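The merge criteria above — components are candidates when they are equivalent and have matching path counts — can be sketched as a simple deduplication pass. The dict-based component representation and the greedy keep-first strategy are assumptions; the patent's analysis is more general.

```python
def mergeable(a, b):
    """Two design-surface components can be merged when they are
    equivalent (same kind) and have the same number of incoming
    and outgoing paths."""
    return (a["kind"] == b["kind"]
            and a["in"] == b["in"]
            and a["out"] == b["out"])

def merge_components(components):
    """Collapse mergeable components so fewer remain on the surface."""
    kept = []
    for comp in components:
        if not any(mergeable(comp, k) for k in kept):
            kept.append(comp)
    return kept
```

A real implementation would also rewire the paths of merged components onto the survivor and record the operation for the undo feature.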
-
Patent number: 8009899Abstract: Image filling methods. A plurality of images corresponding to a target object or a scene are captured at various angles. An epipolar geometry relationship between a filling source image and a specific image within the images is calculated. The filling source image and the specific image are rectified according to the epipolar geometry relationship. At least one filling target area in the rectified specific image is patched according to the rectified filling source image.Type: GrantFiled: June 22, 2007Date of Patent: August 30, 2011Assignee: Industrial Technology Research InstituteInventors: Chia-Chen Chen, Cheng-Yuan Tang, Yi-Leh Wu, Chi-Tsung Liu
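The benefit of the rectification step above is that corresponding points in the filling-source image and the specific image end up on the same scanline, so a hole pixel can be patched from the same row of the source. A minimal sketch under that assumption; a fixed zero disparity stands in for the row-wise correspondence search a real matcher would perform.

```python
def fill_from_source(target, source, holes):
    """Patch hole pixels in a rectified target image from the same
    scanline of the rectified filling-source image. Images are lists
    of rows; holes is a set of (row, col) pixels to patch."""
    disparity = 0  # assumed known column alignment after rectification
    patched = [row[:] for row in target]
    for r, c in holes:
        patched[r][c] = source[r][c + disparity]
    return patched
```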
-
Patent number: RE43216Abstract: A smooth, stable, and high-quality game image is provided by accurately pre-reading the background data required for each round of image processing. The game device therefore reads, from a CD-ROM (recording medium) into main memory prior to image processing, the background data required for a game that displays a moving vehicle within a virtual three-dimensional space together with a background. This device comprises a pre-reading unit for pre-reading background data from the recording medium when a start line (reference line), set at a specified distance beyond the limit line of the display's visual field direction, is crossing a new area. The recording medium records background data divided into a plurality of areas in advance, and the pre-reading unit comprises a unit for judging which of the areas the reference line is crossing, and a reading unit for reading into memory the background data of the area judged as being crossed by the reference line.Type: GrantFiled: February 7, 2008Date of Patent: February 28, 2012Assignee: Kabushiki Kaisha SegaInventor: Masaaki Ito
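The pre-reading scheme above can be sketched as a small cache keyed by area: the reference line sits a fixed distance beyond the visual-field limit, and whenever it enters an area whose data is not yet in memory, that area is read ahead of need. The 1-D world, fixed-size areas, and class layout below are simplifying assumptions; "loading" just records the area id in place of a CD-ROM read.

```python
class BackgroundPreloader:
    """Pre-read background data for the area that the reference line
    (set `lookahead` units beyond the visual-field limit) is crossing,
    so the data is in memory before it is needed for drawing."""

    def __init__(self, area_size=100.0, lookahead=50.0):
        self.area_size = area_size
        self.lookahead = lookahead
        self.loaded = set()

    def update(self, view_limit_pos):
        # Place the reference line ahead of the view limit and find
        # which fixed-size area it currently falls in.
        ref = view_limit_pos + self.lookahead
        area_id = int(ref // self.area_size)
        if area_id not in self.loaded:
            self.loaded.add(area_id)   # would read this area's data here
            return area_id             # newly pre-read area
        return None                    # already in memory
```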