Placing Generated Data In Real Scene Patents (Class 345/632)
  • Patent number: 8291324
    Abstract: A network management system allows a network administrator to intuitively manage all components of a heterogeneous networked computer system using views of any component or any set of components. These views are generated in a multi-dimensional, virtual reality environment. Navigation tools are provided that allow an operator to travel through the network hierarchy's representation in the virtual environment using an automatic flight mode. Automatic flight mode determines a reasonable trajectory to a network component that avoids collisions with intervening objects in the virtual environment. Since the system is capable of managing a world-wide network, city, building, subnet, segment, and computer, a view may also display internal hardware, firmware, and software of any network component. Views of network components may be filtered so only components pertaining to a specific business or other interest are displayed.
    Type: Grant
    Filed: September 7, 2001
    Date of Patent: October 16, 2012
    Assignee: CA, Inc.
    Inventors: Reuven Battat, Michael Her, Chandrasekha Sundaresh, Anders Vinberg, Sidney Wang
  • Patent number: 8289367
    Abstract: A system comprises a stage area and an audience area with a line of sight view of the stage area. The system also includes a first display that reproduces a first video feed of a first perspective of a remote talent. The first video feed may appear, from the perspective of the audience area, to be within a first region of the stage area. The system further includes a first camera directed at the audience area and aligned so that its field of view corresponds to a line of sight from the first region to the audience area. The system additionally includes a second display viewable from a second region of the stage area and hidden from view of the audience area. The second display reproduces a second video feed of a second perspective, different from the first perspective, of the remote talent. The system also includes a second camera directed at the second region of the stage area and aligned so that its field of view corresponds to a line of sight from the second display to the second region.
    Type: Grant
    Filed: March 17, 2008
    Date of Patent: October 16, 2012
    Assignee: Cisco Technology, Inc.
    Inventors: Philip R. Graham, Michael H. Paget
  • Patent number: 8285313
    Abstract: A messaging method using a mobile user terminal, the method including the steps of: creating, at the mobile user terminal, at least one graphical messaging symbol adapted to convey a meaning to a message recipient; preparing message content including at least one of the created graphical messaging symbols using the mobile terminal; and sending the message to a recipient via a communication network.
    Type: Grant
    Filed: June 16, 2009
    Date of Patent: October 9, 2012
    Assignee: Aristocrat Technologies Australia Pty Limited
    Inventor: Oliver Boyd Errington
  • Patent number: 8280405
    Abstract: A wireless networked device incorporating a display, a video camera and a geo-location system receives geo-located data messages from a server system. Messages can be viewed by panning the device, revealing the message's real world location as icons and text overlaid on top of the camera input on the display. The user can reply to the message from her location, add data to an existing message at its original location, send new messages to other users of the system or place a message at a location for other users. World Wide Web geo-located data can be explored using the system's user interface as a browser. The server system uses the physical location of the receiving device to limit messages and data sent to each device according to range and filtering criteria, and can determine line of sight between the device and each actual message to simulate occlusion effects.
    Type: Grant
    Filed: December 29, 2006
    Date of Patent: October 2, 2012
    Assignee: Aechelon Technology, Inc.
    Inventors: Ignacio Sanz-Pastor, David L. Morgan, III, Javier Castellar
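The server-side range limiting this abstract describes can be sketched as follows. This is a minimal illustration only: the haversine distance, the `Message` record, and the kilometer cutoff are assumptions, not details taken from the patent.

```python
import math
from dataclasses import dataclass

@dataclass
class Message:
    text: str
    lat: float   # anchor latitude, degrees
    lon: float   # anchor longitude, degrees

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in kilometers.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def messages_in_range(device_lat, device_lon, messages, max_km):
    # Keep only messages whose anchor location lies within max_km of the
    # receiving device, mirroring the server-side range criterion.
    return [m for m in messages
            if haversine_km(device_lat, device_lon, m.lat, m.lon) <= max_km]
```

Further filtering criteria (topic, sender, line of sight) would compose with this distance check in the same way.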
  • Publication number: 20120229506
    Abstract: A method whereby an actual image of a TV viewer, as captured by a camera housed in the TV, or an emoticon selected by the TV's processor through the use of a facial recognition method, can be displayed on the viewer's display or a friend's display along with the title of the video, audio captured by a microphone also housed in the viewer's TV, and text inputted by the viewer.
    Type: Application
    Filed: March 9, 2011
    Publication date: September 13, 2012
    Inventor: Yuko Nishikawa
  • Publication number: 20120229507
    Abstract: An attitude of an object arranged in a virtual world is controlled based on attitude data outputted from a portable display device. Further, the object is caused to move in the virtual world, based on data based on a load applied to a load detection device. Then, a first image showing the virtual world including at least the object is displayed on the portable display device.
    Type: Application
    Filed: November 2, 2011
    Publication date: September 13, 2012
    Applicant: NINTENDO CO., LTD.
    Inventors: Yugo HAYASHI, Kazuya SUMAKI
  • Publication number: 20120212507
    Abstract: In a view, e.g. of scenery, of a shopping or museum display, or of a meeting or conference, automated processing can be used to annotate objects which are visible from a viewer position. Annotation can be of objects selected by the viewer, and can be displayed visually, for example, with or without an image of the view.
    Type: Application
    Filed: March 28, 2012
    Publication date: August 23, 2012
    Inventors: Martin Vetterli, Serge Ayer
  • Patent number: 8250157
    Abstract: Systems, methods, and associated software for detecting presence are described with respect to a number of embodiments of the present disclosure. More particularly, presence information can be displayed on a floor plan, according to the teachings herein. In one implementation, a method for monitoring the presence of a person is described in which the name of a person of interest is received from a requestor. The requestor and the person of interest are both associated with an organization. The method also includes retrieving information regarding a first work area associated with the person of interest and reproducing a section of a floor plan containing at least the first work area. Furthermore, the method includes sending the section of the floor plan to the requestor. The method also includes retrieving information regarding the person of interest associated with the first work area. A name field, which includes the name and the presence status of the person of interest, is sent to the requestor.
    Type: Grant
    Filed: June 20, 2008
    Date of Patent: August 21, 2012
    Assignee: Oracle International Corporation
    Inventors: Martin Millmore, Dinesh Arora, Michael Rossi, Aaron Green, Paul Brimble
  • Patent number: 8245150
    Abstract: A parts catalog system is provided. The system may include a processor and a computer-readable medium operatively coupled to the processor and including a memory in which is stored a database configured to catalog collections of data associated with and identifying hardware items. The system may also include a graphical user interface (GUI) configured to display at least some of the data associated with and identifying a hardware item. The displayed data may include a graphical representation of the hardware item. The displayed data may also include a data field integrated with the graphical representation illustrating the physical significance of the data within the data field with regard to the hardware item illustrated by the graphical representation. The data field may also be configured to accept input to designate a desired value for the data within the data field.
    Type: Grant
    Filed: October 6, 2005
    Date of Patent: August 14, 2012
    Assignee: Caterpillar Inc.
    Inventors: James G. Katter, Jr., Dennis L. Faux, David H. Bigelow, William C. Hurt, II
  • Patent number: 8243099
    Abstract: The present invention relates to a method and system for haptic interaction in augmented reality that can effectively remove noise from real images captured by an image capturing device and minimize discontinuity of force generated in the haptic interaction, for stable and smooth haptic interaction in the augmented reality. The augmented reality system comprises: a marker detecting unit that detects markers in images; a marker position extracting unit that extracts the positions of the detected markers; a noise removing unit that removes noise from positional information of the markers; a visual rendering unit that augments virtual objects; a motion estimating unit that estimates the motion of the markers over time; a collision detecting unit that detects collision between the virtual objects and an end point of the haptic device; and a haptic rendering unit that calculates reaction force to be provided through the haptic device.
    Type: Grant
    Filed: February 3, 2009
    Date of Patent: August 14, 2012
    Assignee: Gwangju Institute of Science and Technology
    Inventors: Jeha Ryu, Beom-Chan Lee, Sun-Uk Hwang, Hyeshin Park, Yong-Gu Lee
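Two of the units in this pipeline lend themselves to a short sketch: the noise-removing unit and the haptic rendering unit. The moving-average filter and the spring-model force below are common simple choices, assumed here for illustration; the patent does not commit to these particular formulas.

```python
from collections import deque

class MarkerFilter:
    # Moving-average smoothing of 3D marker positions -- one simple way
    # to suppress per-frame jitter before it reaches haptic rendering.
    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def update(self, pos):
        # Append the latest raw position, return the windowed average.
        self.history.append(pos)
        n = len(self.history)
        return tuple(sum(p[i] for p in self.history) / n for i in range(3))

def reaction_force(penetration_depth, stiffness=200.0):
    # Spring-model reaction force: proportional to how far the haptic
    # end point has penetrated the virtual object's surface, zero outside.
    return stiffness * max(0.0, penetration_depth)
```

Smoothing positions before computing penetration depth is what keeps the rendered force continuous frame to frame.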
  • Publication number: 20120203460
    Abstract: A method and apparatus for providing point of interest (POI) information of a mobile terminal. The method and apparatus extract POI information, where the POI information and/or associated road information is included in an image captured by a camera. Location information of the image capture place and image capture direction information are read from the digital photo image, the POI information corresponding to the location and image capture direction information is extracted from map data, and the extracted POI information is thereafter displayed on the digital photo image.
    Type: Application
    Filed: April 18, 2012
    Publication date: August 9, 2012
    Inventors: Chae-Guk CHO, Ki-Hyung Lee
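The core step, selecting map POIs that match the photo's capture location and direction, can be sketched with a bearing test against the camera's field of view. The flat-earth bearing formula and the 60° field of view are assumptions for illustration.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    # Initial compass bearing from the capture point to a POI, in degrees
    # (0 = north, 90 = east).
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(x, y)) % 360

def pois_in_view(cam_lat, cam_lon, cam_heading, pois, half_fov=30.0):
    # Keep POIs whose bearing from the capture place falls inside the
    # camera's horizontal field of view around the capture direction.
    hits = []
    for name, lat, lon in pois:
        diff = abs((bearing_deg(cam_lat, cam_lon, lat, lon)
                    - cam_heading + 180) % 360 - 180)
        if diff <= half_fov:
            hits.append(name)
    return hits
```

The surviving POIs would then be drawn as overlays on the photo, positioned by their bearing offset from the image center.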
  • Patent number: 8237703
    Abstract: A method for generating 3D visualization of a large-scale environment comprising the steps of: acquiring a 3D model of the large-scale environment; dividing the 3D model into a near-field part and a far-field part; rendering an array of images of the far-field part; creating a 3D visualization of the near-field part combined with the array of rendered images of the far-field part and displaying the combined rendered images.
    Type: Grant
    Filed: December 20, 2007
    Date of Patent: August 7, 2012
    Assignee: Geosim Systems Ltd.
    Inventors: Victor Shenkar, Yigal Eilam
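The dividing step this abstract names can be sketched as a distance-based partition: near-field objects stay live 3D geometry, far-field objects are pre-rendered into an image array. The distance threshold and the object representation are illustrative assumptions.

```python
import math

def split_near_far(camera_pos, objects, near_radius):
    # Partition scene objects into a near-field set (rendered as live 3D)
    # and a far-field set (pre-rendered into an array of images), based on
    # Euclidean distance from the viewpoint.
    near, far = [], []
    for name, pos in objects:
        d = math.dist(camera_pos, pos)
        (near if d <= near_radius else far).append(name)
    return near, far
```

In a large-scale environment this split lets the renderer spend its polygon budget on nearby detail while the distant city skyline costs only a texture lookup.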
  • Publication number: 20120194547
    Abstract: An approach is provided for generating a perspective display. A display manager receives a request to generate a perspective display of one or more items of a location-based user interface, the request specifying first location information associated with a viewing location. The display manager determines to define a surface with respect to the first location information, wherein the surface is divided into an array of cells; receives an input, from the device, for selecting a group of the points of interest on the mapping display; and captures an image of the mapping display based on the input. The display manager then processes and/or facilitates a processing of second location information associated with the one or more items to map one or more representations of the one or more items onto one or more of the cells.
    Type: Application
    Filed: March 17, 2011
    Publication date: August 2, 2012
    Applicant: Nokia Corporation
    Inventors: Matthew Johnson, Mark Fulks, Venkata Ayyagari, Kenneth Walker, Jerry Drake, Srikanth Challa, Christophe Marle, Rav Singh
  • Patent number: 8233530
    Abstract: A method, system and computer program provide a mechanism for smoothing the transition back from a virtual (computer generated) scene to a related video stream. An event such as a user input or timeout is received triggering a return to display of the video stream from a virtual scene related to content of the video stream. A number of time points and/or camera angles are either presented to the user or are automatically searched for the best match. The list may be presented in order according to an automatically detected matching criterion. The virtual scene may be a scene constructed locally within a computer or digital video recorder (DVR) and the matching performed locally based on angle and time information provided from a content provider such as a server, or the virtual scene generation and matching may be performed at a remote location such as the content server.
    Type: Grant
    Filed: October 28, 2008
    Date of Patent: July 31, 2012
    Assignee: International Business Machines Corporation
    Inventors: Jeffrey D. Amsterdam, Gregory J. Boss, Rick A. Hamilton, II, Kulvir S. Bhogal, Brian M. O'Connell, Keith R. Walker
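The automatic best-match search over candidate time points and camera angles can be sketched as a scoring function. The particular score, a weighted sum of time offset and angular difference, is an assumption; the patent leaves the matching criteria open.

```python
def best_return_point(virtual_time, virtual_angle, candidates):
    # Rank candidate (time_seconds, camera_angle_degrees) return points in
    # the video stream by closeness to the virtual scene's current time and
    # view angle; the 90-degrees-per-second weighting is illustrative.
    def score(cand):
        t, a = cand
        ang = abs((a - virtual_angle + 180) % 360 - 180)  # shortest angular gap
        return abs(t - virtual_time) + ang / 90.0
    return min(candidates, key=score)
```

Sorting all candidates by this score would produce the ordered list the abstract describes presenting to the user.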
  • Patent number: 8228325
    Abstract: The present invention is directed to a method of integrating information, including real-time information, into a virtual thematic environment using a computer system, including accessing the stored information from a database or downloading the real-time information from a source external to the thematic environment; inserting the real-time information into the thematic environment; and displaying the information to a user within the thematic environment. In one embodiment, the computer system is connected to a holographic projection system such that the images from the thematic environment can be projected as holographic projections.
    Type: Grant
    Filed: March 12, 2008
    Date of Patent: July 24, 2012
    Inventor: Frances Barbaro Altieri
  • Publication number: 20120154438
    Abstract: Systems and methods of interacting with a virtual space, in which a mobile device is used to electronically capture image data of a real-world object, the image data is used to identify information related to the real-world object, and the information is used to interact with software to control at least one of: (a) an aspect of an electronic game; and (b) a second device local to the mobile device. Contemplated systems and methods can be used for gaming, in which the image data can be used to identify a name of the real-world object, to classify the real-world object, to identify the real-world object as a player in the game, or to identify the real-world object as a goal object or as having some other value in the game.
    Type: Application
    Filed: February 28, 2012
    Publication date: June 21, 2012
    Applicant: NANT HOLDINGS IP, LLC
    Inventor: Ronald H. Cohen
  • Publication number: 20120147038
    Abstract: A method for overlaying first and second images in a common focal plane of a viewer comprises forming the first image and guiding the first and second images along an axis to a pupil of the viewer. The method further comprises adjustably diverging the first and second images at an adaptive diverging optic to bring the first image into focus at the common focal plane, and, adjustably converging the second image at an adaptive converging optic to bring the second image into focus at the common focal plane.
    Type: Application
    Filed: December 8, 2010
    Publication date: June 14, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Kathryn Stone Perez, Alex Aben-Athar Kipman, Andrew Fuller, Philip Greenhalgh, David Hayes, John Tardif
  • Patent number: 8189864
    Abstract: A plurality of items of shot image data obtained by temporally continuous shooting are analyzed. Marking data indicating that replaced graphic data is to be combined is added to image data corresponding to an actor and the resulting data is displayed. When a preset gesture (motion) is detected, marking data indicating that replaced graphic data is to be combined is added to image data corresponding to another actor and the resulting data is displayed. After shooting, the individual items of image data to which marking data have been added are replaced with respective replaced graphic data. Replaced graphic data are created as moving images which capture the motions of the actors.
    Type: Grant
    Filed: August 28, 2008
    Date of Patent: May 29, 2012
    Assignee: Casio Computer Co., Ltd.
    Inventor: Takashi Kojo
  • Publication number: 20120120100
    Abstract: In one aspect, the system and method provides a modified image in response to a request for a street level image at a particular location, wherein the previously captured image is modified to illustrate the current conditions at the requested location. By way of example only, the system and method may use local weather, time of day, traffic or other information to update street level images.
    Type: Application
    Filed: January 25, 2012
    Publication date: May 17, 2012
    Inventor: Stephen Chau
  • Patent number: 8174541
    Abstract: The invention provides a method and system for three-dimensional virtual world pattern positioning. The method includes creating a three-dimensional pattern for a virtual world environment, sub-dividing the pattern into a plurality of sub-divisions each having a vector relative to a center of the pattern, creating a transform including a description of the pattern and shape information for each sub-division, creating a portion of a virtual world environment by positioning the pattern and sub-divisions, and storing the transform for reusing the pattern and sub-divisions in another virtual world environment.
    Type: Grant
    Filed: January 19, 2009
    Date of Patent: May 8, 2012
    Assignee: International Business Machines Corporation
    Inventors: Richard Greene, Conor P. Beverland, Florence Hirondel, Ailun Yi, Tim Kock
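The transform this abstract describes, a reusable record of a pattern's sub-divisions as vectors relative to its center, can be sketched directly. The dictionary layout and 3-tuple positions are illustrative assumptions.

```python
def make_transform(pattern_id, center, subdivisions):
    # Store each sub-division as a vector relative to the pattern's center,
    # so the whole pattern can later be re-positioned in another world.
    return {
        "pattern": pattern_id,
        "vectors": [tuple(s[i] - center[i] for i in range(3))
                    for s in subdivisions],
    }

def place_pattern(transform, new_center):
    # Reuse a stored transform: rebuild absolute sub-division positions
    # around a new center point in another virtual world environment.
    return [tuple(new_center[i] + v[i] for i in range(3))
            for v in transform["vectors"]]
```

Because only relative vectors are stored, the same transform reproduces the pattern anywhere it is placed.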
  • Publication number: 20120092367
    Abstract: A real world image captured by a real camera such as an outside right imaging unit 23b is acquired, a synthesized image is generated by synthesizing the acquired real world image and a virtual image depicting a first virtual object such as an enemy object EO, in such a manner that the first virtual object such as an enemy object EO appears to be present behind the real world image, and the synthesized image thus generated is displayed on a display device.
    Type: Application
    Filed: September 13, 2011
    Publication date: April 19, 2012
    Applicants: HAL LABORATORY, INC., NINTENDO CO., LTD.
    Inventors: Toshiaki SUZUKI, Shigefumi KAWASE
  • Patent number: 8154548
    Abstract: A measured value of a physical quantity measured using a measuring device within a predetermined region on a real space, and a position where the measuring device performs measurement are acquired (S1001 to S1003). An analytic value of the physical quantity at that position in the predetermined region is calculated (S1004). A first object representing the measured value and a second object representing the analytic value are arranged at a place corresponding to this position on a virtual space having a coordinate system which matches the coordinate system of the real space (S1005, S1006). An image which is seen upon viewing the virtual space on which the first and second objects are arranged from a viewpoint is generated, and the generated image is output (S1008, S1009).
    Type: Grant
    Filed: September 26, 2006
    Date of Patent: April 10, 2012
    Assignee: Canon Kabushiki Kaisha
    Inventor: Tsuyoshi Kuroki
  • Publication number: 20120069050
    Abstract: According to an embodiment of the present invention, a method for detecting an object includes receiving, on a transparent display, object selection information, determining an eye direction associated with the received object selection information, selecting at least one object displayed within a region of the transparent display defined by the received object selection information based on the determined eye direction, acquiring information on the selected object, and displaying the acquired information on the transparent display.
    Type: Application
    Filed: September 14, 2011
    Publication date: March 22, 2012
    Inventors: Heeyeon PARK, Yeonjoo Joo, Sunju Park
  • Publication number: 20120062594
    Abstract: Aspects of the present invention relate to methods and systems for capturing, sharing and recording the information on a collaborative writing surface. According to a first aspect of the present invention, currently persistent collaborative-writing-surface content may be imaged during periods of occlusion by an occluder. According to a second aspect of the present invention, the occluder may be imaged as transparent allowing the visibility of the occluded currently persistent collaborative-writing-surface content. According to a third aspect of the present invention, the occluder may be imaged as a silhouette allowing visibility of the occluded currently persistent collaborative-writing-surface content.
    Type: Application
    Filed: September 15, 2010
    Publication date: March 15, 2012
    Inventor: Richard John Campbell
  • Patent number: 8130242
    Abstract: Systems and methods of interacting with a virtual space, in which a mobile device is used to electronically capture image data of a real-world object, the image data is used to identify information related to the real-world object, and the information is used to interact with software to control at least one of: (a) an aspect of an electronic game; and (b) a second device local to the mobile device. Contemplated systems and methods can be used for gaming, in which the image data can be used to identify a name of the real-world object, to classify the real-world object, to identify the real-world object as a player in the game, or to identify the real-world object as a goal object or as having some other value in the game.
    Type: Grant
    Filed: August 25, 2006
    Date of Patent: March 6, 2012
    Assignee: Nant Holdings IP, LLC
    Inventor: Ronald H. Cohen
  • Patent number: 8130243
    Abstract: A degree of overlapping between separately obtained regions of respective parts of an image is determined. When the degree of overlapping is greater than a defined value, only one of the obtained regions, or both the regions, or a region including both the regions is selected. When the degree of overlapping is less than the defined value, the separately obtained regions of respective parts of the image are separately selected. The image is displayed while reflecting a result of the selection.
    Type: Grant
    Filed: July 2, 2008
    Date of Patent: March 6, 2012
    Assignee: Canon Kabushiki Kaisha
    Inventors: Koji Sudo, Chiyumi Niwa, Nobukazu Yoshida
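The overlap test and the merge-or-keep-separate decision in this abstract can be sketched for axis-aligned rectangular regions. Measuring overlap as intersection area over the smaller region's area, and the 0.5 threshold, are assumptions; the claim only requires comparison against a defined value.

```python
def overlap_ratio(a, b):
    # Degree of overlapping between two axis-aligned regions given as
    # (x1, y1, x2, y2): intersection area over the smaller region's area.
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return (ix * iy) / min(area(a), area(b))

def select_regions(a, b, threshold=0.5):
    # Above the threshold, select one region covering both; below it,
    # keep the two separately obtained regions separate.
    if overlap_ratio(a, b) > threshold:
        return [(min(a[0], b[0]), min(a[1], b[1]),
                 max(a[2], b[2]), max(a[3], b[3]))]
    return [a, b]
```

The display step would then draw whichever region list this selection produces.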
  • Publication number: 20120050323
    Abstract: A solution for managing a videoconference is provided. Multiple virtual backgrounds can be stored, and a virtual background can be selected to be used for a first participant when he/she is conducting a videoconference with a second participant. The virtual background can be selected based on one or more attributes of the first and/or second participant, one or more attributes of the videoconference, and/or the like. The virtual backgrounds can be utilized, for example, to provide a desired perception, message, and/or the like, of a business entity to individuals outside of the business entity that are interacting with its personnel via videoconferencing.
    Type: Application
    Filed: August 25, 2010
    Publication date: March 1, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Rudolph C. Baron, JR., Andrew R. Jones, Michael L. Massimi, Kevin C. McConnell
  • Publication number: 20120038667
    Abstract: Embodiments of the invention generally relate to replicating changes between corresponding real objects and virtual objects in a virtual world. Embodiments of the invention may include receiving a request to generate a virtual item in a virtual world based on a real-world object, generating the virtual item, synchronizing the virtual item and real-world object, and sharing the virtual item with a second avatar in the virtual world.
    Type: Application
    Filed: August 11, 2010
    Publication date: February 16, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael J. Branson, Gregory R. Hintermeister
  • Patent number: 8103125
    Abstract: In an embodiment, a request is received that includes a specification of a static location and a dynamic location. A static image is created that includes a map that represents an area centered around the static location. A dynamic image is created asynchronously from the creation of the static image. An amalgamated image is generated that includes the static image and the dynamic image, which is over a portion of the static image. In this way, spatial data may be drawn in a manner that increases performance.
    Type: Grant
    Filed: March 13, 2007
    Date of Patent: January 24, 2012
    Assignee: International Business Machines Corporation
    Inventor: Maykel Martin
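The amalgamation step, overlaying the asynchronously created dynamic image onto a portion of the static map image, reduces to a paste operation. Representing images as row-major lists of pixel values is an assumption made to keep the sketch self-contained.

```python
def amalgamate(static_img, dynamic_img, offset):
    # Generate an amalgamated image: the dynamic image is placed over a
    # portion of the static image at offset = (row, col).  The static
    # image is copied so the cached map can be reused for other requests.
    out = [row[:] for row in static_img]
    r0, c0 = offset
    for r, row in enumerate(dynamic_img):
        for c, px in enumerate(row):
            out[r0 + r][c0 + c] = px
    return out
```

Because the static map and the dynamic layer are produced independently, only the small dynamic image needs regenerating when the underlying spatial data changes.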
  • Patent number: 8099462
    Abstract: To allow for greater interactivity in video chat environments, displaying multiple effects from a sender of a first electronic device to a chat client of a second electronic device on a video chat region starts with the sender selecting an effect. If the selected effect has a predetermined mapping interactive effect, the interactive effect is displayed on the chat client. Then, the sender applies the effect or the interactive effect to an application region of the video chat region of the chat client. The application region is a partial or an entire region of the video chat region.
    Type: Grant
    Filed: April 28, 2008
    Date of Patent: January 17, 2012
    Assignee: CyberLink Corp.
    Inventors: Dueg-Uei Sheng, Teng-Yuan Hsiao
  • Patent number: 8085990
    Abstract: The claimed subject matter relates to a computer-implemented architecture that can generate a map. The map can be a hybrid between an orthographic projection map and street-side images, thus including useful aspects from both types of representations. For example, an orthographic projection map is very effective at presenting global relationships among the features of the map but not effective at presenting local detail. In contrast, street-side images show excellent detail but do not convey the global information of an orthographic projection map. The hybrid map can thus provide a richer set of information than conventional maps and can also display objects/features of the hybrid map in multiple perspectives simultaneously on a single representation that is printable.
    Type: Grant
    Filed: October 8, 2010
    Date of Patent: December 27, 2011
    Assignee: Microsoft Corporation
    Inventor: Eyal Ofek
  • Patent number: 8085267
    Abstract: According to embodiments of the invention, rays may be stochastically culled before they are issued into the three-dimensional scene. Stochastically culling rays may reduce the number of rays which need to be traced by the image processing system. Furthermore, by stochastically culling rays before they are issued into the three-dimensional scene, minor imperfections may be added to the final rendered image, thereby improving the realism of the rendered image. Therefore, stochastic culling of rays may improve the performance of the image processing system by reducing workload imposed on the image processing system and improving the realism of the images rendered by the image processing system. According to another embodiment of the invention, the realism of images rendered by the image processing system may also be improved by stochastically adding secondary rays after ray-primitive intersections have occurred.
    Type: Grant
    Filed: January 30, 2007
    Date of Patent: December 27, 2011
    Assignee: International Business Machines Corporation
    Inventors: Jeffrey Douglas Brown, Russell Dean Hoover, Eric Oliver Mejdrich
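The pre-issue culling this abstract describes can be sketched in a few lines: each candidate ray is dropped with a small probability before being traced. The 5% default and the callable `trace` interface are illustrative assumptions.

```python
import random

def trace_with_culling(rays, trace, cull_prob=0.05, rng=random):
    # Stochastically cull a small fraction of rays before they are issued
    # into the scene.  Culled rays are never traced, reducing workload,
    # and the resulting minor omissions read as natural imperfection in
    # the final rendered image.
    results = []
    for ray in rays:
        if rng.random() < cull_prob:
            continue  # culled: this ray is never issued
        results.append(trace(ray))
    return results
```

The complementary idea in the abstract, stochastically *adding* secondary rays after intersections, would use the same probabilistic gate in the opposite direction.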
  • Publication number: 20110304646
    Abstract: A first image processing apparatus displays markers on a monitor, thereby causing a second image processing apparatus to perform display control of a second object on a captured image shown on an LCD. When that display control is performed based on the markers, the second image processing apparatus transmits a marker recognizing signal, thereby causing the first image processing apparatus to perform display control of a first object on the monitor.
    Type: Application
    Filed: August 27, 2010
    Publication date: December 15, 2011
    Applicant: NINTENDO CO., LTD.
    Inventor: Shunsaku KATO
  • Publication number: 20110298823
    Abstract: An information processing section of a game apparatus executes a program including: a step of obtaining an image captured by an outer camera; a step of calculating, when detection of a marker is completed, a position and an orientation of a virtual camera based on a result of the marker detection; a step of obtaining hand-drawn data; a step of capturing, with the virtual camera, a fundamental polygon to which a texture is applied to generate a hand-drawn image, and displaying, on an upper LCD, an image in which the hand-drawn image is superimposed on the camera image; and a step of displaying a hand-drawn input image on a lower LCD.
    Type: Application
    Filed: August 24, 2010
    Publication date: December 8, 2011
    Applicant: NINTENDO CO., LTD.
    Inventor: Shinji KITAHARA
  • Patent number: 8072470
    Abstract: An invention is provided for affording a real-time three-dimensional interactive environment using a three-dimensional camera. The invention includes obtaining two-dimensional data values for a plurality of pixels representing a physical scene, and obtaining a depth value for each pixel of the plurality of pixels using a depth sensing device. Each depth value indicates a distance from a physical object in the physical scene to the depth sensing device. At least one computer-generated virtual object is inserted into the scene, and an interaction between a physical object in the scene and the virtual object is detected based on coordinates of the virtual object and the obtained depth values.
    Type: Grant
    Filed: May 29, 2003
    Date of Patent: December 6, 2011
    Assignee: Sony Computer Entertainment Inc.
    Inventor: Richard Marks
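The detection step, comparing the virtual object's coordinates against the per-pixel depth values from the depth sensing device, can be sketched as below. The dictionary-based depth map and the touch threshold are assumptions for illustration.

```python
def detect_interaction(depth_map, virtual_obj, threshold=0.05):
    # depth_map: {(x, y): measured distance from the depth sensor to the
    #             physical scene at that pixel}.
    # virtual_obj: {(x, y): depth at which the inserted virtual object is
    #               drawn at that pixel}.
    # A physical surface at or closer than the virtual object's depth at
    # any shared pixel counts as an interaction (e.g. a hand touching it).
    for xy, obj_depth in virtual_obj.items():
        scene_depth = depth_map.get(xy)
        if scene_depth is not None and scene_depth <= obj_depth + threshold:
            return True
    return False
```

Because the test uses measured depth rather than 2D overlap alone, a hand passing in front of the virtual object at a different distance does not trigger a false interaction.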
  • Publication number: 20110292076
    Abstract: An apparatus for enabling provision of a localized virtual reality environment may include at least one processor and at least one memory including computer program code. The at least one memory and the computer program code may be configured, with the processor, to cause the apparatus to perform at least receiving information indicative of a current location of a mobile terminal, receiving information indicative of an orientation of the mobile terminal with respect to the current location, causing a stored image including a panoramic view of the current location to be displayed at the mobile terminal based on the orientation, and enabling provision of a virtual object on the panoramic view. A corresponding method and computer program product are also provided.
    Type: Application
    Filed: May 28, 2010
    Publication date: December 1, 2011
    Inventors: Jason Robert Wither, Ronald Azuma
  • Publication number: 20110292078
    Abstract: A handheld display device for displaying an image of a physical page relative to which the device is positioned. The device includes: an image sensor for capturing an image of the physical page; a transceiver for receiving a page description corresponding to a page identity of the physical page; and a processor configured for: rendering a page image based on the received page description; estimating a first pose of the device relative to the physical page; estimating a second pose of the device relative to a user's viewpoint; and determining a projected page image using the rendered page image, the first pose and the second pose; and a display screen for displaying the projected page image. The display screen provides a virtual transparent viewport onto the physical page irrespective of a position and orientation of said device relative to said physical page.
    Type: Application
    Filed: March 18, 2011
    Publication date: December 1, 2011
    Inventors: Paul Lapstun, Kia Silverbrook, Robert Dugald Gates
  • Publication number: 20110292079
    Abstract: A parking assist apparatus includes a display portion mounted at a vehicle and displaying a parking assist image in which an estimated course line generated in association with an operation of a steering wheel is superimposed on a captured image of a surrounding of the vehicle, and an auxiliary image controlling portion displaying an auxiliary image in place of the parking assist image in a state where the parking assist image is displayed on the display portion, the auxiliary image including a virtual vehicle representing the vehicle and a virtual estimated course line corresponding to the estimated course line for the virtual vehicle.
    Type: Application
    Filed: March 28, 2011
    Publication date: December 1, 2011
    Applicant: AISIN SEIKI KABUSHIKI KAISHA
    Inventors: Hiroyasu HOSOI, Noboru NAGAMINE, Koichiro HONDA, Keigo IKEDA
  • Publication number: 20110292077
    Abstract: A method of displaying an image of a physical page relative to which a handheld display device is positioned. The method includes the steps of: capturing an image of the physical page using an image sensor of the device; determining a page identity for the physical page; retrieving a page description corresponding to the page identity; rendering a page image based on the retrieved page description; estimating a first pose of the device relative to the physical page; estimating a second pose of the device relative to a user's viewpoint; determining a projected page image for display by the device; and displaying said projected page image on a display screen of said device. The display screen provides a virtual transparent viewport onto the physical page irrespective of a position and orientation of the device relative to the physical page.
    Type: Application
    Filed: March 18, 2011
    Publication date: December 1, 2011
    Inventors: Paul Lapstun, Kia Silverbrook, Robert Dugald Gates
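The two Lapstun et al. publications above both determine a projected page image by chaining two estimated poses: device-relative-to-page and device-relative-to-viewpoint. A minimal sketch of that chaining, modelling each pose as a 3x3 homogeneous 2D transform (the function names and the use of simple translations are illustrative assumptions, not from the publications):

```python
def matmul3(a, b):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translation(tx, ty):
    """3x3 homogeneous 2D translation."""
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def apply3(m, point):
    """Apply a 3x3 homogeneous transform to a 2D point."""
    px, py = point
    vec = (px, py, 1.0)
    x, y, w = (sum(m[i][j] * vec[j] for j in range(3)) for i in range(3))
    return (x / w, y / w)

def project_page(pose_page_to_device, pose_device_to_view, point):
    """Chain both estimated poses to map a page-space point into the
    user's view, yielding the 'virtual transparent viewport' effect."""
    return apply3(matmul3(pose_device_to_view, pose_page_to_device), point)
```

Re-estimating both poses every frame is what keeps the viewport stable irrespective of how the device moves relative to the page.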
  • Patent number: 8063915
    Abstract: A 3D surface wound, injury, and personal protective equipment (PPE) data entry system provides an easily usable graphical user interface through which an examiner can objectively record data relating to surface wounds and injuries sustained by a subject human, as well as PPE used when the wounds/injuries were sustained. The system includes a 3D human model onto which the examiner draws the surface wound(s) and/or damage to the PPE. The subject human's record is stored in a database of similar records. The database records comprise quantifiable, objective data that is easily compared and analyzed. An analysis tool can aggregate a selected population of human subjects within the database to create wound density information that can be statistically analyzed and/or displayed on a standard 3D human model. Such objective wound density information may facilitate improved medical and/or tactical training, and improved PPE design.
    Type: Grant
    Filed: June 1, 2007
    Date of Patent: November 22, 2011
    Assignee: Simquest LLC
    Inventors: Howard Champion, Paul Sherman, Mary M. Lawnick, Paul M. Cashman, Harald Scheirich, Timothy Patrick Kelliher
  • Patent number: 8045825
    Abstract: A left-eye color discrimination unit (1101) and right-eye color discrimination unit (1102) generate mask images from virtual space images. If an error part exists at the boundary between a chroma key region and non-chroma key region in the mask image, each of a left-eye mask correction unit (1108) and right-eye mask correction unit (1110) corrects the error part using another mask image generated based on the other virtual space image in addition to the virtual space image.
    Type: Grant
    Filed: April 24, 2008
    Date of Patent: October 25, 2011
    Assignee: Canon Kabushiki Kaisha
    Inventors: Tomohiko Shimoyama, Takuya Tsujimoto, Tomohiko Takayama
  • Patent number: 8033914
Abstract: A game apparatus includes an LCD and a touch panel provided in relation to the LCD. The LCD displays a game screen for making a player character hit a ball. For example, when touch-on is performed on the LCD (touch panel), the stance and shot power of the player character are decided according to coordinates of the touch-on position. With this, the path of the ball is decided to be a straight ball, draw ball or fade ball, and the carry of the ball is decided according to the shot power. Following the touch-on, when a slide operation is performed, an impact is decided according to the slide operation. For example, the path of the ball is changed by the decided impact.
    Type: Grant
    Filed: September 28, 2005
    Date of Patent: October 11, 2011
    Assignee: Nintendo Co., Ltd.
    Inventors: Yasuo Yoshikawa, Takahiro Harada, Toyokazu Nonaka
  • Patent number: 8031210
Abstract: A method of rendering a computer generated 3D scene integrated with a base image, the method comprising loading a base image, such as a photograph, and a computer generated 3D scene or model. The base image is displayed on a monitor, and in one embodiment the calibration of the camera which generated the base image is determined. The 3D model is rendered as an overlay of the base image responsive thereto. In another embodiment, the camera calibration of the 3D scene is made consonant with the base image by selecting corresponding points. The base image is then additionally displayed at a predetermined transparency as an overlay of the 3D model. A user then selects pixels of the base image overlay for placement in the foreground. The selected pixels are then displayed without transparency and the balance of the base image overlay is removed, rendering the integrated image.
    Type: Grant
    Filed: September 30, 2007
    Date of Patent: October 4, 2011
    Assignee: RDV Systems Ltd.
    Inventors: Nathan Elsberg, Alex Hazanov
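The compositing step described in the abstract above (base image at a fixed transparency over the 3D render, with user-selected pixels kept opaque in the foreground) can be sketched per pixel as follows; the function name and RGB-tuple representation are assumptions for illustration:

```python
def composite(base_px, render_px, alpha, foreground):
    """Blend a base-image pixel over the 3D-render pixel.

    base_px, render_px -- RGB tuples
    alpha              -- transparency of the base-image overlay (0..1)
    foreground         -- True if the user selected this pixel as foreground,
                          in which case it is shown fully opaque
    """
    if foreground:
        return base_px
    return tuple(round(alpha * b + (1 - alpha) * r)
                 for b, r in zip(base_px, render_px))
```

Running this over every pixel, then dropping the semi-transparent balance of the overlay, yields the integrated image.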
  • Publication number: 20110234631
    Abstract: Apparatuses and techniques relating to an augmented reality (AR) device are provided. The device for augmenting a real-world image includes a light source information generating unit that generates light source information for a real-world image captured by a real-world image capturing device based on the location, the time, and the date the real-world image was captured. The light source information includes information on the position of a real-world light source for the real-world image. The device further includes a shadow image registration unit that receives the light source information generated from the light source information generating unit. The shadow image registration unit generates a shadow image of a virtual object overlaid onto the real-world image based on the light source information generated from the light source information generating unit.
    Type: Application
    Filed: March 25, 2010
    Publication date: September 29, 2011
    Applicant: BIZMODELINE CO., LTD.
    Inventors: Jae-Hyung KIM, Jong-Cheol HONG, Jong-Min YOON, Ho-Jong JUNG
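The light-source information in the abstract above amounts to a sun position derived from location, time, and date; the shadow of a virtual object is then cast accordingly. A minimal sketch of the geometry, assuming the sun position is already known as elevation and azimuth (the conversion from location/date/time to sun angles is a separate, standard calculation not shown here):

```python
import math

def shadow_offset(obj_height, sun_elevation_deg, sun_azimuth_deg):
    """Ground-plane shadow vector cast by a virtual object of given height.

    The shadow points away from the sun's azimuth, and its length grows
    as the sun drops toward the horizon (height / tan(elevation)).
    Returns an (east, north) offset from the object's base.
    """
    length = obj_height / math.tan(math.radians(sun_elevation_deg))
    away = math.radians(sun_azimuth_deg + 180.0)
    return (length * math.sin(away), length * math.cos(away))
```

Registering a shadow quad with this offset under the overlaid virtual object is what makes it appear lit consistently with the real-world scene.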
  • Patent number: 8026931
    Abstract: Digital video effects are described. In one aspect, a foreground object in a video stream is identified. The video stream comprises multiple image frames. The foreground object is modified by rendering a 3-dimensional (3-D) visual feature over the foreground object for presentation to a user in a modified video stream. Pose of the foreground object is tracked in 3-D space across respective ones of the image frames to identify when the foreground object changes position in respective ones of the image frames. Based on this pose tracking, aspect ratio of the 3-D visual feature is adaptively modified and rendered over the foreground object in corresponding image frames for presentation to the user in the modified video stream.
    Type: Grant
    Filed: August 28, 2006
    Date of Patent: September 27, 2011
    Assignee: Microsoft Corporation
    Inventors: Jian Sun, Qiang Wang, Weiwei Zhang, Xiaoou Tang, Heung-Yeung Shum
  • Publication number: 20110227944
    Abstract: A vehicle display system displays enhanced vision (EV) and captured images, for example synthetic vision (SV) images, to an operator of a vehicle. The display system includes an EV vision system for generating EV images, an SV database containing information regarding terrain and objects of interest for a travel path of a vehicle, an SV system for generating SV images based on travel of the vehicle and information from the SV database, a processor for filtering the EV images and merging the filtered EV image with the SV image, and a display for displaying the merged SV and filtered EV images.
    Type: Application
    Filed: March 16, 2010
    Publication date: September 22, 2011
    Applicant: HONEYWELL INTERNATIONAL INC.
    Inventors: Thea L. Feyereisen, John G. Suddreth, Troy Nichols
  • Patent number: 8022967
Abstract: An image processing method includes the steps of acquiring an image of a physical space, acquiring a position and orientation of a viewpoint of the image, generating an image of a virtual object, detecting an area which consists of pixels each having a predetermined pixel value, and superimposing the image of the virtual object on the image of the physical space. The superimposition step includes calculating a distance between a position of the virtual object and a position of the viewpoint, acquiring an instruction indicating whether or not the virtual object is emphasis-displayed, and setting a flag indicating whether or not the image of the virtual object is to be set as a masked target. The masking process is applied to the image of the virtual object superimposed on the image of the physical space depending on whether the image of the virtual object is set as the masked target.
    Type: Grant
    Filed: June 1, 2005
    Date of Patent: September 20, 2011
    Assignee: Canon Kabushiki Kaisha
    Inventors: Yasuhiro Okuno, Toshikazu Ohshima, Kaname Tanimura
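The flag-setting logic in the abstract above combines an emphasis instruction with the viewpoint-to-object distance. A minimal sketch of one plausible decision rule, assuming the threshold value and the policy (emphasized or near objects are drawn unmasked) purely for illustration, as the abstract does not specify them:

```python
import math

def should_mask(obj_pos, viewpoint, emphasized, threshold=2.0):
    """Decide whether the virtual object's image is set as a masked target.

    obj_pos, viewpoint -- 3D positions as (x, y, z) tuples
    emphasized         -- True if the object is emphasis-displayed
    threshold          -- hypothetical distance below which masking is skipped
    """
    dist = math.dist(obj_pos, viewpoint)
    return (not emphasized) and dist >= threshold
```

When the flag is set, the detected chroma-key area masks the virtual object so real foreground objects (e.g. the user's hands) stay visible in front of it.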
  • Patent number: 8018471
    Abstract: Various technologies and techniques are disclosed that merge components on a design surface. The system receives input from a user to add components or clauses to a design surface and analyzes the components to determine if some of the components can be merged. If the system identifies components that can be merged, then the system merges the identified components to reduce the number of components present on the design surface. The system determines that some components can be merged if the components meet the same criteria, such as having components that are equivalent and that have the same number of incoming paths or the same number of outgoing paths. The system provides a visual indicator on the design surface to indicate that components are being merged. The system provides an undo feature to allow the user to undo the component merging when desired.
    Type: Grant
    Filed: May 15, 2006
    Date of Patent: September 13, 2011
    Assignee: Microsoft Corporation
    Inventors: Nagalinga Durga Prasad Sripathi Panditharadhya, John Edward Churchill, Udaya Kumar Bhaskara
  • Patent number: 8009899
    Abstract: Image filling methods. A plurality of images corresponding to a target object or a scene are captured at various angles. An epipolar geometry relationship between a filling source image and a specific image within the images is calculated. The filling source image and the specific image are rectified according to the epipolar geometry relationship. At least one filling target area in the rectified specific image is patched according to the rectified filling source image.
    Type: Grant
    Filed: June 22, 2007
    Date of Patent: August 30, 2011
    Assignee: Industrial Technology Research Institute
    Inventors: Chia-Chen Chen, Cheng-Yuan Tang, Yi-Leh Wu, Chi-Tsung Liu
  • Patent number: RE43216
Abstract: A smooth, stable, and high-quality game image is provided by accurately pre-reading the background data required for each round of image processing. The game device reads, into main memory from a CD-ROM (recording medium) prior to image processing, the background data required for a game that displays a moving vehicle within a virtual three-dimensional space together with a background. The device comprises a pre-reading unit for pre-reading background data from the recording medium when a start line (reference line), set a specified distance beyond the limit line of the display's visual field direction, crosses into a new area. The recording medium records the background data divided into a plurality of areas in advance, and the pre-reading unit comprises a judging unit for determining which of the areas the reference line is crossing and a reading unit for reading into memory the background data of the area judged as being crossed by the reference line.
    Type: Grant
    Filed: February 7, 2008
    Date of Patent: February 28, 2012
    Assignee: Kabushiki Kaisha Sega
    Inventor: Masaaki Ito
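The pre-reading scheme in the abstract above can be sketched as follows: probe a reference point a fixed lead distance beyond the visible limit along the view direction, map it to a grid area, and load that area's background data if it is new. A minimal sketch under the assumption of a 2D square-cell grid and illustrative function names:

```python
def area_of(point, cell_size):
    """Grid cell (area index) containing a world-space point."""
    return (int(point[0] // cell_size), int(point[1] // cell_size))

def prefetch_step(cam_pos, view_dir, view_limit, lead, cell_size,
                  loaded, load_fn):
    """Probe the reference point set `lead` units beyond the visible limit
    and pre-read that area's background data if not already loaded.

    cam_pos, view_dir -- 2D camera position and unit view direction
    view_limit        -- distance to the limit line of the visual field
    loaded            -- set of area indices already in memory
    load_fn           -- callback that reads one area from the medium
    """
    ref = (cam_pos[0] + view_dir[0] * (view_limit + lead),
           cam_pos[1] + view_dir[1] * (view_limit + lead))
    area = area_of(ref, cell_size)
    if area not in loaded:
        loaded.add(area)
        load_fn(area)
    return area
```

Calling this once per frame means each area's data is already resident by the time the vehicle's visual field reaches it, avoiding mid-frame reads from the slow medium.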