Placing Generated Data In Real Scene Patents (Class 345/632)
  • Patent number: 9001152
    Abstract: An information processing apparatus including an imaged image input unit inputting an imaged image of a facility imaged in an imaging device to a display control unit, a measurement information input unit inputting measurement information measured by a sensor provided in the facility from the sensor to a creation unit, a creation unit creating a virtual image representing a status of an outside or inside of the facility based on the measurement information input by the measurement information input unit, and a display control unit overlaying and displaying the virtual image created in the creation unit and the imaged image input by the imaged image input unit on a display device.
    Type: Grant
    Filed: July 6, 2012
    Date of Patent: April 7, 2015
    Assignee: NS Solutions Corporation
    Inventors: Noboru Ihara, Kazuhiro Sasao, Masaru Yokoyama, Arata Sakurai, Ricardo Musashi Okamoto
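The display step in entry 9001152 above amounts to compositing the sensor-derived virtual image onto the camera frame. A minimal sketch, assuming same-sized HxWx3 uint8 arrays and a uniform alpha blend; the blend rule is an illustrative assumption, not the patent's specified method:

```python
import numpy as np

def overlay_display(imaged_image, virtual_image, alpha=0.5):
    """Blend the virtual image created from the facility's sensor
    measurements over the camera image for display.

    Both inputs are assumed to be same-sized HxWx3 uint8 arrays; the
    uniform alpha blend is an illustrative assumption.
    """
    mixed = alpha * virtual_image.astype(float) + (1.0 - alpha) * imaged_image.astype(float)
    return mixed.astype(np.uint8)
```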
  • Patent number: 8994751
    Abstract: A method, system and computer program product for placing an image of an object on an image of a user is provided. First, image boundaries are detected in the image of the user and converted into a set of line segments. A pair of line segments is evaluated according to a function that combines subscores of the pair of line segments to produce a score. The subscores of the line segments are computed based on various properties such as orientation difference, extent, proximity to the center of the image, bilateral symmetry, and the number of skin-colored pixels. A pair of line segments with the highest score is chosen as the boundaries for the image of the user and is used to determine the position, orientation, and extent of the object. The image of the object is then transformed according to the determined parameters and combined with the image of the user to produce the desired result.
    Type: Grant
    Filed: January 18, 2013
    Date of Patent: March 31, 2015
    Assignee: A9.com, Inc.
    Inventors: Mark A. Ruzon, Dmitriy Shirchenko
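The scoring step in 8994751 above combines several subscores per pair of line segments and keeps the best-scoring pair. A minimal sketch with hypothetical subscores (orientation similarity, extent, proximity to the image center) and assumed weights; the patent's actual subscore formulas and weighting may differ:

```python
import math
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Segment:
    x1: float
    y1: float
    x2: float
    y2: float

    def angle(self) -> float:
        return math.atan2(self.y2 - self.y1, self.x2 - self.x1)

    def length(self) -> float:
        return math.hypot(self.x2 - self.x1, self.y2 - self.y1)

def pair_score(a: Segment, b: Segment, image_center, weights=(1.0, 0.01, 1.0)) -> float:
    """Combine illustrative subscores for one candidate pair of boundary
    segments; the formulas and weights are assumptions."""
    w_orient, w_extent, w_center = weights
    diff = abs(a.angle() - b.angle()) % math.pi
    orient = 1.0 - diff / math.pi                              # similar orientation scores higher
    extent = a.length() + b.length()                           # longer boundaries score higher
    mid_x = (a.x1 + a.x2 + b.x1 + b.x2) / 4.0
    center = 1.0 / (1.0 + abs(mid_x - image_center[0]))        # pairs near the center score higher
    return w_orient * orient + w_extent * extent + w_center * center

def best_pair(segments, image_center):
    """Return the pair of line segments with the highest combined score."""
    return max(combinations(segments, 2),
               key=lambda pair: pair_score(pair[0], pair[1], image_center))
```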
  • Patent number: 8988463
    Abstract: A method for overlaying first and second images in a common focal plane of a viewer comprises forming the first image and guiding the first and second images along an axis to a pupil of the viewer. The method further comprises adjustably diverging the first and second images at an adaptive diverging optic to bring the first image into focus at the common focal plane, and, adjustably converging the second image at an adaptive converging optic to bring the second image into focus at the common focal plane.
    Type: Grant
    Filed: December 8, 2010
    Date of Patent: March 24, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kathryn Stone Perez, Alex Aben-Athar Kipman, Andrew Fuller, Philip Greenhalgh, David Hayes, John Tardif
  • Patent number: 8988455
    Abstract: A reference character is generated by combining part objects prepared for each site. At least one candidate character object is generated by changing at least one part object among the part objects used in the reference character object. The reference character object and the candidate character object are displayed by a display device, and an input for selection from a user is accepted. Next, a site for which different part objects are used between the selected character object and the reference character object is determined. A new character object is generated by changing the part object used for the determined site with priority. The selected character object is displayed as the reference character, and the newly generated character is displayed as the candidate character.
    Type: Grant
    Filed: March 4, 2010
    Date of Patent: March 24, 2015
    Assignee: Nintendo Co., Ltd.
    Inventors: Ryutaro Takahashi, Takafumi Masaoka
  • Patent number: 8988465
    Abstract: A virtual environment, including at least one virtual element representing a component of an item is generated. The virtual environment is mapped to a physical environment that includes a physical mockup of at least a subset of the item. The virtual environment is provided to a display. The at least one virtual element is displayed in relation to the physical element according to the mapping.
    Type: Grant
    Filed: March 30, 2012
    Date of Patent: March 24, 2015
    Assignee: Ford Global Technologies, LLC
    Inventors: Elizabeth S. Baron, Richard T. Jakacki, Daniel H. Orr, James Stamper, Jr., David Canfield
  • Patent number: 8982155
    Abstract: An Augmented Reality (AR) providing apparatus sends to a server apparatus a request, including image information from an imaging device, for obtaining product information indicating a product that can be displayed on a shelf, and the AR providing apparatus displays product information included in a reply from the server apparatus in response to the request in an overlaying image manner. The server apparatus determines a shelf from the image information included in the request, determines a size of an empty shelf space, and selects product information of products smaller than the determined size of the empty shelf space. The product information is selected from a storage device storing multiple sets of product information indicating a product and its associated size information. The server apparatus sends a reply including the selected product information to the AR providing apparatus.
    Type: Grant
    Filed: August 18, 2011
    Date of Patent: March 17, 2015
    Assignee: NS Solutions Corporation
    Inventors: Tetsuji Fukushima, Shigeo Kuwabara, Satoshi Yokoi, Noboru Ihara
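The server-side selection in 8982155 above reduces to filtering stored product records against the measured empty-shelf dimensions. A small sketch under assumed field names and units:

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    width_cm: float
    height_cm: float

def select_fitting_products(products, empty_width_cm, empty_height_cm):
    """Return the stored product records whose size fits the detected
    empty shelf space (field names and units are assumptions)."""
    return [p for p in products
            if p.width_cm <= empty_width_cm and p.height_cm <= empty_height_cm]

# Example: a 30 cm x 20 cm gap admits only the smaller items.
catalog = [Product("cereal box", 25, 35), Product("soup can", 8, 11), Product("tea tin", 10, 15)]
print([p.name for p in select_fitting_products(catalog, 30, 20)])
```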
  • Publication number: 20150070385
    Abstract: A tomogram of an object is acquired. A place in a tomogram which corresponds to a portion spaced apart from a reference point in the object by a predetermined distance is specified. A composite image is generated by combining the tomogram with information indicating the specified place. The composite image is output.
    Type: Application
    Filed: September 4, 2014
    Publication date: March 12, 2015
    Inventors: Koichi Ishizu, Takayuki Ueno, Takuya Ishida, Takaaki Endo, Kiyohide Satoh
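The place-specification step in 20150070385 above can be illustrated as converting a physical distance from the reference point into pixels and marking that location on the tomogram. A sketch; the direction handling, units, and marker style are assumptions:

```python
import numpy as np

def mark_place_at_distance(tomogram, reference_rc, direction_rc, distance_mm, mm_per_pixel):
    """Specify the place a given physical distance from the reference point
    along an assumed direction, and combine the tomogram with a marker there.

    tomogram is a 2D array; reference_rc and direction_rc are (row, col).
    """
    d = np.asarray(direction_rc, dtype=float)
    d /= np.linalg.norm(d)
    offset_px = distance_mm / mm_per_pixel
    r, c = np.round(np.asarray(reference_rc, dtype=float) + d * offset_px).astype(int)
    composite = tomogram.copy()
    composite[max(r - 2, 0):r + 3, max(c - 2, 0):c + 3] = composite.max()  # small bright marker
    return composite
```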
  • Patent number: 8977250
    Abstract: A processing device local context is determined, and a communication of the processing device is filtered at least in part according to the local context.
    Type: Grant
    Filed: August 27, 2004
    Date of Patent: March 10, 2015
    Assignee: The Invention Science Fund I, LLC
    Inventors: Mark A. Malamud, Paul G. Allen, Royce A. Levien, John D. Rinaldo, Jr., Edward K. Y. Jung
  • Patent number: 8977074
    Abstract: Photographic images can be used to enhance three-dimensional (3D) virtual models of a physical location. In an embodiment, a method of generating a 3D scene geometry includes obtaining a first plurality of images and corresponding distance measurements for a first vehicle trajectory; obtaining a second plurality of images and corresponding distance measurements for a second vehicle trajectory, the second vehicle trajectory intersecting the first vehicle trajectory; registering a relative vehicle position and orientation for one or more segments of each of a first vehicle trajectory and a second vehicle trajectory; generating a three-dimensional geometry for each vehicle trajectory; mapping the three-dimensional geometries for each vehicle trajectory onto a common reference system based on the registering; and merging the three-dimensional geometries from both trajectories to generate a complete scene geometry.
    Type: Grant
    Filed: September 28, 2011
    Date of Patent: March 10, 2015
    Assignee: Google Inc.
    Inventors: Jesse Berent, Daniel Filip, Luciano Sbaiz
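The mapping-and-merging step in 8977074 above can be pictured as a rigid transform taking one trajectory's geometry into the common reference system before concatenation. A sketch assuming point-cloud geometries and a rotation/translation already recovered by registration:

```python
import numpy as np

def to_common_frame(points, rotation, translation):
    """Map one trajectory's 3D points into the common reference system
    using the pose recovered during registration (row-vector convention)."""
    return points @ rotation.T + translation

def merge_geometries(points_a, points_b, rotation_b, translation_b):
    """Merge two per-trajectory geometries into a single scene geometry.

    points_a is assumed to already be in the common frame; points_b is
    mapped into it with an assumed rigid transform from registration.
    """
    aligned_b = to_common_frame(np.asarray(points_b), np.asarray(rotation_b),
                                np.asarray(translation_b))
    return np.vstack([np.asarray(points_a), aligned_b])
```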
  • Publication number: 20150062147
    Abstract: Embodiments of the present invention disclose a display method for background of application program and a mobile terminal. The method includes the following steps: capturing image information, where the image information includes current environment information; separating a background displaying layer of a currently running application program; adding the image information to the background displaying layer of the currently running application program; and displaying the image information as a background of the currently running application program. According to the present invention, image information captured by using a camera may be used as a background of an application program, so that users can use an application program of a mobile terminal conveniently when walking.
    Type: Application
    Filed: October 23, 2013
    Publication date: March 5, 2015
    Applicant: DONGGUAN GOLDEX COMMUNICATION TECHNOLOGY CO., LTD.
    Inventor: Lirong Liu
  • Patent number: 8970623
    Abstract: An example information processing system which includes a plurality of information processing devices, the respective information processing devices carrying out imaging by an imaging device, wherein the respective information processing devices include: an imaging processing unit to generate a captured image by sequentially capturing images of a real space; a virtual space setting unit to set a virtual space commonly used by another information processing device which captures an image of one of an imaging object that is included in a captured image, and an imaging object, at least a portion of external appearance of which matches the imaging object, based on at least the portion of the imaging object included in the captured image; and a transmission unit to send data relating to change in a state of the virtual space, to the other information processing device, when the change in the state of the virtual space is detected.
    Type: Grant
    Filed: February 22, 2012
    Date of Patent: March 3, 2015
    Assignee: Nintendo Co., Ltd.
    Inventor: Takeshi Hayakawa
  • Patent number: 8963952
    Abstract: A display control system includes: a display information acquisition section that acquires display information by using given account information; and a corrected display information creation section that, based on first display information acquired by the display information acquisition section using first account information and second display information acquired by the display information acquisition section using second account information different from the first account information, determines whether the display contents shown by the first display information are included in display contents shown by the second display information or not, selects part or all of the display contents shown by the first display information in accordance with a result of the determination, and creates corrected display information which includes the selected part of the display contents shown by the first display information.
    Type: Grant
    Filed: July 12, 2010
    Date of Patent: February 24, 2015
    Assignee: Fuji Xerox Co., Ltd.
    Inventor: Yuki Nakamori
  • Patent number: 8963954
    Abstract: An apparatus for providing a constant level of information in an augmented reality environment may include a processor and memory storing executable computer program code that cause the apparatus to at least perform operations including determining a first number of points of interest associated with a first set of real world objects of a current location(s). The first set of real world objects is currently displayed. The computer program code may further cause the apparatus to determine whether the first number is below a predetermined threshold and may increase a view range of a device to display a second set of real world objects. The view range may be increased in order to increase the first number to a second number of points of interest that corresponds to the threshold, based on determining that the first number is below the threshold. Corresponding methods and computer program products are also provided.
    Type: Grant
    Filed: June 30, 2010
    Date of Patent: February 24, 2015
    Assignee: Nokia Corporation
    Inventor: Jesper Sandberg
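The view-range adjustment in 8963954 above is essentially a loop that widens the range until the point-of-interest count reaches the threshold. A sketch with an assumed step size and range cap:

```python
def expand_view_range(view_range_m, count_pois, threshold, step_m=50.0, max_range_m=2000.0):
    """Increase the AR view range until the number of visible points of
    interest reaches the threshold, or a hard cap is hit.

    count_pois is a callable returning the POI count at a given range;
    the step size and cap are assumptions.
    """
    while count_pois(view_range_m) < threshold and view_range_m < max_range_m:
        view_range_m += step_m
    return view_range_m
```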
  • Patent number: 8963951
    Abstract: To allow a viewer to easily understand the details of a moving image shot by an image capturing apparatus in the case where the moving image is browsed. A camerawork detecting unit 120 detects the amount of movement of an image capturing apparatus at the time of shooting a moving image input from a moving-image input unit 110, and, on the basis of the amount of movement of the image capturing apparatus, calculates affine transformation parameters for transforming an image on a frame-by-frame basis. An image transforming unit 160 performs an affine transformation of at least one of the captured image and a history image held in an image memory 170, on the basis of the calculated affine transformation parameters. An image combining unit 180 combines, on a frame-by-frame basis, the captured image and the history image, at least one of which has been transformed, and causes the image memory 170 to hold a composite image.
    Type: Grant
    Filed: August 22, 2008
    Date of Patent: February 24, 2015
    Assignee: Sony Corporation
    Inventor: Shingo Tsurumi
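The transform-and-combine step in 8963951 above can be illustrated by warping the held history image with the per-frame affine parameters and merging in the newly captured frame. A sketch using OpenCV's warpAffine; the overwrite-style blend is an assumption, not the patent's method:

```python
import cv2
import numpy as np

def composite_with_history(history, frame, affine_2x3):
    """Warp the held history image by the affine parameters calculated from
    camera motion, then combine it with the newly captured frame.

    affine_2x3 is a 2x3 float32 matrix; the blend rule is illustrative.
    """
    h, w = frame.shape[:2]
    warped_history = cv2.warpAffine(history, affine_2x3, (w, h))
    composite = warped_history.copy()
    mask = frame.sum(axis=2) > 0          # keep current-frame pixels wherever they exist
    composite[mask] = frame[mask]
    return composite
```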
  • Publication number: 20150049111
    Abstract: A makeup application assistance device includes: an image acquisition unit that acquires an image obtained by photographing a face; a face part acquisition unit that acquires, from the image, face part regions of the face; a face makeup technique selection unit that selects a face makeup technique, which is a face makeup application method, for each face part; a skin condition acquisition unit that acquires the skin condition of the face; a skin correction makeup technique selection unit that selects, on the basis of the skin condition, a skin correction makeup technique, which is a skin correction makeup application method; and a makeup technique presentation unit that presents the selected face makeup techniques associated with the corresponding face part regions, and presents the selected skin correction makeup technique, to a user.
    Type: Application
    Filed: January 20, 2014
    Publication date: February 19, 2015
    Applicant: PANASONIC CORPORATION
    Inventors: Tomofumi Yamanashi, Rieko Asai, Aoi Muta, Chie Nishi, Kaori Ajiki
  • Patent number: 8957917
    Abstract: Methods for managing errors utilizing augmented reality are provided. One system includes a transceiver configured to communicate with a systems management console, capture device for capturing environmental inputs, memory storing code comprising an augmented reality module, and a processor. The processor, when executing the code comprising the augmented reality module, is configured to perform the method below. One method includes capturing an environmental input, identifying a target device in the captured environmental input, and querying the systems management console regarding a status condition for the target device. Also provided are physical computer storage mediums including a computer program product for performing the above method.
    Type: Grant
    Filed: June 20, 2012
    Date of Patent: February 17, 2015
    Assignee: International Business Machines Corporation
    Inventors: Timothy A. Meserth, Mark E. Molander, David T. Windell
  • Publication number: 20150042677
    Abstract: In an image-generating apparatus, a detecting section detects a time variation state in a physical state of a target region with respect to a dynamic image (frame image) in which the target region is chronologically captured; a diagnosis assisting information generating section carries out analysis based on the time variation in the physical state of the target region and generates the analysis result as diagnosis assisting information. The analysis results are two pieces of diagnosis information including, for example, exhalation and inhalation. A display image generating section generates a displaying image for displaying the dynamic image and the diagnosis assisting information. The displaying image is an image including a dynamic image display portion that displays the dynamic image, and a summary display portion (color bar, etc.) that displays a first analysis result and a second analysis result of the diagnosis assisting information so as to be distinguishable at a glance in the time axis direction.
    Type: Application
    Filed: March 12, 2013
    Publication date: February 12, 2015
    Applicant: KONICA MINOLTA, INC.
    Inventors: Kenta Shimamura, Hiroshi Yamato, Osamu Toyama
  • Patent number: 8952986
    Abstract: Systems and methods for planning and optimizing bone deformity correction treatments using external fixators. A computer system generates a display of a tiltable ellipse superimposed on digital medical image(s) (radiograph), the ellipse representing a ring of an external fixator attachable to the patient's bone. Based on axial and azimuthal ring rotation user input, the system calculates a 3D position of the resulting graphical representation of the ring. User input controls translation of ring(s). Strut position user input identifies 3D positions for the external fixator struts. Based on graphical input defining 3D biological rate-limiting points for treatment, the system calculates a 3D bone correction speed and/or a number of treatment days, and generates a graphical simulation of this treatment. Further, the system generates a correction plan specifying for each strut a daily sequence of strut lengths and preferred strut sizes, to minimize strut replacements.
    Type: Grant
    Filed: January 27, 2014
    Date of Patent: February 10, 2015
    Assignee: Orthohub, Inc.
    Inventor: Andrew Haskell
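The correction plan in 8952986 above specifies a daily strut-length sequence. A toy sketch that linearly interpolates one strut over a given number of treatment days; the real planner derives the day count from biological rate-limiting points and preferred strut sizes, so this is only an illustrative assumption:

```python
def strut_schedule(start_len_mm, end_len_mm, treatment_days):
    """Daily strut lengths as a simple linear plan over the treatment period.

    Linear interpolation of a single strut is an illustrative assumption.
    """
    step = (end_len_mm - start_len_mm) / treatment_days
    return [round(start_len_mm + step * day, 1) for day in range(treatment_days + 1)]

# Example: lengthen a strut from 150 mm to 168 mm over 12 days.
print(strut_schedule(150.0, 168.0, 12))
```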
  • Patent number: 8947423
    Abstract: An interactive 3-D drawing method supports 3-D modeling of real-world scenes captured in the form of multiple images taken from different locations and angles. The method enables the user to manipulate a 3-D drawing primitive without changing its appearance on a selected input image.
    Type: Grant
    Filed: November 19, 2010
    Date of Patent: February 3, 2015
    Assignee: Ocali Bilisim Teknolojileri Yazilim Donanim San. TIC. A.S.
    Inventors: Ogan Ocali, Ali Erol, Umut Sezen
  • Patent number: 8947456
    Abstract: Systems and methods for processing materials for a recycling workstream are disclosed. The system may include one or more sorting surfaces on which sortable items may be placed. Illumination sources may be provided to illuminate both the items and the sorting surface(s). A variety of sensor systems may also be provided. The outputs of the sensor systems may be supplied to a computing system for determining the composition of the items and their location on the sorting surface(s). The computing system may also control the surface(s), illumination sources, and sensor systems. Additionally, the system may include one or more augmented reality interface devices used by sorters at the sorting facility. The computing system may communicate data streams to the augmented reality interfaces to provide the users augmented reality sensations. The sensations may give the users information and instructions regarding how to sort the items into one or more sorting bins.
    Type: Grant
    Filed: March 22, 2012
    Date of Patent: February 3, 2015
    Assignee: Empire Technology Development LLC
    Inventors: Sung-Wei Chen, Christopher J. Rothfuss
  • Patent number: 8941689
    Abstract: Computationally implemented methods and systems include presenting a first augmented view of a first scene from a real environment, the first augmented view to be presented including one or more persistent augmentations in a first one or more formats, the inclusion of the one or more persistent augmentations in the first augmented view being independent of presence of one or more visual cues in the actual view of the first scene from the real environment, obtaining an actual view of a second scene from the real environment that is different from the actual view of the first scene, and presenting a second augmented view of the second scene from the real environment, the second augmented view to be presented including the one or more persistent augmentations in a second one or more formats that is based, at least in part, on multiple input factors.
    Type: Grant
    Filed: October 9, 2012
    Date of Patent: January 27, 2015
    Assignee: Elwha LLC
    Inventors: Gene Fein, Royce A. Levien, Richard T. Lord, Robert W. Lord, Mark A. Malamud, John D. Rinaldo, Jr., Clarence T. Tegreene
  • Patent number: 8941685
    Abstract: Methods and systems for providing a graphic, such as an advertisement, in a 3D geographical information system (GIS) view are described. A method for providing a graphic in a 3D GIS view may include obtaining a graphic and determining a 3D geographical space in the GIS view based on a geographical reference in the GIS view. The method may also include rendering and displaying a curvilinear representation of the graphic in the geographical space. The method may further include adjusting the curvilinear representation of the graphic according to an updated viewpoint of the GIS view. The curvilinear representation may be oriented directly towards the updated viewpoint. A system for providing a graphic in a 3D GIS view may include a geographical space manager, a graphic representation generator and a display module.
    Type: Grant
    Filed: March 8, 2011
    Date of Patent: January 27, 2015
    Assignee: Google Inc.
    Inventors: Charles Chapin, Gokul Varadhan
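Orienting the curvilinear representation "directly towards the updated viewpoint" in 8941685 above is a billboard rotation. A sketch computing the yaw for a flat graphic, assuming z-up coordinates and a single-axis rotation:

```python
import math

def billboard_yaw(graphic_pos, viewpoint_pos):
    """Yaw (radians about the up axis) that turns a planar graphic so it
    faces the current viewpoint directly.

    Positions are (x, y, z) with z up; the single-axis rotation is an
    illustrative assumption.
    """
    dx = viewpoint_pos[0] - graphic_pos[0]
    dy = viewpoint_pos[1] - graphic_pos[1]
    return math.atan2(dy, dx)
```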
  • Publication number: 20150022550
    Abstract: Embodiments of the present disclosure can be used to generate an image replica of a person wearing various outfits to help the person visualize how clothes and accessories will look without actually having to try them on. Images can be generated from various angles to provide the person an experience as close as possible to actually wearing the clothes, accessories and looking at themselves in the mirror. Among other things, embodiments of the present disclosure can help remove much of the current uncertainty involved in buying clothing and accessories online.
    Type: Application
    Filed: July 22, 2014
    Publication date: January 22, 2015
    Applicant: TRUPIK, INC.
    Inventors: Vikranth Katpally Reddy, Sridhar Tirumala, Aravind Inumpudi, David Joseph Harmon
  • Publication number: 20150015605
    Abstract: User interface systems and methods for roof estimation are described. Example embodiments include a roof estimation system that provides a user interface configured to facilitate roof model generation based on one or more aerial images of a building roof. In one embodiment, roof model generation includes image registration, image lean correction, roof section pitch determination, wire frame model construction, and/or roof model review. The described user interface provides user interface controls that may be manipulated by an operator to perform at least some of the functions of roof model generation. The user interface is further configured to concurrently display roof features onto multiple images of a roof. This abstract is provided to comply with rules requiring an abstract, and it is submitted with the intention that it will not be used to interpret or limit the scope or meaning of the claims.
    Type: Application
    Filed: August 1, 2014
    Publication date: January 15, 2015
    Inventor: Chris Pershing
  • Publication number: 20150015606
    Abstract: An approach that detects locations of hazardous conditions within an infrastructure is provided. This approach uses satellite imagery, GIS data, automatic image processing, and predictive modeling to determine the location of the hazards automatically, thus optimizing infrastructure management. Specifically, a hazard detection tool provides this capability. The hazard detection tool comprises a detection component configured to: receive visual media containing asset location data about a set of physical assets, and hazard location data about potential hazards within a vicinity of each of the set of physical assets. The detection component further receives geographic information system (GIS) data containing asset location data about each of the set of physical assets.
    Type: Application
    Filed: October 2, 2014
    Publication date: January 15, 2015
    Inventors: James R. Culp, Frank D. Fenhagen, IV, Arun Hampapur, Xuan Liu, Sharathchandra U. Pankanti
  • Patent number: 8933798
    Abstract: An electronic display comprises a graphical lap information display portion that comprises a lap time differential indicator and an illuminated portion. The lap time differential indicator is configured to display a differential time value. The illuminated portion is configured to selectively illuminate a portion of the graphical lap information display portion in a plurality of different lighting modes. The illuminated portion is operable in the plurality of different lighting modes in response to a rate of change of the differential time value. A vehicle and method are also provided.
    Type: Grant
    Filed: February 25, 2013
    Date of Patent: January 13, 2015
    Assignee: Honda Motor Co., Ltd.
    Inventor: Nathaniel C. Ellis
  • Patent number: 8933969
    Abstract: Methods for managing errors utilizing augmented reality are provided. One system includes a transceiver configured to communicate with a systems management console, capture device for capturing environmental inputs, memory storing code comprising an augmented reality module, and a processor. The processor, when executing the code comprising the augmented reality module, is configured to perform the method below. One method includes capturing an environmental input, identifying a target device in the captured environmental input, and querying the systems management console regarding a status condition for the target device. Also provided are physical computer storage mediums including a computer program product for performing the above method.
    Type: Grant
    Filed: June 20, 2012
    Date of Patent: January 13, 2015
    Assignee: International Business Machines Corporation
    Inventors: Timothy A. Meserth, Mark E. Molander, David T. Windell
  • Patent number: 8933970
    Abstract: Techniques for controlling an augmented reality object are described in various implementations. In one example implementation, a method may include receiving an initialization image captured by an image capture device, the initialization image depicting a background and being free of foreground objects positioned between the background and the image capture device. The method may also include receiving a plurality of subsequent images captured by the image capture device over a period of time, the plurality of subsequent images depicting the background and a foreground object, the foreground object being positioned between the background and the image capture device. The method may also include comparing the initialization image to the plurality of subsequent images to determine positioning of the foreground object over the period of time. The method may also include controlling an augmented reality object based on the positioning of the foreground object over the period of time.
    Type: Grant
    Filed: September 11, 2012
    Date of Patent: January 13, 2015
    Assignee: Longsand Limited
    Inventors: George Saklatvala, Stephen Christopher Davis, Matthew Dominic Sullivan, Tristan Peter Melen
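The comparison step in 8933970 above is a background-subtraction style difference between the initialization image and later frames, followed by a control decision based on where the foreground object sits over time. A sketch with an assumed per-pixel threshold and a toy left/right command:

```python
import numpy as np

def foreground_centroid(init_image, frame, diff_threshold=30):
    """Compare a later frame against the initialization (background-only)
    image and return the centroid of the changed pixels, or None.

    Images are HxWx3 uint8 arrays; the threshold is an assumption.
    """
    diff = np.abs(frame.astype(int) - init_image.astype(int)).sum(axis=2)
    ys, xs = np.nonzero(diff > diff_threshold)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def control_command(positions):
    """Turn the tracked foreground positions over time into a toy command
    for the augmented reality object (left/right from net horizontal motion)."""
    (x_first, _), (x_last, _) = positions[0], positions[-1]
    return "move_right" if x_last > x_first else "move_left"
```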
  • Patent number: 8933961
    Abstract: A video processing system may include a video ingest module for receiving a plurality of georeferenced video feeds each including a sequence of video frames and initial geospatial metadata associated therewith, and each georeferenced video feed having a respective different geospatial accuracy level associated therewith. The system may further include a video processor coupled to the video ingest module and configured to perform image registration among the plurality of georeferenced video feeds, and generate corrected geospatial metadata for at least one of the georeferenced video feeds based upon the initial geospatial metadata, the image registration and the different geospatial accuracy levels.
    Type: Grant
    Filed: December 10, 2009
    Date of Patent: January 13, 2015
    Assignee: Harris Corporation
    Inventors: Robert McDonald, Christopher T. Dunkel, John Heminghous, Aric Peterson, Tariq Bakir
  • Publication number: 20150009232
    Abstract: Systems, methods, and computer-readable media are provided for generating computer-mediated reality display data based on user instantaneous motion data.
    Type: Application
    Filed: July 3, 2014
    Publication date: January 8, 2015
    Inventor: Gary Lauder
  • Patent number: 8928695
    Abstract: Computationally implemented methods and systems include presenting a first augmented view of a first scene from a real environment, the first augmented view to be presented including one or more persistent augmentations in a first one or more formats, the inclusion of the one or more persistent augmentations in the first augmented view being independent of presence of one or more visual cues in the actual view of the first scene from the real environment, obtaining an actual view of a second scene from the real environment that is different from the actual view of the first scene, and presenting a second augmented view of the second scene from the real environment, the second augmented view to be presented including the one or more persistent augmentations in a second one or more formats that is based, at least in part, on multiple input factors.
    Type: Grant
    Filed: October 5, 2012
    Date of Patent: January 6, 2015
    Assignee: Elwha LLC
    Inventors: Gene Fein, Royce A. Levien, Richard T. Lord, Robert W. Lord, Mark A. Malamud, John D. Rinaldo, Jr., Clarence T. Tegreene
  • Patent number: 8924150
    Abstract: A method to navigate a vehicle utilizing a graphic projection display, includes monitoring a navigation status graphic representing a navigation intent displayed upon the graphic projection display, monitoring a user input indicated to a portion of the graphic projection display, initiating a user-defined navigation command based on the monitored navigation status graphic and the monitored user input, and operating the vehicle in accordance with the user-defined navigation command.
    Type: Grant
    Filed: December 29, 2010
    Date of Patent: December 30, 2014
    Assignee: GM Global Technology Operations LLC
    Inventors: Omer Tsimhoni, Joseph F. Szczerba, Thomas A. Seder, Dehua Cui
  • Patent number: 8922487
    Abstract: A mobile device is operative to change from a first operational mode to a second or third operational mode based on a user's natural motion gesture. The first operational mode may include a voice input mode in which a user provides a voice input to the mobile device. After providing the voice input to the mobile device, the user then makes a natural motion gesture and a determination is made as to whether the natural motion gesture places the mobile device in the second or third operational mode. The second operational mode includes an augmented reality display mode in which the mobile device displays images recorded from a camera overlaid with computer-generated images corresponding to results output in response to the voice input. The third operational mode includes a reading display mode in which the mobile device displays, without augmented reality, results output in response to the voice input.
    Type: Grant
    Filed: November 12, 2013
    Date of Patent: December 30, 2014
    Assignee: Google Inc.
    Inventors: Casey Kwok Ching Ho, Sharvil Nanvati
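The mode selection in 8922487 above can be pictured as a small decision function over classified gestures. The gesture labels and mode names below are assumptions standing in for the device's motion classifier, not the patent's:

```python
def next_mode(current_mode, gesture):
    """Pick the operational mode after a natural motion gesture, loosely
    following the abstract: from the voice-input mode, one gesture class
    leads to the augmented reality display mode and another to the
    reading display mode.
    """
    if current_mode != "voice_input":
        return current_mode
    if gesture == "raise_to_eye_level":
        return "augmented_reality_display"
    if gesture == "hold_flat_to_read":
        return "reading_display"
    return current_mode
```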
  • Patent number: 8918494
    Abstract: Systems and methods for managing computing systems are provided. One system includes a capture device for capturing environmental inputs, memory storing code comprising a management module, and a processor. The processor, when executing the code comprising the management module, is configured to perform the method below. One method includes capturing an environmental input, identifying a target device in the captured environmental input, and comparing the target device in the captured environmental input to a model of the target device. The method further includes recognizing, in real-time, a status condition of the target device based on the comparison and providing a user with troubleshooting data if the status condition is an error condition. Also provided are physical computer storage mediums including a computer program product for performing the above method.
    Type: Grant
    Filed: April 10, 2014
    Date of Patent: December 23, 2014
    Assignee: International Business Machines Corporation
    Inventor: David T. Windell
  • Patent number: 8913085
    Abstract: Techniques are disclosed that involve mobile augmented reality (MAR) applications in which users (e.g., players) may experience augmented reality (e.g., altered video or audio based on a real environment). Such augmented reality may include various alterations. For example, particular objects may be altered to appear differently. Such alterations may be based on stored profiles and/or user selections. Further features may also be employed. For example, in embodiments, characters and/or other objects may be sent (or caused to appear) to other users in other locations. Also, a user may leave a character at another location and receive an alert when another user/player encounters this character. Also, characteristics of output audio may be affected based on events of the MAR application.
    Type: Grant
    Filed: December 22, 2010
    Date of Patent: December 16, 2014
    Assignee: Intel Corporation
    Inventors: Glen J. Anderson, Subhashini Ganapathy
  • Patent number: 8913057
    Abstract: There is provided an information processing device that includes a virtual space recognition unit for analyzing 3D space structure of a real space to recognize a virtual space, a storage unit for storing an object to be arranged in the virtual space, a display unit for displaying the object arranged in the virtual space, on a display device, a detection unit for detecting device information of the display device, and an execution unit for executing predetermined processing toward the object based on the device information.
    Type: Grant
    Filed: February 9, 2011
    Date of Patent: December 16, 2014
    Assignee: Sony Corporation
    Inventors: Hiroyuki Ishige, Kazuhiro Suzuki, Akira Miyashita
  • Patent number: 8913086
    Abstract: Systems for managing errors utilizing augmented reality are provided. One system includes a transceiver configured to communicate with a systems management console, capture device for capturing environmental inputs, memory storing code comprising an augmented reality module, and a processor. The processor, when executing the code comprising the augmented reality module, is configured to perform the method below. One method includes capturing an environmental input, identifying a target device in the captured environmental input, and querying the systems management console regarding a status condition for the target device. Also provided are physical computer storage mediums including a computer program product for performing the above method.
    Type: Grant
    Filed: April 7, 2011
    Date of Patent: December 16, 2014
    Assignee: International Business Machines Corporation
    Inventors: Timothy A. Meserth, Mark E. Molander, David T. Windell
  • Patent number: 8913083
    Abstract: Systems and methods are provided for manually finding a view for a geographic object in a street level image and associating the view with the geographic object. Information related to a geographic object and a first image related to the geographic object is displayed. User inputs indicating a presence of the geographic object in the image and user input indicating a viewpoint within the image are received and processed. An association of the viewpoint, the image and the geographic object is made and the association is stored in a database. A second image is determined, based on the association, as a default initial image to be displayed for the geographic object in a mapping application.
    Type: Grant
    Filed: July 13, 2011
    Date of Patent: December 16, 2014
    Assignee: Google Inc.
    Inventors: Abhijit S. Ogale, Augusto Roman, Owen Brydon
  • Patent number: 8913084
    Abstract: The present disclosure provides a method of simulating an intravascular procedure in a virtual environment. The method includes displaying information from a first view and a second view simultaneously. The first view contains virtual representations of an anatomical region of a human body and an intravascular imaging device disposed in the anatomical region. The second view contains a cross-sectional image of a segment of the anatomical region corresponding to a location of the intravascular imaging device. The method includes moving, in response to a user input, the virtual representation of the intravascular imaging device with respect to the virtual representation of the anatomical region. The method includes updating the cross-sectional image as the virtual representation of the intravascular imaging device is being moved. The updated cross-sectional image corresponds to a new location of the intravascular imaging device.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: December 16, 2014
    Assignee: Volcano Corporation
    Inventors: Sara Chen, Edrienne Brandon
  • Patent number: 8907982
    Abstract: The mobile device includes a visual input device, for capturing external visual information having real visual background information, and a processing device. The processing device is for associating a selected application with the external visual information, and for executing the selected application based on the external visual information and on user-related input information. The processing device for generating a visual output signal related to at least one virtual visual object in response to the application is further configured to provide the visual output signal to a projector device included within the mobile device such that the projector device will be configured to project said visual output signal related to the at least one virtual visual object onto the visual background, thereby modifying said external visual information.
    Type: Grant
    Filed: November 24, 2009
    Date of Patent: December 9, 2014
    Assignee: Alcatel Lucent
    Inventors: Pascal Zontrop, Marc Bruno Frieda Godon
  • Publication number: 20140354683
    Abstract: An example of an information processing system displays, on a display device, a plurality of panoramic images associated with locations on a map. Here, a predetermined virtual object is set for at least one of the plurality of panoramic images. An information processing system 1 determines one of the locations on the map as a current location based on a user operation. Where the virtual object is set for a panoramic image associated with the current location, the virtual object is placed at a predetermined position of the panoramic image. The information processing system 1 displays, on the display device, an image of a partial area of the panoramic image associated with the current location. A predetermined information process is performed in response to satisfaction of a predetermined condition regarding the virtual object while the panoramic image for which the virtual object is set is displayed.
    Type: Application
    Filed: May 20, 2014
    Publication date: December 4, 2014
    Applicant: NINTENDO CO., LTD.
    Inventors: Toshiaki SUZUKI, Satoshi KIRA, Akihiro UMEHARA
  • Patent number: 8903197
    Abstract: An information providing method includes a recognition step of recognizing an image-capture position, in the real world, at which a captured image was captured; a retrieval step of retrieving information that is associated with the image-capture position, which has been recognized in the recognition step, and the captured image; and a provision step of providing the information, which has been retrieved in the retrieval step, as overlay information that is to be displayed so as to be superimposed on the captured image.
    Type: Grant
    Filed: August 27, 2010
    Date of Patent: December 2, 2014
    Assignee: Sony Corporation
    Inventor: Junichi Rekimoto
  • Patent number: 8896629
    Abstract: The invention relates to a method for ergonomically representing virtual information in a real environment, comprising the following steps: providing at least one view of a real environment and of a system setup for blending in virtual information for superimposing with the real environment in at least part of the view, the system setup comprising at least one display device, ascertaining a position and orientation of at least one part of the system setup relative to at least one component of the real environment, subdividing at least part of the view of the real environment into a plurality of regions comprising a first region and a second region, with objects of the real environment within the first region being placed closer to the system setup than objects of the real environment within the second region, and blending in at least one item of virtual information on the display device in at least part of the view of the real environment, considering the position and orientation of said at least one part of
    Type: Grant
    Filed: August 13, 2010
    Date of Patent: November 25, 2014
    Assignee: Metaio GmbH
    Inventors: Peter Meier, Frank Angermann
  • Patent number: 8896628
    Abstract: There is provided an image processing device including: a data storage unit storing feature data indicating a feature of appearance of one or more physical objects; an environment map building unit for building an environment map based on an input image obtained by imaging a real space and the feature data, the environment map representing a position of a physical object present in the real space; a control unit for acquiring procedure data for a set of procedures of operation to be performed in the real space, the procedure data defining a correspondence between a direction for each procedure and position information designating a position at which the direction is to be displayed; and a superimposing unit for generating an output image by superimposing the direction for each procedure at a position in the input image determined based on the environment map and the position information, using the procedure data.
    Type: Grant
    Filed: January 5, 2011
    Date of Patent: November 25, 2014
    Assignee: Sony Corporation
    Inventors: Yasuhiro Suto, Masaki Fukuchi, Kenichiro Oi, Jingjing Guo, Kouichi Matsuda
  • Patent number: 8872974
    Abstract: In one embodiment, a method for avoiding discomfort and/or relieving motion sickness when using a display device in a moving environment includes detecting at least one movement component of the moving environment, generating data for intermediate images indicating the movement component or at least one of the movement components, and modifying a series of images showing content to be displayed by inserting the intermediate images into the series. The modified series is divided into sequences of these images by the intermediate images. The method further includes displaying the modified series of images on an image display of the display device. The modified series includes the intermediate images.
    Type: Grant
    Filed: March 23, 2012
    Date of Patent: October 28, 2014
    Assignee: Alcatel Lucent
    Inventor: Jan Van Lier
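The series modification in 8872974 above interleaves intermediate images, which indicate the detected movement component, into the content series. A sketch with an assumed fixed interval between sequences:

```python
def insert_motion_frames(content_frames, make_motion_frame, interval=10):
    """Divide the content series into sequences and insert an intermediate
    image indicating the detected movement component between them.

    make_motion_frame is a callable producing the intermediate image for
    the current movement sample; the fixed interval is an assumption.
    """
    modified = []
    for index, frame in enumerate(content_frames):
        if index > 0 and index % interval == 0:
            modified.append(make_motion_frame(index))
        modified.append(frame)
    return modified
```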
  • Publication number: 20140313049
    Abstract: An athletic performance graphical system that measures and translates raw athletic data to computer interpreted performance data and into visual graphics. Athletic equipment is equipped with a performance measuring sensor. In one embodiment, an event announcer uses information gathered from the performance sensor to better explain the sporting event. In one embodiment, a user uses the computer interpreted performance data to measure the growth or improvement of an athlete, human or animal.
    Type: Application
    Filed: March 11, 2014
    Publication date: October 23, 2014
    Inventor: Madison J. Doherty
  • Patent number: 8866848
    Abstract: Background object disposing means (88) disposes a background object (74) representing a background, which is photographed outside a target region (62) of a photographed image (60), on a virtual space (70). Subject object disposing means (90) disposes a subject object (76) between a viewpoint (72) and the background object (74) so that a position at which the subject object (76) is displayed to be superimposed on the background object (74) in a virtual space image (64), and a position of the target region (62) in the photographed image (60), correspond to each other. Composition target object disposing means (92) disposes a composition target object (78) representing a composition target, which is to be displayed to be combined with a real-world space in the virtual space image (64), between the background object (74) and the subject object (76).
    Type: Grant
    Filed: April 25, 2012
    Date of Patent: October 21, 2014
    Assignee: Konami Digital Entertainment Co., Ltd.
    Inventors: Akihiro Ishihara, Kazuhiro Ogawa, Kazuya Matsumoto, Yoshihiko Narita
  • Patent number: 8866811
    Abstract: Position and orientation information of a specific part of an observer is acquired (S403). It is determined whether or not a region of a specific part virtual object that simulates the specific part and that of another virtual object overlap each other on an image of a virtual space after the specific part virtual object is laid out based on the position and orientation information on the virtual space on which one or more virtual objects are laid out (S405). When it is determined that the regions overlap each other, an image of the virtual space on which the other virtual object and the specific part virtual object are laid out is generated; when it is determined that the regions do not overlap each other, an image of the virtual space on which only the other virtual object is laid out is generated (S409).
    Type: Grant
    Filed: October 30, 2008
    Date of Patent: October 21, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventor: Yasuhiro Okuno
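The decision rule in 8866811 above lays out the specific-part virtual object together with the other virtual object only when their regions overlap in the virtual-space image. A sketch using axis-aligned image-space boxes as a stand-in for the patent's region test:

```python
def regions_overlap(a, b):
    """Axis-aligned overlap test between two image-space regions (x0, y0, x1, y1)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def objects_to_lay_out(part_region, other_region):
    """Follow the abstract's rule: lay out the specific-part virtual object
    with the other virtual object only when their regions overlap; otherwise
    lay out only the other virtual object. The box representation of the
    regions is an illustrative assumption.
    """
    if regions_overlap(part_region, other_region):
        return ["other_virtual_object", "specific_part_virtual_object"]
    return ["other_virtual_object"]
```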
  • Patent number: 8866846
    Abstract: An apparatus and a method related to an application of a mobile terminal using an augmented reality technique capture an image of a musical instrument directly drawn/sketched by a user to recognize the particular relevant musical instrument, and provide an effect of playing the musical instrument on the recognized image as if a real instrument were being played. The apparatus preferably includes an image recognizer and a sound source processor. The image recognizer recognizes a musical instrument on an image through a camera. The sound source processor outputs the recognized musical instrument on the image on a display unit to use the same for a play, and matches the musical instrument play on the image to a musical instrument play output on the display unit.
    Type: Grant
    Filed: June 30, 2011
    Date of Patent: October 21, 2014
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Ki-Yeung Kim
  • Publication number: 20140306992
    Abstract: An image processing apparatus includes: an attaching unit that attaches an annotation to a diagnostic image acquired by imaging an object; a recording unit that records, in a storing unit along with an annotation, attribute information which is information on a predetermined attribute, as information related to the annotation; a searching unit that searches a plurality of positions where annotations are attached respectively in the diagnostic image, for a target position which is a position a user has an interest in; and a displaying unit that displays the search result by the searching unit on a display. The searching unit searches for the target position using a word included in the annotation or the attribute information as a key.
    Type: Application
    Filed: December 11, 2012
    Publication date: October 16, 2014
    Inventors: Takuya Tsujimoto, Masanori Sato, Tomohiko Takayama