Placing Generated Data In Real Scene Patents (Class 345/632)
  • Patent number: 8854470
    Abstract: A vision-based augmented reality system places an invisible marker on a target object to be tracked, so that the object can be tracked rapidly and accurately by detecting the marker. The augmented reality system includes a target object (TO) bearing an infrared marker (IM) drawn with an invisible infrared light-emitting material; a visible-ray camera (110) for capturing an image of the TO; an infrared-ray camera (120) for capturing an image of the IM included in the TO image; an optical axis converter for giving the infrared-ray camera (120) and the visible-ray camera (110) the same viewpoint; and an image processing system (140) for rendering a prepared virtual image onto the TO image to generate a new image.
    Type: Grant
    Filed: February 21, 2014
    Date of Patent: October 7, 2014
    Assignee: Industry-University Cooperation Foundation Hanyang University
    Inventors: Jong-Il Park, Han-Hoon Park
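The tracking-and-overlay idea in the abstract above can be sketched briefly. This is a minimal, illustrative Python sketch, not the patent's method: the list-of-lists image format, the intensity threshold, and the function names are all assumptions. It finds the marker as the centroid of bright pixels in the infrared frame and, because the two cameras share a viewpoint, pastes a virtual sprite at the same coordinates in the visible frame.

```python
def find_marker_centroid(ir_image, threshold=200):
    """Locate the invisible IR marker as the centroid of bright IR pixels.
    Assumed format: ir_image is a list of rows of 0-255 intensities."""
    pts = [(x, y)
           for y, row in enumerate(ir_image)
           for x, v in enumerate(row) if v >= threshold]
    if not pts:
        return None
    return (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))


def overlay_at(visible, sprite, cx, cy):
    """Paste a virtual sprite onto the visible image, centred on the marker.
    The shared optical axis lets IR coordinates map directly to the
    visible-image coordinates."""
    out = [row[:] for row in visible]
    h, w = len(sprite), len(sprite[0])
    x0, y0 = int(cx) - w // 2, int(cy) - h // 2
    for dy, row in enumerate(sprite):
        for dx, v in enumerate(row):
            if 0 <= y0 + dy < len(out) and 0 <= x0 + dx < len(out[0]):
                out[y0 + dy][x0 + dx] = v
    return out
```

A real system would of course segment the marker more robustly (connected components, sub-pixel refinement) rather than thresholding alone.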
  • Patent number: 8854393
    Abstract: There is provided an information processing device that includes a virtual space recognition unit for analyzing the 3D spatial structure of a real space to recognize a virtual space, a storage unit for storing an object to be arranged in the virtual space, a display control unit for making a display unit display the object arranged in the virtual space, a direction of gravitational force detection unit for detecting the direction of gravitational force in the real space, and a direction of gravitational force reflection unit for reflecting the direction of gravitational force detected by the detection unit in the virtual space.
    Type: Grant
    Filed: March 9, 2011
    Date of Patent: October 7, 2014
    Assignee: Sony Corporation
    Inventors: Hiroyuki Ishige, Kazuhiro Suzuki, Akira Miyashita
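One simple way to "reflect" a detected gravity direction in a virtual space is to accelerate virtual objects along the measured real-world vector. The sketch below is an illustrative assumption, not Sony's implementation: the function names, the plain Euler integration step, and the tuple vector format are invented for the example.

```python
import math

def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def step(position, velocity, gravity_dir, g=9.8, dt=0.1):
    """Advance a virtual object one frame, accelerating it along the
    real-world gravity direction reported by the device's sensor."""
    gx, gy, gz = normalize(gravity_dir)
    vx = velocity[0] + gx * g * dt
    vy = velocity[1] + gy * g * dt
    vz = velocity[2] + gz * g * dt
    return ((position[0] + vx * dt,
             position[1] + vy * dt,
             position[2] + vz * dt),
            (vx, vy, vz))
```

With gravity detected as straight down in world coordinates, a dropped virtual object falls along the real floor's normal rather than the screen's vertical axis.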
  • Patent number: 8848970
    Abstract: A variety of methods and systems involving sensor-equipped portable devices, such as smartphones and tablet computers, are described. One particular embodiment decodes a digital watermark from imagery captured by the device and, by reference to watermark payload data, obtains salient point data corresponding to an object depicted in the imagery. Other embodiments obtain salient point data for an object through use of other technologies (e.g., NFC chips). The salient point data enables the device to interact with the object in a spatially-dependent manner. Many other features and arrangements are also detailed.
    Type: Grant
    Filed: November 2, 2011
    Date of Patent: September 30, 2014
    Assignee: Digimarc Corporation
    Inventors: Joshua V. Aller, Robert Craig Brandis
  • Publication number: 20140285518
    Abstract: To provide a function of easily preventing burn-in at low cost without disturbing mixed reality being experienced by a user, a mixed reality presenting system includes a display control unit configured to display a confirmation image on a display unit when a first time period has elapsed since the display control unit had performed control to start display on the display unit, and to control display on the display unit in response to an operation performed on the confirmation image by a user of the mixed reality presenting system.
    Type: Application
    Filed: March 21, 2014
    Publication date: September 25, 2014
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Yasumi Tanaka, Kenji Hatori
  • Publication number: 20140285517
    Abstract: A display device includes: an input unit configured to receive a content selection command; a storage unit configured to store an image of a user; a controller configured to extract skeleton information of the user from the user image, search for data of an action of an actor related to a content selected by the content selection command, and extract skeleton information of the actor from an image of the actor included in the searched action data; and a display unit, wherein the controller is further configured to generate new action data including the actor image replaced with the user image by mapping the user skeleton information on the actor skeleton information, and control the display unit to display the new action data.
    Type: Application
    Filed: January 29, 2014
    Publication date: September 25, 2014
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sang-young Park, Jin-sung Lee, Kil-soo Jung
  • Patent number: 8842134
    Abstract: The present invention relates to a method for providing information on an object by using viewing frustums. The method includes the steps of: (a) specifying at least two viewing frustums whose vertexes are visual points of respective user terminals; and (b) calculating a degree of interest in the object by referring to the object commonly included in both a first viewing frustum whose vertex is a visual point of a first user terminal and a second one whose vertex is a visual point of a second user terminal.
    Type: Grant
    Filed: September 6, 2013
    Date of Patent: September 23, 2014
    Assignee: Intel Corporation
    Inventors: Tae Hoon Kim, Jung Hee Ryu
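The interest calculation above can be illustrated with a simplified model in which each viewing frustum is approximated as a cone with its vertex at the user's visual point. This is a hedged sketch, not the patented method: the cone approximation, half-angle parameter, and function names are assumptions for illustration.

```python
import math

def in_frustum(vertex, direction, half_angle_deg, point):
    """Test whether a point lies inside a viewing cone (a simplified
    stand-in for a viewing frustum with vertex at the visual point)."""
    dx = [p - v for p, v in zip(point, vertex)]
    norm = math.sqrt(sum(c * c for c in dx))
    if norm == 0:
        return True  # the point coincides with the vertex
    dnorm = math.sqrt(sum(c * c for c in direction))
    cos_angle = sum(a * b for a, b in zip(dx, direction)) / (norm * dnorm)
    return cos_angle >= math.cos(math.radians(half_angle_deg))

def interest_degree(frustums, obj_point):
    """Degree of interest = number of user frustums containing the object."""
    return sum(in_frustum(v, d, a, obj_point) for v, d, a in frustums)
```

An object watched by two users (inside both cones) scores 2; an object outside every cone scores 0.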
  • Publication number: 20140267393
    Abstract: Techniques are disclosed for virtual scene generation. An image depicting a scene and annotated by a sparse set of labels is received. A dense set of labels annotating the image and a density map associated with the image are generated based on the sparse set of labels. A virtual scene is generated based on the dense set of labels and the density map, and the virtual scene is output.
    Type: Application
    Filed: June 28, 2013
    Publication date: September 18, 2014
    Inventors: Kenneth Mitchell, Gwyneth Bradbury, Tim Alexander Weyrich
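A simple way to turn a sparse label set into a dense one is nearest-neighbour propagation: every pixel takes the label of the closest annotated point, and the density map records how much of the image each label covers. The publication does not specify this scheme; the code below is an assumed illustration, with invented function names and a `(x, y, label)` tuple format.

```python
def densify(width, height, sparse):
    """Propagate sparse labels to every pixel via the nearest labelled
    point, and build a per-label density map (fraction of pixels covered).
    `sparse` is a list of (x, y, label) annotations."""
    dense = [[None] * width for _ in range(height)]
    counts = {}
    for y in range(height):
        for x in range(width):
            lbl = min(sparse,
                      key=lambda s: (s[0] - x) ** 2 + (s[1] - y) ** 2)[2]
            dense[y][x] = lbl
            counts[lbl] = counts.get(lbl, 0) + 1
    total = width * height
    density = {lbl: c / total for lbl, c in counts.items()}
    return dense, density
```

The dense labels and density map could then seed placement of virtual scene content (e.g. how much "tree" versus "road" to generate).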
  • Publication number: 20140267353
    Abstract: Animation of a thermal image captured by a thermal imager that includes automatically changing particular aspects of the presentation of the image. The coloring of the thermal image may automatically change through two or more color presentations. The colors that automatically change, or are "animated," may be any colors in the usual rainbow of color or in greyscale. The animation may include a series of small, stepwise incremental changes that gradually change the image. If timed correctly and if the increments are sufficiently small, the transitions of the image may appear smooth, in the manner of a movie or cartoon.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Applicant: FLUKE CORPORATION
    Inventors: Matthew F. Schmidt, Jordan B. Schlichting, Thomas Heinke
  • Publication number: 20140267394
    Abstract: A method of tracking at least one emergency service provider is disclosed. An electronic history is compiled that includes at least one identifier of a service provider, at least one identifier of an event to which the service provider responded, and GPS data identifying the geographic location of the service provider at each time interval within the duration of the event. A user interface within which is displayed a first identifier of a first event is generated to a display device. A selection of the event identifier is received from a user. In response to the selection of the identifier, an aerial view of a geographic region within which the first event took place is generated. At least one icon is displayed in the aerial view representing the service provider at the geographic location corresponding to at least one time interval during the event.
    Type: Application
    Filed: March 14, 2014
    Publication date: September 18, 2014
    Inventors: Nathan Way, Mike Prill
  • Publication number: 20140267723
    Abstract: A facility, comprising systems and methods, for providing enhanced situational awareness to captured image data is disclosed. The disclosed techniques are used in conjunction with image data, such as a real-time or near real-time image stream captured by a camera attached to an unmanned system, previously captured image data, rendered image data, etc. The facility enhances situational awareness by projecting overlays onto captured video data or “wrapping” captured image data with previously-captured and/or “synthetic world” information, such as satellite images, computer-generated images, wire models, textured surfaces, and so on. The facility also provides enhanced zoom techniques that allow a user to quickly zoom in on an object or area of interest using a combined optical and digital zoom technique. Additionally, the facility provides a digital lead indicator designed to reduce operator-induced oscillations in commanding an image capturing device.
    Type: Application
    Filed: January 30, 2013
    Publication date: September 18, 2014
    Inventors: Darcy Davidson, Jr., Theodore T. Trowbridge
  • Patent number: 8836721
    Abstract: The subject matter of this specification can be implemented in, among other things, a method that includes providing a first image from a server system to a client computing device for display and providing a second image from the server system to the client computing device for display. The method also includes providing instructions from the server system to the client computing device for displaying a window over the first image that displays a portion of the second image within the window, where the portion of the second image displayed within the window corresponds to a position of the window over the first image, and where the portions of the second image include one or more rectangular shapes to approximate a curved shape.
    Type: Grant
    Filed: March 9, 2012
    Date of Patent: September 16, 2014
    Assignee: Google Inc.
    Inventors: Marcin Kazimierz Wichary, Kristopher Hom
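The "rectangles approximating a curved shape" idea in the abstract above can be shown concretely: a circular window region is decomposed into horizontal rectangular strips, each clipped to the circle's half-width at that height. This is a hedged sketch under assumed parameters (strip height, function name, `(x, y, w, h)` rectangle format), not the patent's actual instructions.

```python
import math

def circle_as_rects(cx, cy, r, strip_h=1.0):
    """Approximate a circular window with horizontal rectangles
    (x, y, w, h), as when a curved region must be rendered from
    rectangular clips only."""
    rects = []
    y = cy - r
    while y < cy + r:
        # half-width of the circle at the vertical centre of this strip
        mid = y + strip_h / 2 - cy
        if abs(mid) < r:
            half = math.sqrt(r * r - mid * mid)
            rects.append((cx - half, y, 2 * half, strip_h))
        y += strip_h
    return rects
```

Smaller strip heights trade more rectangles for a smoother-looking curve.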
  • Patent number: 8838381
    Abstract: Described is a system for conveying the spatial location of an object with respect to a user's current location utilizing a video rendering following an automatically generated path from the user's location to the location of the object of interest initiated from the user's current perspective. The system comprises a display device; a virtual three-dimensional model of the user's environment; a visualization creation module; a route planner; and a video rendering following an automatically generated path from the user's location to the location of the object of interest utilizing the visualization creation module, wherein the video rendering displayed on the display device is from a first-person view. Also described is a method of utilizing the system.
    Type: Grant
    Filed: November 10, 2009
    Date of Patent: September 16, 2014
    Assignee: HRL Laboratories, LLC
    Inventors: Michael Daily, Ronald Azuma
  • Patent number: 8836847
    Abstract: A method for capturing an image, comprising: providing a switchable imaging apparatus including a display screen having a first display state and a second transparent state, an optical beam deflector switchable between a first non-deflecting state and a second deflecting state, a camera positioned in a location peripheral to the display screen, and a controller; setting the switchable imaging apparatus to the image capture mode by using the controller to set the display screen to the second transparent state and the optical beam deflector to the second deflecting state; using the camera to capture an image of the scene; setting the switchable imaging apparatus to the image display mode by using the controller to set the display screen to the first display state and the optical beam deflector to the first non-deflecting state; and displaying an image on the display screen.
    Type: Grant
    Filed: May 14, 2013
    Date of Patent: September 16, 2014
    Assignee: Intellectual Ventures Fund 83 LLC
    Inventors: John Norvold Border, Joseph Anthony Manico
  • Patent number: 8830267
    Abstract: A method (700) for providing an augmented reality operations tool to a mobile client (642) positioned in a building (604). The method (700) includes, with a server (660), receiving (720) from the client (642) an augmented reality request for building system equipment (612) managed by an energy management system (EMS) (620). The method (700) includes transmitting (740) a data request for the equipment (612) to the EMS (620) and receiving (750) building management data (634) for the equipment (612). The method (700) includes generating (760) an overlay (656) with an object created based on the building management data (634), which may be sensor data, diagnostic procedures, or the like. The overlay (656) is configured for concurrent display on a display screen (652) of the client (642) with a real-time image of the building equipment (612). The method (700) includes transmitting (770) the overlay (656) to the client (642).
    Type: Grant
    Filed: November 15, 2010
    Date of Patent: September 9, 2014
    Assignee: Alliance for Sustainable Energy, LLC
    Inventor: Larry J. Brackney
  • Patent number: 8832576
    Abstract: Facilitating display of, and interaction with, secure user-centric information via a user platform operated by a user. A user identity is transmitted to an external computing device hosting an identity management server to authenticate the user. After authenticating, a desktop channel grid framework is displayed on the user platform. The channel grid framework includes multiple channels having respective contents represented as multiple user-selectable items, through which respective portions of the secure user-centric information are presented. At least some of the secure user-centric information in at least one channel is based on the user identity, and in displaying the at least one channel as a selectable item, the at least one channel is authenticated by the identity management server.
    Type: Grant
    Filed: April 13, 2011
    Date of Patent: September 9, 2014
    Assignee: Cyandia, Inc.
    Inventors: Michael Wetzer, Thomas Theriault
  • Patent number: 8830264
    Abstract: A method for providing an image to a multiple number of devices is disclosed. The method includes providing a part of the image to a first device, receiving, from a second device, second position information of the second device relative to first position information of the first device, and providing, to the second device, another part of the image corresponding to the position information of the second device.
    Type: Grant
    Filed: October 12, 2012
    Date of Patent: September 9, 2014
    Assignee: KT Corporation
    Inventors: Eui-Seung Son, Won-Yeol Lee, Eun-Kyoung Paik, Hyun-Pyo Kim
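The part-by-position scheme above amounts to cropping a shared image according to each device's offset from the first device. The sketch below is an assumed illustration (pixel-offset units and the function name are invented), showing the core indexing only.

```python
def crop_for_device(image, part_w, part_h, offset):
    """Return the part of a large shared image that a second device
    should display, given its (dx, dy) position in pixels relative to
    the first device's part."""
    dx, dy = offset
    return [row[dx:dx + part_w] for row in image[dy:dy + part_h]]
```

Tiling several phones side by side with offsets (0, 0), (part_w, 0), ... would let them jointly display one large image.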
  • Patent number: 8823739
    Abstract: A solution for managing a videoconference is provided. Multiple virtual backgrounds can be stored, and a virtual background can be selected to be used for a first participant when he/she is conducting a videoconference with a second participant. The virtual background can be selected based on one or more attributes of the first and/or second participant, one or more attributes of the videoconference, and/or the like. The virtual backgrounds can be utilized, for example, to provide a desired perception, message, and/or the like, of a business entity to individuals outside of the business entity that are interacting with its personnel via videoconferencing.
    Type: Grant
    Filed: August 25, 2010
    Date of Patent: September 2, 2014
    Assignee: International Business Machines Corporation
    Inventors: Rudolf C. Baron, Jr., Andrew R. Jones, Michael L. Massimi, Kevin C. McConnell
  • Patent number: 8817045
    Abstract: Systems and methods of interacting with a virtual space, in which a mobile device is used to electronically capture image data of a real-world object, the image data is used to identify information related to the real-world object, and the information is used to interact with software to control at least one of: (a) an aspect of an electronic game; and (b) a second device local to the mobile device. Contemplated systems and methods can be used for gaming, in which the image data can be used to identify a name of the real-world object, to classify it, to identify it as a player in the game, or to identify it as a goal object or as having some other value in the game.
    Type: Grant
    Filed: March 22, 2011
    Date of Patent: August 26, 2014
    Assignee: Nant Holdings IP, LLC
    Inventor: Ronald H. Cohen
  • Patent number: 8817047
    Abstract: A portable device is disclosed. The portable device according to one embodiment includes a camera unit configured to capture an image in front of the portable device, a display unit configured to display a virtual image, and a processor configured to control the camera unit and the display unit, the processor further configured to detect a marker object from the image, display the virtual image corresponding to the marker object based on a position of the marker object when the marker object is detected, detect a position change of the marker object in the image, move the virtual image according to the position change when the position change is detected and obtain a first moving speed of the virtual image or a second moving speed of the marker object, when the first moving speed or the second moving speed is faster than a first reference speed, lower the first moving speed to less than the first reference speed.
    Type: Grant
    Filed: December 27, 2013
    Date of Patent: August 26, 2014
    Assignee: LG Electronics Inc.
    Inventors: Doyoung Lee, Yongsin Kim, Hyorim Park
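The speed-limiting behaviour described above (lowering a virtual image's moving speed when it exceeds a reference speed) reduces to clamping a per-frame displacement. The following is an illustrative sketch with assumed names and 2D units, not the patent's claimed implementation.

```python
import math

def clamp_virtual_speed(displacement, dt, ref_speed):
    """Scale a virtual image's per-frame displacement so that its speed
    never exceeds the reference speed, smoothing fast marker motion."""
    dx, dy = displacement
    speed = math.hypot(dx, dy) / dt
    if speed <= ref_speed:
        return displacement
    scale = ref_speed / speed
    return (dx * scale, dy * scale)
```

When the marker jumps quickly, the virtual image lags behind at the reference speed instead of snapping, which reads as smoother tracking.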
  • Publication number: 20140232745
    Abstract: A method and device for synthesizing an image for audiovisual communication includes: generating, if a request for synthesizing a sensor image input through a camera sensor and an image designated by a user is input, a synthesis image of the sensor image and the designated image. The method also includes storing, when the synthesis image is generated, object information of an object of the sensor image and object information of an object of the designated image; and changing, if a sensor image is input in which object information of the sensor image object has changed, object information of the designated image object according to the changed object information of the sensor image object and synthesizing the sensor image and the designated image in which object information has changed.
    Type: Application
    Filed: April 29, 2014
    Publication date: August 21, 2014
    Applicant: Samsung Electronics Co., Ltd.
    Inventor: Sung Dae Cho
  • Patent number: 8810598
    Abstract: Interference-based augmented reality hosting platforms are presented. Hosting platforms can include networking nodes capable of analyzing a digital representation of scene to derive interference among elements of the scene. The hosting platform utilizes the interference to adjust the presence of augmented reality objects within an augmented reality experience. Elements of a scene can constructively interfere, enhancing presence of augmented reality objects; or destructively interfere, suppressing presence of augmented reality objects.
    Type: Grant
    Filed: June 30, 2011
    Date of Patent: August 19, 2014
    Assignee: Nant Holdings IP, LLC
    Inventor: Patrick Soon-Shiong
  • Patent number: 8803873
    Abstract: An image display apparatus and an image display method are provided. The image display apparatus according to an embodiment displays a main screen and a sub-screen having a different depth or slope from the main screen so as to create the illusion of depth and distance.
    Type: Grant
    Filed: October 15, 2010
    Date of Patent: August 12, 2014
    Assignee: LG Electronics Inc.
    Inventors: Kyung Hee Yoo, Sang Jun Koo, Sae Hun Jang, Uni Young Kim, Hyung Nam Lee
  • Patent number: 8803915
    Abstract: An information display device improves readability even in cases of an unstable reception condition when information is superimposed on taken images by means of optical space transmission and displayed on the taken images. An imaging section time-sequentially takes images. An information processing section extracts, from regions whose brightness changes with time in images taken, communication information containing information for display of each region based on changes in brightness of the region. The information processing section also generates stability information representing a degree of stability of a communication state of the communication information. A display control section superimposes the extracted information for display contained in the communication information of each region on the taken images, in a mode determined in accordance with the corresponding generated stability information, and displays the information for display superimposed on the images on a display device.
    Type: Grant
    Filed: November 4, 2010
    Date of Patent: August 12, 2014
    Assignee: Panasonic Intellectual Property Corporation of America
    Inventors: Koji Nakanishi, Kazunori Yamada
  • Patent number: 8803916
    Abstract: In some embodiments, a method is provided. The method includes receiving an augmented reality request from a mobile access terminal. The method also includes identifying a context for the augmented reality request and a feature set supported by the mobile access terminal associated with the augmented reality request. The method also includes mapping the identified feature set and the context to a subset of available augmented reality operations. The method also includes executing the subset of available augmented reality operations to generate augmented reality content corresponding to the augmented reality request. The method also includes streaming the augmented reality content to the mobile access terminal associated with the augmented reality request for playback.
    Type: Grant
    Filed: May 3, 2012
    Date of Patent: August 12, 2014
    Assignee: Sprint Communications Company L.P.
    Inventors: Lyle W. Paczkowski, Arun Rajagopal, Matthew Carl Schlesener
  • Patent number: 8803880
    Abstract: This disclosure relates to simulating the light-reflective condition of an object when situated in a given environment. A spatial irradiance mapping of the environment may be obtained, from which a series of directional incidence light sources are determined. The reflective qualities of the object may be modeled as a bi-directional reflection distribution function to be applied to directional incidence light sources. The spatial irradiance mapping and/or bi-directional reflection distribution function may be obtained according to image-based techniques.
    Type: Grant
    Filed: August 21, 2009
    Date of Patent: August 12, 2014
    Assignee: Peking University
    Inventors: Bingfeng Zhou, Jie Feng
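Evaluating a reflection model against a set of directional incidence light sources, as the abstract describes, can be illustrated with the simplest BRDF of all: a Lambertian (purely diffuse) surface. The code below is a hedged sketch, assuming unit-length vectors and a scalar albedo; the patent's actual BRDF is image-based and far richer.

```python
def lambert_radiance(normal, albedo, lights):
    """Shade a surface point with a Lambertian (diffuse) BRDF under a set
    of directional light sources, each given as a (direction-toward-light,
    intensity) pair with unit-length directions."""
    nx, ny, nz = normal
    total = 0.0
    for (lx, ly, lz), intensity in lights:
        ndotl = nx * lx + ny * ly + nz * lz
        if ndotl > 0:  # lights behind the surface contribute nothing
            total += albedo * intensity * ndotl
    return total
```

Summing over lights derived from a spatial irradiance map of the environment is what lets a virtual object appear lit consistently with the real scene.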
  • Publication number: 20140218397
    Abstract: A method, apparatus and computer program product are therefore provided to enable virtual planning and selection of devices. In this regard, the method, apparatus, and computer program product may receive an image, where the image includes a set of image scale values. The image scale values may map the image size to real world units of measure. The image scale values may be used to scale the size of a model of a device, such that the model of the device may be superimposed on the image. The model of the device may be manipulated to view the relationship of the device model to the image to assist with selection of the proper device for a procedure. In this manner, a practitioner may visualize the device in the same scale as the image, so that the practitioner may be assured that the size and scale of the device is accurate.
    Type: Application
    Filed: February 4, 2013
    Publication date: August 7, 2014
    Applicant: MCKESSON FINANCIAL HOLDINGS
    Inventors: Victor Rutman, Ifat Lavi, Ran Kornowski
  • Patent number: 8797324
    Abstract: Determining intersections between rays and triangles is at the heart of most Computer Generated 3D images. The present disclosure describes a new method for determining the intersections between a set of rays and a set of triangles. The method is unique as it processes arbitrary rays and arbitrary primitives, and provides the lower complexity typical to ray-tracing algorithms without making use of a spatial subdivision data structure which would require additional memory storage. Such low memory usage is particularly beneficial to all computer systems creating 3D images where the available on-board memory is limited and critical, and must be minimized. Also, a pivot-based streaming novelty allows minimizing conditional branching inherent to normal ray-tracing techniques by handling large streams of rays. In most cases, our method displays much faster times for solving similar intersection problems than preceding state of the art methods on similar systems.
    Type: Grant
    Filed: January 20, 2012
    Date of Patent: August 5, 2014
    Assignee: UWS Ventures, Ltd.
    Inventor: Benjamin Mora
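For context on what a ray/triangle intersection computation looks like, here is the standard Möller-Trumbore test in plain Python. To be clear, this is the classic textbook algorithm, not the patent's pivot-based streaming method; it is included only to ground the abstract's terminology.

```python
def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection; returns the hit
    distance t along the ray, or None on a miss."""
    def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                             a[2] * b[0] - a[0] * b[2],
                             a[0] * b[1] - a[1] * b[0])

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:            # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:        # outside first barycentric bound
        return None
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:    # outside second barycentric bound
        return None
    t = dot(e2, qvec) * inv_det
    return t if t > eps else None
```

The branch-heavy inner loop here is exactly the conditional branching that the patent's stream-based approach is designed to minimize.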
  • Patent number: 8797351
    Abstract: In a method for graphically representing the surroundings of a motor vehicle, graphical elements, which assist the driver in interpreting the spatial information contained in the scene image, are superimposed on a scene image that represents a three-dimensional scene of the surroundings of the motor vehicle and that is output two-dimensionally or three-dimensionally to the driver. The graphical elements are at least partially configured and arranged in the scene image such that they embody in the surrounding scene at least one virtual boundary object, which has a three-dimensional spatial shape and exhibits at least one reference surface that delimits a free space, attainable by the motor vehicle without risk of collision, from an obstacle space, attainable only with risk of collision.
    Type: Grant
    Filed: October 29, 2007
    Date of Patent: August 5, 2014
    Assignee: Bayerische Motoren Werke Aktiengesellschaft
    Inventor: Alexander Augst
  • Patent number: 8797352
    Abstract: The invention relates to a method and devices for enabling a user to visualize a virtual model in a real environment. According to the invention, a 2D representation of a 3D virtual object is inserted, in real-time, into the video flows of a camera aimed at a real environment in order to form an enriched video flow. A plurality of cameras generating a plurality of video flows can be simultaneously used to visualize the virtual object in the real environment according to different angles of view. A particular video flow is used to dynamically generate the effects of the real environment on the virtual model. The virtual model can be, for example, a digital copy or virtual enrichments of a real copy. A virtual 2D object, for example the representation of a real person, can be inserted into the enriched video flow.
    Type: Grant
    Filed: August 9, 2006
    Date of Patent: August 5, 2014
    Assignee: Total Immersion
    Inventors: Valentin Lefevre, Jean-Marie Vaidie
  • Patent number: 8786596
    Abstract: Techniques are described for deriving information, including graphical representations, based on perspectives of a 3D scene by utilizing sensor model representations of location points in the 3D scene. A 2D view point representation of a location point is derived based on the sensor model representation. From this information, a data representation can be determined. The 2D view point representation can be used to determine a second 2D view point representation. Other techniques include using sensor model representations of location points associated with dynamic objects in a 3D scene. These sensor model representations are generated using sensor systems having perspectives external to the location points and are used to determine a 3D model associated with a dynamic object. Data or graphical representations may be determined based on the 3D model. A system for obtaining information based on perspectives of a 3D scene includes a data manager and a renderer.
    Type: Grant
    Filed: July 22, 2009
    Date of Patent: July 22, 2014
    Assignee: Disney Enterprises, Inc.
    Inventor: Gregory House
  • Patent number: 8773467
    Abstract: An embodiment of the invention provides a method for displaying information on a portable device, wherein the portable device includes an interface, a camera, and a screen. The interface identifies an object to be installed; and, the camera captures at least one image, wherein the image includes locations for installing objects. The screen displays an augmented image of locations for installing objects, wherein the augmented image identifies at least one optimal location for installing the object.
    Type: Grant
    Filed: June 13, 2011
    Date of Patent: July 8, 2014
    Assignee: International Business Machines Corporation
    Inventors: Mark S. Chen-Quee, Suzanne Carol Deffeyes, Neil Alan Katz, Brian O'Connell
  • Patent number: 8773464
    Abstract: Aspects of the present invention relate to methods and systems for capturing, sharing and recording the information on a collaborative writing surface. According to a first aspect of the present invention, currently persistent collaborative-writing-surface content may be imaged during periods of occlusion by an occluder. According to a second aspect of the present invention, the occluder may be imaged as transparent allowing the visibility of the occluded currently persistent collaborative-writing-surface content. According to a third aspect of the present invention, the occluder may be imaged as a silhouette allowing visibility of the occluded currently persistent collaborative-writing-surface content.
    Type: Grant
    Filed: September 15, 2010
    Date of Patent: July 8, 2014
    Assignee: Sharp Laboratories of America, Inc.
    Inventor: Richard John Campbell
  • Patent number: 8773465
    Abstract: An apparatus for providing navigational information associated with locations of objects includes an imaging device configured to acquire image data, a visual display coupled to the imaging device and configured to display the image data, a position measuring device configured to determine position information associated with the imaging device, and an orientation device configured to determine orientation information associated with the imaging device. The apparatus may also include a rendering system coupled to the visual display, the position measuring device, and the orientation device. The rendering system may be configured to determine image coordinates associated with a location of an object and provide a navigational graphic on the visual display oriented relative to the image coordinates.
    Type: Grant
    Filed: September 11, 2009
    Date of Patent: July 8, 2014
    Assignee: Trimble Navigation Limited
    Inventors: Peter France, Kevin Sharp, Stuart Ralston
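Determining "image coordinates associated with a location of an object," as claimed above, is at heart a camera projection. The sketch below uses a deliberately simplified pinhole model (camera looking down +z, no rotation from the orientation device); the axis convention, focal length in pixels, and function name are all assumptions for illustration.

```python
def to_image_coords(obj_world, cam_pos, focal, width, height):
    """Project an object's world position into image coordinates for a
    camera at cam_pos looking down +z (simplified pinhole, no rotation).
    Returns (u, v) pixel coordinates, or None if behind the camera."""
    x = obj_world[0] - cam_pos[0]
    y = obj_world[1] - cam_pos[1]
    z = obj_world[2] - cam_pos[2]
    if z <= 0:
        return None
    u = width / 2 + focal * x / z
    v = height / 2 + focal * y / z
    return (u, v)
```

A full implementation would first rotate the offset vector by the orientation measured for the imaging device before projecting; the navigational graphic is then drawn at (u, v) over the live image.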
  • Patent number: 8760471
    Abstract: A failure analysis apparatus obtains information associated with an operational status of a data center, determines information regarding fault repair work for the data center, based on the information associated with the operational status, and transmits the information regarding the fault repair work to a head mounted display (HMD). The HMD synthesizes and presents computer graphics image data for providing guidance for a method of the fault repair work, with an image of real space, based on the information regarding the fault repair work. After the fault repair work according to the guidance presented by the HMD, if the information associated with the operational status of the data center is newly obtained, the failure analysis apparatus newly determines the information regarding the fault repair work for the data center based on the information associated with the operational status, and transmits the information to the HMD.
    Type: Grant
    Filed: March 30, 2011
    Date of Patent: June 24, 2014
    Assignee: NS Solutions Corporation
    Inventors: Noboru Ihara, Kazuhiro Sasao
  • Patent number: 8760470
    Abstract: An image composition unit outputs a composition image of a physical space and virtual space to a display unit. The image composition unit calculates, as difference information, a half of the difference between an imaging time of the physical space and a generation completion predicted time of the virtual space. The difference information and acquired position and orientation information are transmitted to an image processing apparatus. A line-of-sight position prediction unit updates previous difference information using the received difference information, calculates, as the generation completion predicted time, a time ahead of a receiving time by the updated difference information, and predicts the position and orientation of a viewpoint at the calculated generation completion predicted time using the received position and orientation information. The virtual space based on the predicted position and orientation, and the generation completion predicted time are transmitted to a VHMD.
    Type: Grant
    Filed: July 22, 2009
    Date of Patent: June 24, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventor: Tsutomu Utagawa
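The look-ahead scheme in the abstract above can be sketched roughly as follows. The class name, the smoothing factor, and the linear pose extrapolation are assumptions for illustration; the patent covers the full position-and-orientation prediction:

```python
class LatencyPredictor:
    """Sketch: the HMD side reports half the gap between the capture time of
    the real image and the predicted completion time of the matching virtual
    frame; the renderer blends that offset into a running estimate and
    extrapolates the viewpoint to the moment the virtual frame will be ready."""

    def __init__(self, smoothing=0.5):
        self.offset = 0.0            # smoothed delay estimate (seconds)
        self.smoothing = smoothing

    def update_offset(self, reported_half_diff):
        # update the previous difference information with the received value
        self.offset = (1 - self.smoothing) * self.offset \
                      + self.smoothing * reported_half_diff
        return self.offset

    def predict_pose(self, recv_time, pos, vel):
        # generation-completion predicted time = receiving time + offset;
        # linearly extrapolate position (orientation handled analogously)
        t_done = recv_time + self.offset
        dt = t_done - recv_time
        return t_done, [p + v * dt for p, v in zip(pos, vel)]

pred = LatencyPredictor(smoothing=1.0)   # take the report at face value
pred.update_offset(0.010)                # 10 ms reported half-difference
t_done, pos = pred.predict_pose(100.0, [0.0, 1.5, 0.0], [0.2, 0.0, 0.0])
```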
  • Publication number: 20140168258
    Abstract: A method, apparatus and computer program product are provided in order to augment an image of a location with a representation of a transient object. In the context of a method, an image of a location to be presented is identified. The method also includes identifying a transient object that is not in the image of the location, but that is anticipated to be at the location during at least one period of time. The method further includes causing the image of the location to be presented with augmentation that provides a representation of the transient object. A corresponding apparatus and computer program product are also provided.
    Type: Application
    Filed: December 13, 2012
    Publication date: June 19, 2014
    Inventors: David Alexander Dearman, Raymond Rischpater, Carmen Au, Jason Wither
  • Publication number: 20140168259
    Abstract: An image processing device includes: a processor; and a memory which stores a plurality of instructions, which when executed by the processor, cause the processor to execute, acquiring an image to be displayed including a two-dimensional code and a positioning pattern superimposed, wherein the two-dimensional code defines information to be displayed on the basis of an image pattern including a first pixel value and a second pixel value, wherein the positioning pattern defines reference coordinates to display the information to be displayed; extracting the two-dimensional code from the image, using a first color component to extract a pixel value identified as the first pixel value and a pixel value identified as the second pixel value; and extracting the positioning pattern from the image using a second color component that identifies as identical to the first pixel value both pixel values that are identified as the first and second pixel values.
    Type: Application
    Filed: September 27, 2013
    Publication date: June 19, 2014
    Applicant: FUJITSU LIMITED
    Inventor: Nobuyasu Yamaguchi
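The two-channel idea above, where one colour component separates the two module values while the other makes them indistinguishable so that only the positioning pattern stands out, can be illustrated as below. The channel indices, threshold, and sample pixels are all assumptions, not values from the patent:

```python
def split_code_and_finder(pixels, first_chan=0, second_chan=2, thresh=128):
    """Split an RGB image into a code mask and a positioning-pattern mask.

    In the first channel the two module values differ, so thresholding it
    recovers the 2D code; in the second channel both module values read the
    same, so thresholding it recovers only the positioning pattern.
    """
    code_mask = [[1 if px[first_chan] >= thresh else 0 for px in row]
                 for row in pixels]
    finder_mask = [[1 if px[second_chan] >= thresh else 0 for px in row]
                   for row in pixels]
    return code_mask, finder_mask

# Two module colours that differ in red but share blue, on a black background:
img = [[(255, 0, 255), (0, 0, 255)],
       [(0, 0, 0),     (255, 0, 255)]]
code, finder = split_code_and_finder(img)
```

In the `finder` mask both module colours collapse to the same value, so the positioning pattern can be located without being confused by the code's data modules.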
  • Patent number: 8749571
Abstract: An information processing section of a game apparatus executes a program for implementing a step S100 of acquiring a camera image; a step S200 of detecting a marker; a step S400 of calculating a position and an orientation of a virtual camera; a step S600 of generating an animation in which a hexahedron is caused to appear on the marker when the start of a game is requested; a step S800 of generating an animation in which the hexahedron is unfolded so as to position thereon virtual objects representing targets; a step S900 of mapping the photographed image on objects; a step S1000 of taking an image of the objects by means of the virtual camera; and steps S1100 and S1200 of displaying the camera image and an object image which is superimposed on the camera image.
    Type: Grant
    Filed: October 29, 2010
    Date of Patent: June 10, 2014
    Assignees: Nintendo Co., Ltd., Hal Laboratory Inc.
    Inventor: Tetsuya Noge
  • Patent number: 8749583
    Abstract: A first image pickup means picks up an image of a real space, to thereby obtain a first background image. A first positional relationship estimating unit estimates, in a case where the first background image includes a first marker image being an image of a marker previously registered and located in the real space, a first positional relationship being a spatial positional relationship of the first image pickup means to the marker based on the first marker image, and estimates, in a case where the first background image fails to include the first marker image, the first positional relationship based on a displacement of a feature point extracted from the first background image. An image generating unit generates a first additional image based on the first positional relationship, and synthesizes the first additional image and the first background image, to thereby generate a synthetic image.
    Type: Grant
    Filed: December 28, 2010
    Date of Patent: June 10, 2014
    Assignee: Sony Corporation
    Inventors: Kazuhiro Suzuki, Akira Miyashita, Hiroyuki Ishige
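The marker-or-feature-point fallback in the abstract above can be sketched in 2D; a plain translation stands in for the full spatial positional relationship, and every name here is illustrative:

```python
def estimate_offset(marker_corners, prev_features, curr_features, last_offset):
    """If the registered marker is found in the background image, take the
    offset from the marker directly; otherwise dead-reckon by updating the
    last known offset with the mean displacement of tracked feature points."""
    if marker_corners is not None:
        # centroid of the marker corners defines the reference offset
        xs = [c[0] for c in marker_corners]
        ys = [c[1] for c in marker_corners]
        return (sum(xs) / len(xs), sum(ys) / len(ys))
    # marker lost: average per-feature displacement since the previous frame
    n = len(curr_features)
    dx = sum(c[0] - p[0] for p, c in zip(prev_features, curr_features)) / n
    dy = sum(c[1] - p[1] for p, c in zip(prev_features, curr_features)) / n
    return (last_offset[0] + dx, last_offset[1] + dy)

# Frame 1: marker visible, offset taken from its corners.
o1 = estimate_offset([(10, 10), (20, 10), (20, 20), (10, 20)], None, None, None)
# Frame 2: marker lost; feature points all moved +3 px in x.
o2 = estimate_offset(None, [(0, 0), (4, 0)], [(3, 0), (7, 0)], o1)
```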
  • Patent number: 8743145
    Abstract: Augmented reality may be provided to one or more users in a real-world environment. For instance, information related to a recognized object may be displayed as a visual overlay appearing to be in the vicinity of the object in the real-world environment that the user is currently viewing. The information displayed may be determined based on at least one of captured images and transmissions from other devices. In one example, a portable apparatus receives a transmitted user ID and may submit the user ID to a remote computing device that compares a profile of a user corresponding to the user ID with a profile associated with the portable apparatus for determining, at least in part, information to be displayed as the visual overlay. As another example, the portable apparatus may include a camera to capture images that are analyzed for recognizing objects and identifying other users.
    Type: Grant
    Filed: August 26, 2010
    Date of Patent: June 3, 2014
    Assignee: Amazon Technologies, Inc.
    Inventor: Roy F. Price
  • Patent number: 8743144
    Abstract: There is provided a mobile terminal including a movement information acquisition section for acquiring movement information of a mobile terminal possessed by a user, a picked-up image acquisition section for acquiring a peripheral image of the mobile terminal, which is imaged by an imaging device, a transmission section for transmitting the movement information acquired by the movement information acquisition section to a server device which is connected to the mobile terminal via a network, a reception section for receiving community information which is generated by the server device based on a history of the movement information and which can be shared between the user possessing the mobile terminal and another user other than the user possessing the mobile terminal, and a display control section for displaying, on a display screen, a displayed image included in the community information in a superimposed manner on a peripheral image of the mobile terminal.
    Type: Grant
    Filed: May 24, 2010
    Date of Patent: June 3, 2014
    Assignee: Sony Corporation
    Inventor: Kazutaka Takeshita
  • Publication number: 20140149929
Abstract: A system and method for providing processable data associated with anatomical images may provide a user interface including a pivot and stem tool via which to align multiple images, a flashlight bar for viewing portions of an overlaid image, and/or user-movable markers for establishing a location of one or more anatomical landmarks within regions of one or more images.
    Type: Application
    Filed: January 31, 2014
    Publication date: May 29, 2014
    Applicant: BOSTON SCIENTIFIC NEUROMODULATION CORPORATION
    Inventors: Jordan BARNHORST, Keith CARLTON, Scott KOKONES, Troy SPARKS, Aaron Omar SHINN, Zhen ZENG
  • Patent number: 8738754
    Abstract: Systems and methods for managing computing systems are provided. One system includes a capture device for capturing environmental inputs, memory storing code comprising a management module, and a processor. The processor, when executing the code comprising the management module, is configured to perform the method below. One method includes capturing an environmental input, identifying a target device in the captured environmental input, and comparing the target device in the captured environmental input to a model of the target device. The method further includes recognizing, in real-time, a status condition of the target device based on the comparison and providing a user with troubleshooting data if the status condition is an error condition. Also provided are physical computer storage mediums including a computer program product for performing the above method.
    Type: Grant
    Filed: April 7, 2011
    Date of Patent: May 27, 2014
    Assignee: International Business Machines Corporation
    Inventor: David T. Windell
  • Patent number: 8736636
    Abstract: A system and method for providing augmented reality (AR) information to a mobile communication terminal in a mobile communication system is provided. If the mobile communication terminal is determined to have entered a service cell providing AR information, the mobile communication terminal transmits an AR information request including position information to a server. Upon receiving the AR information request signal, the server determines AR information including at least one tag pattern provided in the service cell and information associated with the tag pattern and transmits the AR information to the mobile communication terminal.
    Type: Grant
    Filed: August 26, 2010
    Date of Patent: May 27, 2014
    Assignee: Pantech Co., Ltd.
    Inventor: Young-Jo Kang
  • Publication number: 20140139550
    Abstract: Systems and methods for planning and optimizing bone deformity correction treatments using external fixators. A computer system generates a display of a tiltable ellipse superimposed on digital medical image(s) (radiograph), the ellipse representing a ring of an external fixator attachable to the patient's bone. Based on axial and azimuthal ring rotation user input, the system calculates a 3D position of the resulting graphical representation of the ring. User input controls translation of ring(s). Strut position user input identifies 3D positions for the external fixator struts. Based on graphical input defining 3D biological rate-limiting points for treatment, the system calculates a 3D bone correction speed and/or a number of treatment days, and generates a graphical simulation of this treatment. Further, the system generates a correction plan specifying for each strut a daily sequence of strut lengths and preferred strut sizes, to minimize strut replacements.
    Type: Application
    Filed: January 27, 2014
    Publication date: May 22, 2014
    Applicant: ORTHOHUB, INC.
    Inventor: Andrew HASKELL
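The correction-plan output described above, a daily sequence of strut lengths, can be illustrated with a simple linear schedule under an assumed per-day rate limit; the patent's ring-geometry, rate-limiting-point, and strut-sizing calculations are omitted from this sketch:

```python
import math

def strut_schedule(start_len, target_len, max_daily_change):
    """Return a daily sequence of strut lengths (mm) reaching target_len,
    with no single day's change exceeding max_daily_change.

    A deliberately simplified stand-in for the planner in the abstract.
    """
    delta = target_len - start_len
    days = max(1, math.ceil(abs(delta) / max_daily_change))
    step = delta / days
    return [round(start_len + step * d, 3) for d in range(1, days + 1)]

# Lengthen one strut from 120 mm to 132 mm at no more than 2 mm/day:
plan = strut_schedule(120.0, 132.0, 2.0)
```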
  • Patent number: 8730265
Abstract: A character generating system (10) includes a pickup image information acquiring unit (14), a texture generating unit (15), and a texture pasting unit (16). The pickup image information acquiring unit (14) acquires face pickup image information corresponding to an image pasting area (51) of a face texture (53) of character (70) from the pickup image information. The texture generating unit (15), on the basis of color information of a difference area (52), selects pixels in the image pasting area (51), replaces the color information of the selected pixels with the color information of the difference area (52), and generates the face texture (53) from the face pickup image information.
    Type: Grant
    Filed: February 10, 2012
    Date of Patent: May 20, 2014
    Assignee: Altron Corporation
    Inventors: Masao Kuwabara, Naoto Kominato, Yuki Watanabe, Kazumitsu Moriyama
  • Patent number: 8715087
Abstract: A method, apparatus and computer program product for a video game including user-determined location information is presented. Location information (e.g. GPS, Google Maps, an entered address or the like) determined by a user of a video game is acquired. Then the user-determined location information relating to a physical location determined by the user is mapped to a video game environment, wherein the user of the video game experiences objects from the user's entered location while playing the video game.
    Type: Grant
    Filed: April 4, 2012
    Date of Patent: May 6, 2014
    Inventor: David W. Rouille

  • Patent number: 8717420
    Abstract: The invention provides a head mounted image-sensing display device including a pair of image-sensing units (18R, 18L) that stereoscopically capture a physical space and output a pair of stereoscopic images, and display units (13R, 13L) for displaying images for the right eye and images for the left eye. Image sensing parameters that are previously measured for the image-sensing units are stored in image sensing parameter storage units (26R, 26L). As a result, a difference in parallax between captured images of a physical space and images of a virtual object is reduced.
    Type: Grant
    Filed: March 7, 2008
    Date of Patent: May 6, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventors: Toshiyuki Yasuda, Toshiyuki Okuma, Yoshihiro Saito, Toshiki Ishino, Takaaki Nakabayashi
  • Patent number: 8711175
    Abstract: A method and system for creating and using a virtual dressing room, by superimposing a non-linearly stretchable object image onto a base image in a display screen of a communication device, the images being planar. The method comprises scanning an encoded indicium associated with the object image, accessing and uploading a URL associated with the object image, the object image including a plurality of object image critical points, accessing the base image at the communication device, the base image including a plurality of base image critical points respectively corresponding to the object image critical points, re-mapping the object image via global transformation of coordinates associated with the object image critical points, such that the re-mapped object image critical points coincide with the respective base image critical points, and superimposing the re-mapped object image onto the base image for display at the display screen.
    Type: Grant
    Filed: August 12, 2011
    Date of Patent: April 29, 2014
    Assignee: Modiface Inc.
    Inventor: Parham Aarabi
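The global re-mapping of critical points above can be illustrated with a deliberately simplified transform: an axis-aligned scale plus translation fitted so two object-image critical points land exactly on the corresponding base-image points. The patent's transformation and critical-point sets are richer than this sketch, and all names here are illustrative:

```python
def fit_scale_translation(src_a, src_b, dst_a, dst_b):
    """Fit x/y scale and translation taking two source critical points onto
    two destination critical points, and return the resulting warp function."""
    sx = (dst_b[0] - dst_a[0]) / (src_b[0] - src_a[0])
    sy = (dst_b[1] - dst_a[1]) / (src_b[1] - src_a[1])
    tx = dst_a[0] - sx * src_a[0]
    ty = dst_a[1] - sy * src_a[1]

    def warp(pt):
        # apply the fitted global transformation to any object-image point
        return (sx * pt[0] + tx, sy * pt[1] + ty)

    return warp

# Map two garment-image critical points onto the matching points in the
# base (user) image; every other garment pixel follows the same transform.
warp = fit_scale_translation((100, 50), (300, 250), (220, 140), (320, 240))
```

The fitted warp sends each critical point exactly onto its counterpart, and intermediate points move consistently, which is the "coincide with the respective base image critical points" condition of the abstract in its simplest form.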
  • Publication number: 20140111541
Abstract: Apparatus and methods are described for use with a portion of a body of a subject including acquiring a plurality of images of the portion, and displaying one of the images. In response to receiving an input that is indicative of a given location within the image, an indication of a region within the portion is generated on the image. In response to receiving a further input, a dimension of the region that is indicated within the image is modified. In response thereto, at least one tool that is suitable for being placed at the region is identified, and/or a dimension of at least one tool that is suitable for being placed at the region is identified. An output is generated in response to the identification. Other applications are also described.
    Type: Application
    Filed: December 30, 2013
    Publication date: April 24, 2014
    Applicant: SYNC-RX, LTD.
    Inventors: David TOLKOWSKY, Gavriel IDDAN, Ran COHEN