Placing Generated Data In Real Scene Patents (Class 345/632)
  • Patent number: 8558848
    Abstract: Video drive-by data provides a street level view of a neighborhood surrounding a selected geographic location. A video and data server farm incorporates a video storage server that stores video image files containing video drive-by data corresponding to a geographic location, a database server that processes a data query received from a user over the Internet corresponding to a geographic location of interest, and an image processing server. In operation, the database server identifies video image files stored in the video storage server that correspond to the geographic location of interest contained in the data query and transfers the video image files over a pre-processing network to the image processing server. The image processing server converts the video drive-by data to post-processed video data corresponding to a desired image format and transfers the post-processed video data via a post-processing network to the Internet in response to the query.
    Type: Grant
    Filed: March 9, 2013
    Date of Patent: October 15, 2013
    Inventors: William D. Meadow, Randall A. Gordie, Jr.
  • Patent number: 8558924
    Abstract: The camera platform system of the present invention operates at least one function selected from zoom/focus/tilt/pan/iris of a mounted camera or lens in response to an operation instruction output from an operation device. The camera platform system includes a drive control unit configured to convert an operation instruction output from the operation device into a drive control signal corresponding to any one motion of the zoom/focus/tilt/pan/iris; a motion prediction position calculation unit configured to calculate a motion prediction position of any one of the zoom/focus/tilt/pan/iris based on the drive control signal converted by the drive control unit; and a motion prediction position output unit configured to output the motion prediction position, which has been calculated by the motion prediction position calculation unit, as motion prediction position information.
    Type: Grant
    Filed: April 26, 2011
    Date of Patent: October 15, 2013
    Assignee: Canon Kabushiki Kaisha
    Inventor: Kenji Kagei
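    A minimal sketch (Python, illustrative names and constants only) of the kind of motion prediction the abstract describes: given the drive control signal for one axis and an assumed control latency, extrapolate where that axis will be. The constant-velocity model is an assumption, not taken from the patent.
```python
from dataclasses import dataclass

@dataclass
class DriveControlSignal:
    """Hypothetical drive command for one axis (e.g. pan), velocity in degrees/second."""
    axis: str
    velocity: float

def predict_position(current_deg: float, signal: DriveControlSignal, latency_s: float) -> float:
    """Extrapolate where the axis will be after the drive has acted for latency_s
    seconds, assuming it moves at the commanded velocity (constant-velocity model)."""
    return current_deg + signal.velocity * latency_s

# Example: pan axis at 10.0 deg, commanded at 5 deg/s, 0.2 s control latency -> 11.0 deg
print(predict_position(10.0, DriveControlSignal("pan", 5.0), 0.2))
```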
  • Publication number: 20130265330
    Abstract: Provided is an information processing apparatus, including: an image-taking unit configured to take an image of real scenery to thereby obtain a real image; a marker-detecting unit configured to extract a marker image from the real image, the marker image being an image of marker projection light, the marker projection light being projected by a projection device to the real scenery in order to provide spatial first information, the first information being necessary to display virtual information such that the virtual information is superimposed on the real scenery, second information being added to the marker projection light; and an extracting unit configured to extract the second information added to the extracted marker image.
    Type: Application
    Filed: March 28, 2013
    Publication date: October 10, 2013
    Applicant: Sony Corporation
    Inventors: Tetsuro Goto, Masatoshi Ueno, Kenichi Kabasawa, Toshiyuki Nakagawa, Daisuke Kawakami, Shinobu Kuriya, Tsubasa Tsukahara, Hisako Sugano
  • Patent number: 8553049
    Abstract: An information-processing apparatus determines whether a stimulation generation unit and a background virtual object contact each other based on position and orientation information about the stimulation generation unit and position and orientation information about the background virtual object. If it is determined that the stimulation generation unit and the background virtual object contact each other, the information-processing apparatus determines whether the stimulation generation unit is included within an attention range. The information-processing apparatus generates operation setting information for controlling an operation of the stimulation generation unit according to a result of the determination and outputs the generated operation setting information to the stimulation generation unit.
    Type: Grant
    Filed: September 9, 2008
    Date of Patent: October 8, 2013
    Assignee: Canon Kabushiki Kaisha
    Inventors: Atsushi Nogami, Naoki Nishimura
  • Publication number: 20130257905
    Abstract: An image processing apparatus includes a synthesizing unit which synthesizes a drawn image drawn on a displayed image on a display device with the displayed image; a content data receiving unit which receives content data; a content data conversion unit which converts the received content data to image data; an external snapshot generation unit which generates an external snapshot to be displayed on the display unit based on the converted image data and stores the external snapshot in a storage unit; and a display control unit which displays the synthesized image by the synthesizing unit on the display device and also displays an object corresponding to the external snapshot in a selectable manner so that the external snapshot is displayed on the display device when the object is selected.
    Type: Application
    Filed: March 27, 2013
    Publication date: October 3, 2013
    Applicant: RICOH COMPANY, LTD.
    Inventor: Eiji KEMMOCHI
  • Patent number: 8547402
    Abstract: Methods, systems, and related computer program products for processing and displaying computer-aided detection (CAD) information associated with medical breast x-ray images, such as breast x-ray tomosynthesis volumes, are described. An interactive graphical user interface for displaying a tomosynthesis data volume is described that includes a display of a two-dimensional composited image having slabbed sub-images spatially localized to marked CAD findings.
    Type: Grant
    Filed: October 6, 2010
    Date of Patent: October 1, 2013
    Assignee: Hologic, Inc.
    Inventors: Kevin A. Kreeger, Julian Marshall, Georgia K. Hitzke, Haili Chui
  • Patent number: 8547401
    Abstract: A portable device configured to provide an augmented reality experience is provided. The portable device has a display screen configured to display a real world scene. The device includes an image capture device associated with the display screen. The image capture device is configured to capture image data representing the real world scene. The device includes image recognition logic configured to analyze the image data representing the real world scene. Image generation logic responsive to the image recognition logic is included. The image generation logic is configured to incorporate an additional image into the real world scene. A computer readable medium and a system providing an augmented reality environment are also provided.
    Type: Grant
    Filed: August 19, 2004
    Date of Patent: October 1, 2013
    Assignee: Sony Computer Entertainment Inc.
    Inventors: Dominic S. Mallinson, Richard L. Marks
  • Patent number: 8547399
    Abstract: An image processing apparatus which includes an extension width determination unit for determining an extension width based on a depression time of the cursor at a reference position on an image display unit where a releasing operation of the cursor was performed, the depression time being the time during which the cursor had been kept depressed until the releasing operation was performed, and an ornament piece arrangement unit for arranging the plurality of ornament pieces at positions radially extended away from the reference position with the extension width determined by the extension width determination unit.
    Type: Grant
    Filed: May 26, 2009
    Date of Patent: October 1, 2013
    Assignee: Facebook, Inc.
    Inventors: Yukita Gotohda, Karin Kon
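    A minimal sketch (Python, illustrative constants) of the arrangement the abstract describes: map the cursor's depression time to an extension width, then place the ornament pieces at positions radially extended from the reference position by that width. The linear mapping and the piece count are assumptions.
```python
import math

def extension_width(press_duration_s: float, base: float = 20.0,
                    rate: float = 40.0, max_width: float = 200.0) -> float:
    """Map how long the cursor was kept depressed to a radial extension width
    in pixels. The linear mapping and its constants are illustrative."""
    return min(base + rate * press_duration_s, max_width)

def arrange_pieces(reference_xy, n_pieces: int, width: float):
    """Place n ornament pieces evenly spaced around the reference position,
    each pushed radially outward by the computed extension width."""
    cx, cy = reference_xy
    return [(cx + width * math.cos(2 * math.pi * i / n_pieces),
             cy + width * math.sin(2 * math.pi * i / n_pieces))
            for i in range(n_pieces)]

# Cursor released at (320, 240) after being held down for 1.5 s; arrange 8 pieces.
positions = arrange_pieces((320, 240), n_pieces=8, width=extension_width(1.5))
```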
  • Patent number: 8542250
    Abstract: An entertainment device for combining virtual images with real images captured by a video camera so as to generate augmented reality images. The device comprises receiving means operable to receive a sequence of video images from the video camera via a communications link. The device further comprises detecting means operable to detect an augmented reality marker within the received video images, and processing means operable to generate a virtual image plane in dependence upon the detection of the augmented reality marker by the detecting means. The virtual image plane is arranged to be substantially coplanar with a real surface upon which the augmented reality marker is placed so that virtual images may be generated with respect to the real surface.
    Type: Grant
    Filed: August 17, 2009
    Date of Patent: September 24, 2013
    Assignee: Sony Computer Entertainment Europe Limited
    Inventors: Nathan James Baseley, Nicolas Doucet
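    One conventional way to obtain the coplanar virtual image plane the abstract describes is to estimate the pose of the square marker lying on the real surface; the hedged OpenCV sketch below does this with solvePnP. Corner ordering, marker size, and the calibrated intrinsics are assumptions.
```python
import numpy as np
import cv2

def marker_plane_pose(marker_corners_img, marker_size_m, camera_matrix, dist_coeffs):
    """Estimate the pose of a square AR marker lying on a real surface so that a
    virtual image plane can be placed coplanar with that surface. Corner order is
    assumed to be top-left, top-right, bottom-right, bottom-left; the camera
    intrinsics come from a prior calibration."""
    s = marker_size_m / 2.0
    object_pts = np.array([[-s,  s, 0], [ s,  s, 0],
                           [ s, -s, 0], [-s, -s, 0]], dtype=np.float32)  # marker plane z = 0
    image_pts = np.array(marker_corners_img, dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, camera_matrix, dist_coeffs)
    # rvec/tvec give the rotation and translation of the z = 0 plane in camera
    # coordinates; virtual objects drawn on that plane appear to rest on the surface.
    return ok, rvec, tvec
```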
  • Publication number: 20130226261
    Abstract: A system and method for displaying a volume of activation (VOA) may include a processor that displays via a display device a model of a portion of a patient anatomy that includes anatomical structures, displays via the display device and overlying the display of the model a VOA associated by the processor with a set of anatomical stimulation parameter settings, and graphically identifies interactions between the displayed VOA and a first subset of the anatomical structures associated with one or more stimulation benefits and a second subset of the anatomical structures associated with one or more stimulation side effects, where the graphical identifications differ depending on whether the interaction is with the first subset or the second subset.
    Type: Application
    Filed: March 14, 2013
    Publication date: August 29, 2013
    Applicant: INTELECT MEDICAL, INC.
    Inventors: Troy SPARKS, Jordan BARNHORST, David Arthur BLUM, Keith CARLTON, Scott KOKONES, Engin ERDOGAN, Brian James HOFFER, Arna Diana IONESCU, David Ari LUBENSKY
  • Publication number: 20130222425
    Abstract: A first image processing apparatus displays markers on a monitor to thereby make a second image processing apparatus perform a display control of a second object on an imaged image of an LCD while the second image processing apparatus transmits a marker recognizing signal when the display control is performed based on the markers to thereby make the first image processing apparatus perform a display control of a first object on the monitor.
    Type: Application
    Filed: April 10, 2013
    Publication date: August 29, 2013
    Applicant: Nintendo Co., Ltd.
    Inventor: Shunsaku Kato
  • Patent number: 8521821
    Abstract: Sending and receiving encrypted emails. At a web browser, user input is received requesting a compose email page user interface for a web-based email system. The compose email page user interface is requested from a server for the web-based mail system. Web page code is received from the server for the compose email page user interface. The web page code for the compose email page user interface is parsed to determine screen locations of one or more user input interface elements. The compose email page user interface is rendered in the browser. One or more browser-based interface elements implemented integral to the browser are overlaid onto the compose email page user interface. User input is received in the browser user interface elements. The user input received is encrypted. The encrypted user input is transferred into one or more elements of the compose email page user interface.
    Type: Grant
    Filed: March 17, 2009
    Date of Patent: August 27, 2013
    Assignee: Brigham Young University
    Inventors: Timothy W. van der Horst, Kent Eldon Seamons
  • Patent number: 8514250
    Abstract: A display generation system that is able to generate display signals for an underlay image with at least one embedded safety pattern and display images for an overlay image. The display generation system and method are able to determine whether there are any anomalies or graphical errors when an overlay display generated by the system or generated by some other system is displayed simultaneously with the underlay image with the embedded safety pattern. The display generation system uses the embedded safety pattern to detect the occurrence of anomalies in the simultaneous display and uses information from its own generated overlay image to detect graphical errors in the simultaneous display. Flight display systems for aircraft can use the display generation system and method of the present invention to display an underlay image depicting geographical scenery in the vicinity of the aircraft while on the ground, during takeoff or in flight.
    Type: Grant
    Filed: May 23, 2008
    Date of Patent: August 20, 2013
    Assignee: Universal Avionics Systems Corporation
    Inventors: Hubert Naimer, Patrick Gerald Krohn, Patrick Kemp Glaze, John Russell Jorgensen
  • Publication number: 20130201212
    Abstract: Systems and methods for planning and optimizing bone deformity correction treatments using external fixators. A computer system generates a display of a tiltable ellipse superimposed on digital medical image(s) (radiograph), the ellipse representing a ring of an external fixator attachable to the patient's bone. Based on axial and azimuthal ring rotation user input, the system calculates a 3D position of the resulting graphical representation of the ring. User input controls translation of ring(s). Strut position user input identifies 3D positions for the external fixator struts. Based on graphical input defining 3D biological rate-limiting points for treatment, the system calculates a 3D bone correction speed and/or a number of treatment days, and generates a graphical simulation of this treatment. Further, the system generates a correction plan specifying for each strut a daily sequence of strut lengths and preferred strut sizes, to minimize strut replacements.
    Type: Application
    Filed: February 4, 2013
    Publication date: August 8, 2013
    Applicant: ORTHOHUB, INC.
    Inventor: ORTHOHUB, INC.
  • Publication number: 20130201210
    Abstract: In some embodiments, first information indicative of an image of a scene is accessed. One or more reference features are detected, the reference features being associated with a reference object in the image. A transformation between an image space and a real-world space is determined based on the first information. Second information indicative of input from a user is accessed, the second information identifying an image-space distance in the image space corresponding to a real-world distance of interest in the real-world space. The real-world distance of interest is then estimated based on the second information and the determined transformation.
    Type: Application
    Filed: July 31, 2012
    Publication date: August 8, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Sundeep Vaddadi, Krishnakanth S. Chimalamarri, John H. Hong, Chong U. Lee
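    A hedged sketch of the image-space to real-world transformation the abstract describes, assuming the reference object is planar and its real-world dimensions are known (the credit-card dimensions and pixel coordinates below are illustrative): fit a homography from the object's image corners to its real-world corners, then map two user-selected image points through it and measure their separation.
```python
import numpy as np
import cv2

# Image-space corners of a reference object of known size (here a credit card,
# 85.6 mm x 53.98 mm) and the corresponding real-world corner coordinates in mm.
img_corners = np.array([[412, 310], [698, 318], [694, 502], [408, 494]], dtype=np.float32)
real_corners = np.array([[0, 0], [85.6, 0], [85.6, 53.98], [0, 53.98]], dtype=np.float32)

# Homography mapping image space onto the reference object's (planar) real-world space.
H, _ = cv2.findHomography(img_corners, real_corners)

def real_world_distance(p1_img, p2_img) -> float:
    """Map two user-selected image points through H and return their separation in
    real-world units. Only valid for points on (or close to) the reference plane."""
    pts = np.array([[p1_img, p2_img]], dtype=np.float32)      # shape (1, 2, 2)
    real = cv2.perspectiveTransform(pts, H)[0]
    return float(np.linalg.norm(real[0] - real[1]))

print(real_world_distance((430, 600), (900, 610)))            # distance of interest, in mm
```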
  • Publication number: 20130201213
    Abstract: Medical diagnostic images that are embedded with contact messages about a professional are sent to a patient for sharing with others. Although the images may be obtained using any of a variety of imaging devices, in some instances, the images may be obtained using a soft-tissue-injury diagnostic system for diagnosing soft tissue injury within a patient. A visual display is configured and arranged for receiving and displaying the medical diagnostic image. Personalized contact messages are then embedded within the image as a watermark with information about the professional, such as an electronic business card, contact information, or hyperlinks to additional information. The sharable image with embedded messages is electronically sent to the patient for sharing with others. Personalized contact messages may also be embedded in non-medical images for sharing with others.
    Type: Application
    Filed: March 14, 2013
    Publication date: August 8, 2013
    Applicant: PRECISION BIOMETRICS INC.
    Inventor: PRECISION BIOMETRICS INC.
  • Publication number: 20130201211
    Abstract: A mobile terminal and controlling method thereof are disclosed, by which side information is facilitated to be inserted or modified in a multimedia content using an optically readable code. The present invention includes the steps of acquiring an image, acquiring side information on the acquired image, converting the acquired side information to an optically readable digital code, and saving the digital code and the image in a manner of linking the digital code and the image to each other.
    Type: Application
    Filed: February 1, 2013
    Publication date: August 8, 2013
    Applicant: LG ELECTRONICS INC.
    Inventor: LG ELECTRONICS INC.
  • Publication number: 20130194303
    Abstract: A marking device for a marking operation to mark a presence or an absence of one or more underground facilities is configured to access and display facilities map information, and/or other image information, as a visual aid to facilitate the marking operation. In various aspects, methods and apparatus relate to: selection of a "base" facilities map, or information from a database of facilities map data, relating to a given work site/dig area; selection of a pan and/or zoom (resolution) for displaying facilities map information; updating displayed facilities map information while a marking device is used during a marking operation (e.g. changing pan, zoom and/or orientation); overlaying on the displayed facilities map information marking information and/or landmark information relating to the marking operation; and storing locally on the marking device, and/or transmitting from the marking device, facilities map information and/or overlaid marking/landmark information (e.g.
    Type: Application
    Filed: March 12, 2013
    Publication date: August 1, 2013
    Inventors: Steven Nielsen, Curtis Chambers, Jeffrey Farr
  • Patent number: 8499250
    Abstract: A system and method for interacting with multiple information forms across multiple types of computing devices and platforms is provided. A computer-readable storage media for interacting with multiple information forms across computing devices is also provided and includes computer-readable instructions to cause one or more computer processors to execute operations including authenticating a user; establishing a channel grid framework for the user on a first platform, the channel grid framework providing access to a first computer application displayed on a display device as a channel; deploying the channel from the channel grid framework on the first platform to a second platform in response to a user selecting the channel from the channel grid framework on the first platform and performing a copy operation of the channel to the second platform; and establishing a run time application of the first computer application on the second platform.
    Type: Grant
    Filed: May 13, 2009
    Date of Patent: July 30, 2013
    Assignee: Cyandia, Inc.
    Inventors: Michael Wetzer, Thomas Theriault, Mark Dingman, Rupert Key
  • Publication number: 20130187949
    Abstract: Video drive-by data provides a street level view of a neighborhood surrounding a selected geographic location. A video and data server farm incorporates a video storage server that stores video image files containing video drive-by data corresponding to a geographic location, a database server that processes a data query received from a user over the Internet corresponding to a geographic location of interest, and an image processing server. In operation, the database server identifies video image files stored in the video storage server that correspond to the geographic location of interest contained in the data query and transfers the video image files over a pre-processing network to the image processing server. The image processing server converts the video drive-by data to post-processed video data corresponding to a desired image format and transfers the post-processed video data via a post-processing network to the Internet in response to the query.
    Type: Application
    Filed: March 9, 2013
    Publication date: July 25, 2013
    Applicant: MV Patents, LLC
    Inventors: William D. Meadow, Randall A. Gordie, JR.
  • Patent number: 8493408
    Abstract: A multi-step animation sequence for smoothly transitioning from a map view to a panorama view of a specified location is disclosed. An orientation overlay can be displayed on the panorama, showing a direction and angular extent of the field of view of the panorama. An initial specified location and a current location of the panorama can also be displayed on the orientation overlay. A navigable placeholder panorama to be displayed in place of a panorama at the specified location when panorama data is not available is disclosed. A perspective view of a street name annotation can be laid on the surface of a street in the panorama.
    Type: Grant
    Filed: November 19, 2008
    Date of Patent: July 23, 2013
    Assignee: Apple Inc.
    Inventors: Richard Williamson, Christopher Blumenberg, Mike Matas, Kimon Tsinteris, Ryan Staake, Alex Kan
  • Patent number: 8487962
    Abstract: An augmented reality system for integrating video imagery of an actual dental restoration into a computer-implemented display of a model (that represents a preparation, mesial/distal neighbors, and opposing occlusion) that has been generated from a 3D scan of a patient. The 3D scan data may be generated at a dental office remote from a location at which the augmented reality system is implemented. In one embodiment, the 3D scan data is provided to the augmented reality system as a digital impression.
    Type: Grant
    Filed: March 6, 2007
    Date of Patent: July 16, 2013
    Assignee: D4D Technologies, LLC
    Inventors: Henley S. Quadling, Mark S. Quadling
  • Patent number: 8477246
    Abstract: Methods, systems, products and devices are implemented for editing video image frames. According to one such method, image content is embedded into video. A selection input is received for a candidate location in a video frame of the video. The candidate location is traced in subsequent video frames of the video by approximating three-dimensional camera motion between two frames using a model that compensates for camera rotations, camera translations and zooming, and by optimizing the approximation using statistical modeling of three-dimensional camera motion between video frames. Image content is embedded in the candidate location in the subsequent video frames of the video based upon the tracking thereof.
    Type: Grant
    Filed: July 9, 2009
    Date of Patent: July 2, 2013
    Assignee: The Board of Trustees of the Leland Stanford Junior University
    Inventors: Ashutosh Saxena, Siddharth Batra, Andrew Y. Ng
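    The patent approximates and statistically optimizes 3D camera motion; the sketch below is a much simpler stand-in (2D feature tracking plus a RANSAC homography in OpenCV) that shows the overall shape of the pipeline: follow the candidate region from frame to frame, then warp the content to be embedded into the tracked quadrilateral. All parameters are illustrative.
```python
import numpy as np
import cv2

def track_and_embed(prev_gray, curr_gray, curr_frame, region_corners, overlay):
    """One tracking step: follow feature points with pyramidal Lucas-Kanade flow,
    fit an inter-frame homography (RANSAC suppresses moving foreground objects),
    move the candidate region with it, and warp the overlay into that region.
    region_corners is a float32 (4, 2) array of the quadrilateral's corners."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                       qualityLevel=0.01, minDistance=7)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
    good_prev = pts_prev[status.flatten() == 1]
    good_curr = pts_curr[status.flatten() == 1]

    H, _ = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)
    new_corners = cv2.perspectiveTransform(region_corners.reshape(-1, 1, 2), H)

    # Warp the overlay image into the tracked quadrilateral and paste it in.
    h, w = overlay.shape[:2]
    src = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32)
    Hw = cv2.getPerspectiveTransform(src, new_corners.reshape(4, 2).astype(np.float32))
    warped = cv2.warpPerspective(overlay, Hw, (curr_frame.shape[1], curr_frame.shape[0]))
    mask = warped.sum(axis=2) > 0
    curr_frame[mask] = warped[mask]
    return curr_frame, new_corners.reshape(-1, 2)
```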
  • Publication number: 20130162665
    Abstract: Mapping or navigation incorporates a real-world view. A map representing a region as a first computer generated graphic is displayed. The map may alternatively be a satellite view. A route is indicated on the map. The route is a computer generated graphic. A real-world image of a view from a location, or a sequence of images from locations along the route, is overlaid on the map, such as being in a small box on the map.
    Type: Application
    Filed: December 21, 2011
    Publication date: June 27, 2013
    Inventor: James D. Lynch
  • Publication number: 20130162672
    Abstract: An image processing device includes a composite processing unit that generates a composite image for display by including a through image generated based on a captured image signal obtained by performing photoelectric conversion for light incident from a subject in an image for a compositing process and composites a captured image of a recording time point in the image for a compositing process in accordance with a compositing arrangement state of the through image in the composite image for display, and a display control unit that performs display control for the composite image for display.
    Type: Application
    Filed: November 29, 2012
    Publication date: June 27, 2013
    Applicant: Sony Corporation
    Inventor: Sony Corporation
  • Patent number: 8471866
    Abstract: A user interface and method for identifying related information displayed in an ultrasound system are provided. A medical image display of the ultrasound system includes a first region configured to display a medical image having color coded portions and a second region configured to display non-image data related to the medical image displayed in the first region. The non-image data is color coded to associate the non-image data with the color coded portions of the medical image.
    Type: Grant
    Filed: May 5, 2006
    Date of Patent: June 25, 2013
    Assignee: General Electric Company
    Inventors: Zvi Friedman, Sergei Goldenberg, Peter Lysyansky
  • Patent number: 8457392
    Abstract: A representation of an object in an image of a live event is obtained by determining a color profile of the object. The color profile may be determined from the image in real time and compared to stored color profiles to determine a best match. For example, the color profile of the representation of the object can be obtained by classifying color data of the representation of the object into different bins of a color space, in a histogram of color data. The stored color profiles may be indexed to object identifiers, object viewpoints, or object orientations. Color data which is common to different objects or to a background color may be excluded. Further, a template can be used as an additional aid in identifying the representation of the object. The template can include, e.g., a model of the object or pixel data of the object from a prior image.
    Type: Grant
    Filed: December 19, 2007
    Date of Patent: June 4, 2013
    Assignee: Sportvision, Inc.
    Inventors: Richard H. Cavallaro, Vidya Elangovan, Marvin S. White, Kenneth A. Milnes
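    A minimal sketch of the color-profile matching the abstract describes: classify an object patch's pixels into bins of a color space to form a histogram, then compare it against stored, identity-indexed profiles and return the best match. The HSV bin counts and the correlation metric are assumptions.
```python
import numpy as np
import cv2

def color_profile(bgr_patch, bins=16):
    """Classify the patch's pixels into hue-saturation bins of HSV space,
    producing a normalized color histogram (the 'color profile')."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def best_match(profile, stored_profiles):
    """Compare the live profile against stored profiles (keyed by object identifier)
    and return the identifier with the highest histogram correlation."""
    scores = {obj_id: cv2.compareHist(profile.astype(np.float32),
                                      ref.astype(np.float32),
                                      cv2.HISTCMP_CORREL)
              for obj_id, ref in stored_profiles.items()}
    return max(scores, key=scores.get)
```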
  • Patent number: 8446514
    Abstract: A method for capturing an image, comprising: providing a switchable imaging apparatus including a display screen having a first display state and a second transparent state, an optical beam deflector switchable between a first non-deflecting state and a second deflecting state, a camera positioned in a location peripheral to the display screen, and a controller; setting the switchable imaging apparatus to the image capture mode by using the controller to set the display screen to the second transparent state and the optical beam deflector to the second deflecting state; using the camera to capture an image of the scene; setting the switchable imaging apparatus to the image display mode by using the controller to set the display screen to the first display state and the optical beam deflector to the first non-deflecting state; and displaying an image on the display screen.
    Type: Grant
    Filed: May 9, 2011
    Date of Patent: May 21, 2013
    Assignee: Intellectual Ventures Fund 83 LLC
    Inventors: John Norvold Border, Joseph Anthony Manico
  • Publication number: 20130113826
    Abstract: Provided is an image processing apparatus including an image processing unit for combining a virtual object with a captured image. The image processing unit determines the virtual object based on a size, in a real space, of an object shown in the captured image.
    Type: Application
    Filed: August 28, 2012
    Publication date: May 9, 2013
    Applicant: SONY CORPORATION
    Inventor: Reiko MIYAZAKI
  • Patent number: 8436852
    Abstract: Image editing which is consistent with geometry of a scene depicted in the image is described. In an embodiment a graphical user interface (GUI) is provided to enable a user to simply and quickly specify four corners of a rectangular frame drawn onto a source image using the GUI. In embodiments, the four corners are used to compute parameters of a virtual camera assumed to capture the image of the drawn frame. Embodiments of an image processing system are described which use the virtual camera parameters to control editing of the source image in ways consistent with the 3D geometry of the scene depicted in that image. In some embodiments out of bounds images are formed and/or realistic-looking shadows are synthesized. In examples, users are able to edit images and the virtual camera parameters are dynamically recomputed and used to update the edited image.
    Type: Grant
    Filed: February 9, 2009
    Date of Patent: May 7, 2013
    Assignee: Microsoft Corporation
    Inventors: Antonio Criminisi, Carsten Rother, Gavin Smyth, Amit Shesh
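    The patent derives full virtual-camera parameters from the four user-drawn corners; the sketch below only recovers the planar perspective mapping implied by those corners, which is the minimal geometric core needed to paste content consistently with the drawn frame. Corner ordering and the masking rule are assumptions.
```python
import numpy as np
import cv2

def paste_into_frame(source_img, content_img, corners_img):
    """Warp new content onto the quadrilateral defined by the four user-drawn corners,
    so the edit follows the perspective of the plane the rectangle lies on."""
    h, w = content_img.shape[:2]
    src = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32)
    dst = np.array(corners_img, dtype=np.float32)   # drawn corners, clockwise from top-left
    H = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(content_img, H, (source_img.shape[1], source_img.shape[0]))
    mask = warped.sum(axis=2) > 0                   # crude mask: non-black warped pixels
    out = source_img.copy()
    out[mask] = warped[mask]
    return out
```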
  • Patent number: 8432414
    Abstract: In a view, e.g. of scenery, of a shopping or museum display, or of a meeting or conference, automated processing can be used to annotate objects which are visible from a viewer position. Annotation can be of objects selected by the viewer, and can be displayed visually, for example, with or without an image of the view.
    Type: Grant
    Filed: March 26, 2001
    Date of Patent: April 30, 2013
    Assignee: Ecole Polytechnique Federale de Lausanne
    Inventors: Martin Vetterli, Serge Ayer
  • Patent number: 8433370
    Abstract: A mobile terminal and controlling method thereof are disclosed, by which object information on an object in a preview image is held in the preview image to be utilized in various ways. The present invention includes displaying a preview image photographed via a camera, obtaining a current position of the mobile terminal, searching object information on at least one object in the preview image based on the current position of the mobile terminal, displaying the found object information within the preview image, if the displayed object information is selected from the preview image, holding the selected object information within the preview image, and controlling an operation related to the held object information.
    Type: Grant
    Filed: January 25, 2011
    Date of Patent: April 30, 2013
    Assignee: LG Electronics Inc.
    Inventor: Eunsoo Jung
  • Patent number: 8427506
    Abstract: A first image processing apparatus displays markers on a monitor to thereby make a second image processing apparatus perform a display control of a second object on an imaged image of an LCD while the second image processing apparatus transmits a marker recognizing signal when the display control is performed based on the markers to thereby make the first image processing apparatus perform a display control of a first object on the monitor.
    Type: Grant
    Filed: August 27, 2010
    Date of Patent: April 23, 2013
    Assignee: Nintendo Co., Ltd.
    Inventor: Shunsaku Kato
  • Patent number: 8405680
    Abstract: Various apparatus and methods are described for a simulation and/or user interface environment. Image data and actual measured depth data of a real object from a real-world scene is acquired via one or more sensors. The resolution of the image data and the actual measured depth data are matched, and a link is formed between them for each pixel representing the real-world object in the scene. The actual measured depth data of the real-world object is processed to extract normal vector data to the surface of the real-world object. Occlusion, lighting, and any physics-based interaction effects are based on at least the image data and actual measured depth data.
    Type: Grant
    Filed: April 19, 2010
    Date of Patent: March 26, 2013
    Assignee: YDreams S.A., A Public Limited Liability Company
    Inventors: Gonçalo Cardoso Lopes, João Pedro Gomes da Silva Frazão, André Rui Soares Pereira de Almeida, Nuno Ricardo Sequeira Cardoso, Ivan de Almeida Soares Franco, Nuno Moura e Silva Cruces
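    A minimal sketch of the normal-vector extraction step the abstract mentions: take finite differences of the measured depth map along x and y, treat (-dz/dx, -dz/dy, 1) as the unnormalized surface normal, and normalize. This image-gradient approximation is an assumption, not the patent's exact method.
```python
import numpy as np

def normals_from_depth(depth):
    """Estimate per-pixel surface normals from a measured depth map using depth
    gradients: the vector (-dz/dx, -dz/dy, 1) is taken as the unnormalized normal."""
    dz_dy, dz_dx = np.gradient(depth)                            # gradients along y, then x
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth)))
    norm = np.linalg.norm(normals, axis=2, keepdims=True)
    return normals / np.clip(norm, 1e-6, None)

depth = np.random.rand(480, 640).astype(np.float32) + 1.0       # stand-in depth frame
normals = normals_from_depth(depth)                             # shape (480, 640, 3)
```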
  • Patent number: 8405658
    Abstract: One embodiment of the invention sets forth a technique for shading a graphics object inserted into a video feed of a real-world scene based on lighting conditions in the real-world scene. The real-world scene includes a fiducial marker that denotes the location in the video feed where the graphics object should be inserted. In order to shade the graphics object, the AR application computes light color values at multiple points on the fiducial marker. The color computation module also computes the direction of light cast on the fiducial marker by determining the direction of the shadow cast by a pyramid object on the fiducial marker. The AR application then shades the graphics object inserted into the video feed at the location of the fiducial marker based on the light color values and the direction of light.
    Type: Grant
    Filed: September 14, 2009
    Date of Patent: March 26, 2013
    Assignee: AUTODESK, Inc.
    Inventors: Eddy Yim Kuo, Brian Anthony Pene
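    A rough, hedged sketch of the lighting estimation the abstract describes: average the frame's color over sample points on the fiducial marker to get a light color, and take the direction from the shadow-casting pyramid's apex toward the centroid of its cast shadow as a 2D light direction. Sample-point selection and the coordinate convention are assumptions.
```python
import numpy as np

def estimate_light(frame_bgr, marker_samples_rc, shadow_pixels_rc, apex_rc):
    """Average the frame's color at sample points on the fiducial marker (light color),
    and use the offset from the pyramid apex to the shadow centroid as a 2D light
    direction. Coordinates are (row, col); all inputs are illustrative."""
    rows, cols = zip(*marker_samples_rc)
    light_color = frame_bgr[list(rows), list(cols)].astype(np.float32).mean(axis=0)

    shadow_centroid = np.mean(np.array(shadow_pixels_rc, dtype=np.float32), axis=0)
    direction = shadow_centroid - np.array(apex_rc, dtype=np.float32)
    direction /= np.linalg.norm(direction) + 1e-6
    return light_color, direction
```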
  • Patent number: 8384742
    Abstract: Methods and apparatus for facilitating detection of a presence or an absence of at least one underground facility within a dig area. Source data representing one or more input images of a geographic area including the dig area is electronically received at a first user location, which may be remote from the dig area. The source data is processed so as to display at least a portion of the input image(s) on a display device. One or more indicators are added to the displayed input image(s), via a user input device associated with the display device, to provide at least one indication of the dig area and thereby generate a marked-up digital image. In the case of a staged excavation project, the input image, or a plurality of associated images, may include indicia of multiple dig areas corresponding to multiple stages of the staged excavation project.
    Type: Grant
    Filed: June 1, 2009
    Date of Patent: February 26, 2013
    Assignee: Certusview Technologies, LLC
    Inventors: Steven E. Nielsen, Curtis Chambers, Jeffrey Farr
  • Patent number: 8379043
    Abstract: An apparatus and method allows a driver to read a map, directions or text while driving a car without taking their eyes away from the road. An electronic display device may be mounted to, or made integrally with, a dashboard of an automobile. The electronic display device may be programmed to display inverted (mirror image) information thereupon. When the electronic display is disposed on the automobile's dashboard, the windshield may reflect the inverted image as a normal scene for the driver to visualize. Since the image is displayed on the windshield, the driver can see the image without having to take their eyes off the road.
    Type: Grant
    Filed: March 20, 2012
    Date of Patent: February 19, 2013
    Inventor: David Lester Paige
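    The "inverted (mirror image) information" the abstract describes amounts to a horizontal flip of the rendered map or text before it is sent to the dashboard display, so its reflection in the windshield reads correctly; a one-line sketch:
```python
import numpy as np

def mirror_for_windshield(image: np.ndarray) -> np.ndarray:
    """Reverse the column order of an H x W (or H x W x C) image so that its
    reflection in the windshield reads the right way round for the driver."""
    return image[:, ::-1]
```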
  • Publication number: 20130038632
    Abstract: Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements.
    Type: Application
    Filed: August 12, 2011
    Publication date: February 14, 2013
    Inventors: Marcus W. Dillavou, Phillip Corey Shum, Baron L. Guthrie, Mahesh B. Shenai, Drew Steven Deaton, Matthew Benton May
  • Patent number: 8358320
    Abstract: A method and system which seamlessly combines the natural way of handwriting (real world) with interactive digital media and technologies (virtual world) for providing a mixed or augmented reality perception to the user is disclosed.
    Type: Grant
    Filed: November 3, 2008
    Date of Patent: January 22, 2013
    Assignee: National University of Singapore
    Inventors: Steven ZhiYing Zhou, Syed Omer Gilani
  • Patent number: 8356255
    Abstract: Methods and apparatus for facilitating detection of a presence or an absence of at least one underground facility within a dig area. Source data representing one or more input images of a geographic area including the dig area is electronically received at a first user location, which may be remote from the dig area. The source data is processed so as to display at least a portion of the input image(s) on a display device. One or more indicators are added to the displayed input image(s), via a user input device associated with the display device, to provide at least one indication of the dig area and thereby generate a marked-up digital image. In the case of a staged excavation project, the input image, or a plurality of associated images, may include indicia of multiple dig areas corresponding to multiple stages of the staged excavation project.
    Type: Grant
    Filed: June 1, 2009
    Date of Patent: January 15, 2013
    Assignee: Certusview Technologies, LLC
    Inventors: Steven E. Nielsen, Curtis Chambers, Jeffrey Farr
  • Publication number: 20120330659
    Abstract: An information processing device includes a display data creating unit configured to create display data including characters representing the content of an utterance based on a sound and a symbol surrounding the characters and indicating a first direction, and an image combining unit configured to determine the position of the display data based on a display position of an image representing a sound source of the utterance, and to combine the display data and the image of the sound source so that an orientation in which the sound is radiated is matched with the first direction.
    Type: Application
    Filed: June 21, 2012
    Publication date: December 27, 2012
    Applicant: HONDA MOTOR CO., LTD.
    Inventor: Kazuhiro NAKADAI
  • Publication number: 20120327113
    Abstract: A system and method for viewing artificial reality messages, such as at an event at a venue, where the messages are geo-referenced artificial reality words or symbols enhanced for greater comprehension or relevancy to the user. Typically, the messages are geo-referenced to a moving participant or to a fixed location at the venue. Using the spectator's chosen location as the viewing origin, an artificial reality message or product is inserted into the spectator's perspective view of the venue. The enhancement involves changing the content for context, or changing the perspective, orientation, size, background, font, or lighting for comprehension.
    Type: Application
    Filed: September 6, 2012
    Publication date: December 27, 2012
    Inventor: Charles D. Huston
  • Patent number: 8330811
    Abstract: Methods and an apparatus responsive to sensed orientation and translatory position of an image acquisition device with respect to a three-dimensional reference frame of an object space for providing and storing successive computer generated images with respect to a three-dimensional frame of reference of an image space synchronized with successive images acquired by the electronic image acquisition device with respect to the three-dimensional reference frame of the object space, the successive computer generated images having a changing point of view that changes direction between images, the successive computer generated images stored on a storage medium for playback and presentation to a viewer.
    Type: Grant
    Filed: February 19, 2010
    Date of Patent: December 11, 2012
    Assignee: Simulated Percepts, LLC
    Inventor: Francis J. Macguire, Jr.
  • Publication number: 20120306916
    Abstract: A wear amount measuring device includes an image display unit, an image processing unit, and a wear amount computing unit. The image display unit displays a real object image based on real object image data containing a wear amount measurement target and a reference part, and displays a plan image based on design plan data containing the wear amount measurement target and the reference part. The image processing unit executes an image processing of overlapping the real object image and the plan image at an equal scale on a corresponding positional relation when the reference parts are matched. The wear amount computing unit computes a wear amount based on a magnitude of an interval between a measurement contour line drawn along a contour of the wear amount measurement target in the real object image and a plan contour line in the plan image.
    Type: Application
    Filed: January 19, 2011
    Publication date: December 6, 2012
    Applicant: KOMATSU LTD.
    Inventors: Shigeto Marumoto, Hideyuki Wakai, Yukihiro Suzaki, Daijirou Itou, Tomoyuki Tsubaki, Kenichi Hisamatsu
  • Publication number: 20120306853
    Abstract: A method, medium, and virtual object for providing a virtual representation with an attribute are described. The virtual representation is generated based on a digitization of a real-world object. Properties of the virtual representation, such as colors, shape similarities, volume, surface area, and the like are identified and an amount or degree of exhibition of those properties by the virtual representation is determined. The properties are employed to identify attributes associated with the virtual representation, such as temperature, weight, or sharpness of an edge, among other attributes of the virtual object. A degree of exhibition of the attributes is also determined based on the properties and their degrees of exhibition. Thereby, the virtual representation is provided with one or more attributes that instruct presentation and interactions of the virtual representation in a virtual world.
    Type: Application
    Filed: July 20, 2011
    Publication date: December 6, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: SHAWN C. WRIGHT, JEFFREY JESUS EVERTT, JUSTIN AVRAM CLARK, CHRISTOPHER HARLEY WILLOUGHBY, MIKE SCAVEZZE, MICHAEL A. SPALDING, KEVIN GEISNER, DANIEL L. OSBORN
  • Publication number: 20120299961
    Abstract: Techniques for augmenting an image of an object captured and displayed in real time with associated content are disclosed. In one embodiment, the method for augmenting the image includes receiving information defining a sampled frame of a video being captured by an electronic device in substantially real time, determining information representative of an object captured in the sampled frame based on the received information, causing the determined information to match stored information defining a plurality of items to locate an item matched to the captured object, retrieving content associated with the matched item, and providing the retrieved content for display with the captured image on the electronic device. The retrieved content may be rendered in an overlay element that overlays the captured image displayed on the electronic device. The rendered content is configured to enable a user to interact with the content.
    Type: Application
    Filed: May 27, 2011
    Publication date: November 29, 2012
    Applicant: A9.com, Inc.
    Inventors: Gurumurthy D. Ramkumar, William F. Stasior, Bryan E. Feldman, Arnab S. Dhua, Nalin Pradeep Senthamil
  • Publication number: 20120287122
    Abstract: The various embodiments herein provide a virtual apparel fitting system and a method for displaying the plurality of apparels virtually on the user. The virtual apparel fitting system includes an image capturing device and a digital screen. The image capturing device captures an image of a user and the digital screen recognizes one or more physical statistics of the user from the captured image. Further the user selects a plurality of apparels from the apparel library and the plurality of apparels are displayed virtually on the captured image of the user in the digital screen with a prediction on the accurate size and fit of the plurality of apparels on the user.
    Type: Application
    Filed: May 8, 2012
    Publication date: November 15, 2012
    Applicant: TELIBRAHMA CONVERGENT COMMUNICATIONS PVT. LTD.
    Inventors: RAJESH PAUL NADAR, RAVI BANGALORE RAMARAO, DEEPESH JAYAPRAKASH, SURESH NARASIMHA
  • Patent number: 8311384
    Abstract: An image processing method receives a sequence of image frames and generates a computer generated object. The method combines the object with the sequence of frames to generate a sequence of augmented reality images. The method divides each received image frame into a respective array of image motion cells, detects inter-image motion in successive image frames for each motion cell, and generates a motion object comprising one or more image motion cells. The image motion cells in the motion object correspond to a set of image motion cells detected as comprising inter-image motion over a threshold number of image frames. The method detects relative distance between the object and the motion object within the augmented reality images, and generates a point of interest within a current image frame so the object may appear to interact with an image region corresponding to an image motion cell at the point of interest.
    Type: Grant
    Filed: September 7, 2010
    Date of Patent: November 13, 2012
    Assignee: Sony Computer Entertainment Europe Limited
    Inventors: Sharwin Winesh Raghoebardajal, Mark Lintott
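    A hedged sketch of the motion-cell bookkeeping the abstract describes: divide each frame into a grid of cells, flag cells whose inter-frame difference exceeds a pixel threshold, and report cells that have stayed in motion for a persistence threshold of frames (the "motion object"). Grid size and thresholds are illustrative.
```python
import numpy as np

class MotionCellGrid:
    """Per-frame motion bookkeeping over a grid of image motion cells."""
    def __init__(self, rows=12, cols=16, pixel_thresh=12.0, persist=5):
        self.rows, self.cols = rows, cols
        self.pixel_thresh, self.persist = pixel_thresh, persist
        self.counts = np.zeros((rows, cols), dtype=int)   # consecutive moving frames per cell
        self.prev = None

    def update(self, gray_frame):
        """Feed one grayscale frame; return (row, col) cells that have been in motion
        for at least `persist` consecutive frames."""
        if self.prev is None:
            self.prev = gray_frame.astype(np.float32)
            return []
        diff = np.abs(gray_frame.astype(np.float32) - self.prev)
        self.prev = gray_frame.astype(np.float32)
        h, w = gray_frame.shape
        ch, cw = h // self.rows, w // self.cols
        moving = np.zeros_like(self.counts, dtype=bool)
        for r in range(self.rows):
            for c in range(self.cols):
                cell = diff[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
                moving[r, c] = cell.mean() > self.pixel_thresh
        self.counts = np.where(moving, self.counts + 1, 0)
        return list(zip(*np.where(self.counts >= self.persist)))
```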
  • Patent number: 8294082
    Abstract: A probe for use in a coordinate digitizing system includes an indicator, such as a pointing tip or crosshairs, and a marker, the location of which can be determined by a marker tracking system relative to a coordinate system. The probe is configured to effectively place the marker's virtual image—as seen by the tracker—at the same location as the indicator without blocking a user's view of the indicator.
    Type: Grant
    Filed: November 14, 2007
    Date of Patent: October 23, 2012
    Assignee: Boulder Innovation Group, Inc.
    Inventors: Juris George Melkis, Ivan Faul, Dennis Toms
  • Publication number: 20120262484
    Abstract: Embodiments of the present invention are generally directed to devices, methods and instructions encoded on computer readable media for capturing motion and analyzing the captured motion at a portable computing device. In one exemplary embodiment, a motion capture and analysis application is provided. The application, when executed on a portable computing device, is configured to capture video of a subject (i.e., person) while the subject performs a selected action. The motion capture and analysis application provides various tools that allow an application user (e.g., trainer) to evaluate the motion of the subject during performance of the action.
    Type: Application
    Filed: March 14, 2012
    Publication date: October 18, 2012
    Applicant: KINESIOCAPTURE, LLC
    Inventors: David William Gottfeld, Robert Douglas Harris, Todd Austin Wright