Placing Generated Data In Real Scene Patents (Class 345/632)
  • Patent number: 9805283
    Abstract: An image processing apparatus for synthesizing a first image including a transparency-process region and a non-transparency region, and a second image, includes a memory to store transparency-process color information for performing a transparency-process, and circuitry to extract the non-transparency region and a part of the transparency-process region adjacent with each other as a process target region, comparing a color value of the non-transparency region and the first color value of the transparency-process region, changing the transparency-process color information from a first color value to a second color value depending on a comparison result, the second color value set closer to the color value of the non-transparency region, applying the second color value to the transparency-process region, enlarging the process target region, performing the transparency process to the enlarged process target region based on the second color value; and superimposing the enlarged process target region on the second image.
    Type: Grant
    Filed: July 26, 2016
    Date of Patent: October 31, 2017
    Assignee: RICOH COMPANY, LTD.
    Inventor: Tatsuroh Sugioka
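    The color-adjustment step this abstract describes can be illustrated with a minimal sketch. The function name, the RGB tuples, and the linear blend factor are all illustrative assumptions, not taken from the patent; the patent only states that the second key color is set "closer to" the non-transparency region's color.

    ```python
    def second_key_color(first_key, region_color, alpha=0.5):
        # Move the transparency-process ("chroma key") color toward the color
        # of the adjacent non-transparency region by a blend factor alpha.
        # alpha=0 keeps the first key color; alpha=1 matches the region color.
        return tuple(round(k + alpha * (r - k))
                     for k, r in zip(first_key, region_color))
    ```

    For example, a black key color blended halfway toward an orange region color lands at the midpoint of each channel.
    
    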
  • Patent number: 9789392
    Abstract: A method, apparatus and non-transitory computer readable medium that, in one embodiment, interprets a user motion sequence comprises beginning a session, capturing the user motion sequence via a motion capturing device during the session, processing, via a processor, the user motion sequence into a predetermined data format, comparing the processed user motion sequence to at least one predetermined motion sequence stored in a database and determining whether to perform at least one of interpreting the user motion sequence as a universal command and registering the user motion sequence as a new command.
    Type: Grant
    Filed: July 8, 2011
    Date of Patent: October 17, 2017
    Assignee: Open Invention Network LLC
    Inventor: Carey Leigh Lotzer
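    The compare-then-interpret-or-register decision in this abstract can be sketched as a nearest-sequence match against a database of known motions. The distance metric, the threshold, and all names here are assumptions for illustration; the patent does not specify how sequences are compared.

    ```python
    def classify_motion(seq, known, threshold=1.0):
        # seq and each stored command are equal-length lists of (x, y) samples.
        # Mean point-to-point distance is a simple stand-in for the patent's
        # unspecified comparison against stored motion sequences.
        def dist(a, b):
            return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                       for (ax, ay), (bx, by) in zip(a, b)) / len(a)

        best = min(known, key=lambda name: dist(seq, known[name])) if known else None
        if best is not None and dist(seq, known[best]) <= threshold:
            return ("command", best)       # interpret as a universal command
        return ("register", None)          # register as a new command
    ```
    
    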
  • Patent number: 9788166
    Abstract: A method, an apparatus, and a system for screening AR content is presented. The method includes shooting, by a terminal, a panoramic photo of a location of the terminal, and determining location information of the terminal and related information of a first target according to the panoramic photo; sending, by the terminal to a server, a first request message that carries an identifier of the terminal and the location information of the terminal; receiving, by the terminal, a first response message that carries at least one behavior and is from the server; sending, by the terminal, a second request message to the server; receiving, by the terminal, a second response message that carries AR content corresponding to the first behavior; and presenting, by the terminal, the first target and the AR content after combination according to the related information of the first target.
    Type: Grant
    Filed: December 1, 2015
    Date of Patent: October 10, 2017
    Assignee: Huawei Device Co., Ltd.
    Inventors: Guoqing Li, Xinmiao Chang, Zhihao Jin
  • Patent number: 9782965
    Abstract: In an embodiment of the present invention, a number printing machine is provided. The machine includes at least one pallet, and an input to input at least the height and width of, and spacing between, a plurality of numbers to be printed on a substrate. The machine also includes a controller responsive to the input to control movement of the at least one pallet to print the plurality of numbers on the substrate in registration. In another embodiment, a method for printing numbers in registration on a substrate is provided. The method includes the steps of inputting into an input the length, width, and spacing between numbers to be printed, and positioning the substrate in response to the input such that the numbers are printed in registration.
    Type: Grant
    Filed: August 28, 2013
    Date of Patent: October 10, 2017
    Assignee: M&R Printing Equipment, Inc.
    Inventors: Richard C. Hoffman, Jr., Boguslaw W. Magda, Bernabe Christopher Mauban
  • Patent number: 9764217
    Abstract: Techniques for interactive scorekeeping and animation generation are described, including evaluating a play to form an event datum associated with execution of the play, using the event datum to form an event packet, generating an animation using the event packet, the animation being associated with the execution of the play, and presenting the animation on an endpoint.
    Type: Grant
    Filed: September 9, 2013
    Date of Patent: September 19, 2017
    Assignee: EyezOnBaseball, LLC
    Inventors: Kevin M. Tillman, Scot Gillis
  • Patent number: 9740935
    Abstract: A maintenance assistance system and method of operating are provided.
    Type: Grant
    Filed: November 26, 2013
    Date of Patent: August 22, 2017
    Assignee: HONEYWELL INTERNATIONAL INC.
    Inventors: Matej Dusik, Dinkar Mylaraswamy, Jiri Vasek, Jindrich Finda, Michal Kosik
  • Patent number: 9716842
    Abstract: An augmented experience improves user experience by including virtual reflections of an actual background on virtual items presented in a user interface. An augmented image comprising a representation of a virtual item with one or more reflective surfaces is generated and presented in a user interface. Virtual reflections based on images of an actual background acquired by a camera are generated. The virtual reflections are superimposed on the one or more reflective surfaces of the virtual item for presentation of the augmented image. During presentation of the virtual item, the inclusion of the virtual reflection may improve overall realism of the virtual item.
    Type: Grant
    Filed: June 19, 2013
    Date of Patent: July 25, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: Connor Spencer Blue Worley, Devin Bertrum Pauley
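    The superimposition of a camera-captured background onto a reflective surface reduces, per pixel, to a weighted blend. This is a sketch under assumed names and a single scalar reflectivity; the patent's actual rendering of mirrored reflections is more involved.

    ```python
    def blend_reflection(surface_rgb, background_rgb, reflectivity=0.3):
        # Blend the virtual item's surface color with the (mirrored)
        # background pixel captured by the camera. Higher reflectivity
        # means more of the real background shows in the virtual surface.
        return tuple(round((1 - reflectivity) * s + reflectivity * b)
                     for s, b in zip(surface_rgb, background_rgb))
    ```
    
    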
  • Patent number: 9712846
    Abstract: The present invention relates to a method for encoding and decoding an image signal and to corresponding apparatuses therefor. In particular, during the encoding and/or decoding of an image signal filtering with at least two filters is performed. The sequence of the filter application and possibly the filters are selected and the filtering is applied in the selected filtering order and with the selected filters. The determination of the sequence of applying the filters may be performed either separately in the same way at the encoder and at the decoder, or, it may be determined at the encoder and signaled to the decoder.
    Type: Grant
    Filed: April 11, 2011
    Date of Patent: July 18, 2017
    Assignee: SUN PATENT TRUST
    Inventors: Matthias Narroschke, Hisao Sasai
  • Patent number: 9686508
    Abstract: A user device within a communication architecture, the user device comprising: an image capture device configured to determine image data and intrinsic/extrinsic capture device data for the creation of a video channel defining a shared scene; a surface reconstruction entity configured to determine surface reconstruction data associated with the image data from the image capture device; a video channel configured to encode and packetize the image data and intrinsic/extrinsic capture device data; a surface reconstruction channel configured to encode and packetize the surface reconstruction data; a transmitter configured to transmit the video and surface reconstruction channel packets; and a bandwidth controller configured to control the bandwidth allocated to the video channel and the surface reconstruction channel.
    Type: Grant
    Filed: December 2, 2016
    Date of Patent: June 20, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Juri Reitel, Martin Ellis, Andrei Birjukov, ZhiCheng Miao, Ryan S. Menezes
  • Patent number: 9679415
    Abstract: In order to easily and rapidly determine a subject region in a captured image using a single image, a synthesized image of an extraction image including a subject and an instruction image representing a first acquisition region from which color information on the subject is acquired is used. Color information on the subject is acquired from the first acquisition region represented by the instruction image in the synthesized image. Color information on a background of the subject is acquired from a region not including the first acquisition region and a color information non-acquisition region that is adjacent to the first acquisition region and is set in advance in the synthesized image. On the basis of the color information on the subject and the color information on the background, extraction information of the subject is determined and output.
    Type: Grant
    Filed: June 2, 2015
    Date of Patent: June 13, 2017
    Assignee: Canon Kabushiki Kaisha
    Inventor: Kazuki Takemoto
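    Once subject and background color information have been acquired from their respective regions, the extraction decision for a pixel can be as simple as a nearest-color test. The squared-distance metric and names below are illustrative assumptions.

    ```python
    def classify_pixel(pixel, subject_color, background_color):
        # Assign the pixel to whichever acquired color (subject vs. background)
        # it is closer to in RGB space.
        def d(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return "subject" if d(pixel, subject_color) <= d(pixel, background_color) \
               else "background"
    ```
    
    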
  • Patent number: 9626773
    Abstract: Technologies are generally described for systems and methods effective to detect an alteration in augmented reality. A processor may receive a real image that corresponds to a real object and may receive augmented reality instructions to generate a virtual object. The processor may determine that the virtual object at least partially obscures the real object when the virtual object is rendered on a display. The processor may, upon determining that the virtual object at least partially obscures the real object when the virtual object is rendered on the display, simulate an activity on the real object to produce a first activity simulation and simulate the activity on the virtual object to produce a second activity simulation. The processor may determine a difference between the first and the second activity simulation and modify the augmented reality instructions to generate a modified virtual object in response to the determination of the difference.
    Type: Grant
    Filed: September 9, 2013
    Date of Patent: April 18, 2017
    Assignee: Empire Technology Development LLC
    Inventors: Gad S. Sheaffer, Shmuel Ur, Shay Bushinsky, Vlad Grigore Dabija
  • Patent number: 9578226
    Abstract: Photometric registration from an arbitrary geometry for augmented reality is performed using video frames of an environment captured by a camera. A surface reconstruction of the environment is generated. A pose is determined for the camera with respect to the environment, e.g., using model based tracking using the surface reconstruction. Illumination data for the environment is determined from a video frame. Estimated lighting conditions for the environment are generated based on the surface reconstruction and the illumination data. For example, the surface reconstruction may be used to compute the possible radiance transfer, which may be compressed, e.g., using spherical harmonic basis functions, and used in the lighting conditions estimation. A virtual object may then be rendered based on the lighting conditions.
    Type: Grant
    Filed: July 31, 2012
    Date of Patent: February 21, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Lukas Gruber, Thomas Richter-Trummer, Dieter Schmalstieg
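    The spherical-harmonic compression mentioned in this abstract projects sampled environment lighting onto a few low-order basis functions. The sketch below projects directional intensity samples onto the four lowest SH bands using the standard real-SH constants; the uniform solid-angle weighting and sample format are simplifying assumptions.

    ```python
    import math

    def sh_project(samples):
        # samples: list of ((x, y, z) unit direction, intensity).
        # Project onto band-0 and band-1 real spherical harmonics
        # (4 coefficients), a common compact lighting representation.
        c0 = 0.282095                      # Y_0^0
        c1 = 0.488603                      # Y_1^{-1}, Y_1^0, Y_1^1
        coeffs = [0.0, 0.0, 0.0, 0.0]
        w = 4 * math.pi / len(samples)     # uniform solid-angle weight
        for (x, y, z), lum in samples:
            coeffs[0] += lum * c0 * w
            coeffs[1] += lum * c1 * y * w
            coeffs[2] += lum * c1 * z * w
            coeffs[3] += lum * c1 * x * w
        return coeffs
    ```

    A single overhead sample yields energy only in the constant term and the z-aligned band-1 term, as expected.
    
    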
  • Patent number: 9563983
    Abstract: Augmented reality overlays for display to a user of an augmented reality device are managed. An object is received. As received, the object is associated with an augmented reality marker within a field of view of the user. Information associated with the object is for display in an overlay. The speed of the augmented reality marker relative to the user is calculated. This speed of the augmented reality marker is compared with a threshold value. Responsive to the speed of the augmented reality marker being greater than the threshold value, the information is filtered out from the overlay, and the filtered overlay is displayed on the augmented reality device.
    Type: Grant
    Filed: March 4, 2015
    Date of Patent: February 7, 2017
    Assignee: International Business Machines Corporation
    Inventors: Chris Bean, Sophie D. Green, Stephen R. F. Head, Madeleine R. Neil Smith
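    The speed-threshold filtering in this abstract (shared by the related patent 9557951 below) can be sketched in a few lines. The position format, time step, and threshold value are illustrative assumptions.

    ```python
    def filter_overlay(marker_positions, dt, info, threshold=50.0):
        # marker_positions: two successive (x, y) screen positions of the
        # AR marker; dt: elapsed time between them. If the marker moves
        # faster than the threshold, filter its information out of the overlay.
        (x0, y0), (x1, y1) = marker_positions
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
        return None if speed > threshold else info
    ```
    
    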
  • Patent number: 9563815
    Abstract: In a first exemplary embodiment of the present invention, an automated, computerized method is provided for processing an image. According to a feature of the present invention, the method comprises the steps of, at a computer system, receiving an image recorded at and transmitted from a remote source, at the computer system, processing the transmitted image, for segregation of the transmitted image into a corresponding intrinsic image and, at the computer system, processing the corresponding intrinsic image to perform a preselected task, for a result. As a further feature of the first exemplary embodiment of the present invention, the remote source comprises a cell phone or remote computer system.
    Type: Grant
    Filed: September 15, 2009
    Date of Patent: February 7, 2017
    Assignee: Tandent Vision Science, Inc.
    Inventors: Matthias M. de Haan, Richard Mark Friedhoff, Timothy King Rodgers, Jr., Casey Arthur Smith, Youngrock Yoon
  • Patent number: 9558594
    Abstract: An index extraction unit detects indices from a sensed image sensed by a sensing unit which senses an image of a physical space on which a plurality of indices is laid out. A convergence arithmetic unit calculates position and orientation information of the sensing unit based on the detected indices. A CG rendering unit generates a virtual space image based on the position and orientation information. A sensed image clipping unit extracts, as a display image, an image in a display target region from the sensed image. An image composition unit generates a composite image by compositing the extracted display image and the generated virtual space image. A display unit displays the composite image.
    Type: Grant
    Filed: February 23, 2015
    Date of Patent: January 31, 2017
    Assignee: Canon Kabushiki Kaisha
    Inventors: Makoto Oikawa, Takuya Tsujimoto
  • Patent number: 9557951
    Abstract: Augmented reality overlays for display to a user of an augmented reality device are managed. An object is received. As received, the object is associated with an augmented reality marker within a field of view of the user. Information associated with the object is for display in an overlay. The speed of the augmented reality marker relative to the user is calculated. This speed of the augmented reality marker is compared with a threshold value. Responsive to the speed of the augmented reality marker being greater than the threshold value, the information is filtered out from the overlay, and the filtered overlay is displayed on the augmented reality device.
    Type: Grant
    Filed: July 28, 2015
    Date of Patent: January 31, 2017
    Assignee: International Business Machines Corporation
    Inventors: Chris Bean, Sophie D. Green, Stephen R. F. Head, Madeleine R. Neil Smith
  • Patent number: 9524028
    Abstract: Embodiments of the invention recognize human visual gestures, as captured by image and video sensors, to develop a visual language for a variety of human computer interfaces. One embodiment provides a method for recognizing a hand gesture positioned by a user hand. The method includes steps of capturing a digital color image of a user hand against a background, applying a general parametric model to the digital color image of the user hand to generate a specific parametric template of the user hand, receiving a second digital image of the user hand positioned to represent a hand gesture, detecting a hand contour of the hand gesture based at least in part on the specific parametric template of the user hand, and recognizing the hand gesture based at least in part on the detected hand contour. Other embodiments include recognizing hand gestures, facial gestures or body gestures captured in a video.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: December 20, 2016
    Assignee: FastVDO LLC
    Inventors: Wei Dai, Madhu Peringassery Krishnan, Pankaj Topiwala
  • Patent number: 9508174
    Abstract: A non-transitory storage medium stores instructions executable by a display device including an image taking device and a display. The instructions cause the display device to perform: displaying a real-space image being taken by the image taking device; displaying a content disposed in an augmented reality space and the real-space image in combination, when a marker associated with the content exists in the real-space image; keeping displaying the content, when an instruction for keeping displaying the content is provided with the content being displayed; displaying the content when the instruction is not provided and when the marker exists in the real-space image; and not displaying the content when the instruction is not provided and when the marker does not exist in the real-space image.
    Type: Grant
    Filed: February 2, 2015
    Date of Patent: November 29, 2016
    Assignee: BROTHER KOGYO KABUSHIKI KAISHA
    Inventor: Yu Matsubara
  • Patent number: 9495802
    Abstract: A method includes acquiring a first image including a specific object and captured at an imaging position, generating first three-dimensional information based on a first shape of the specific object, the first three-dimensional information corresponding to the imaging position, generating second three-dimensional information based on a specific depth value and a designated position on the first image, generating first line information based on the first and the second three-dimensional information, acquiring a second image including the specific object and captured at another imaging position, generating third three-dimensional information based on a second shape of the specific object, the third three-dimensional information corresponding to the another imaging position, generating second line information based on the second and the third three-dimensional information, generating a fourth three-dimensional information based on the first and the second line information, and storing the fourth three-dimensional information.
    Type: Grant
    Filed: March 23, 2015
    Date of Patent: November 15, 2016
    Assignee: FUJITSU LIMITED
    Inventor: Ryouta Komatsu
  • Patent number: 9466089
    Abstract: A signal processor of the invention includes a host processor, a command queue, a graphics decoding circuit, a video decoding circuit, a composition engine and two display buffers. The host processor generates graphics commands and sets a video flag to active based on graphics encoded data, video encoded data and mask encoded data from a network. The command queue asserts a control signal according to the graphics commands. The graphics decoding circuit generates the graphics frame and two surface masks while the video decoding circuit generates the video frame and a video mask. The composition engine transfers the graphics frame, the video frame or a content of one of two display buffers to the other display buffer according to the video mask and the two surface masks when the control signal is asserted or when the video flag is active.
    Type: Grant
    Filed: October 7, 2014
    Date of Patent: October 11, 2016
    Assignee: ASPEED TECHNOLOGY INC.
    Inventor: Chung-Yen Lu
  • Patent number: 9424536
    Abstract: A system and method for facilitating integrating enterprise data from multiple sources for display in a common interface. An example method includes displaying, via a first user interface display screen, a first set of one or more personnel icons representative of one or more enterprise personnel, and providing a first user option to select one or more of the personnel icons. A second user interface display screen may be displayed in response to or after selection of one or more of the personnel icons. The second user interface display screen presents a first type of data. The second user interface display screen further provides a second user option to select one or more user interface features associated with the first type of data, and to then trigger display of a third user interface display screen. The third user interface display presents a second type of data that is associated with the first type of data.
    Type: Grant
    Filed: May 31, 2011
    Date of Patent: August 23, 2016
    Assignee: Oracle International Corporation
    Inventors: Mary E. G. Bear, Amy Christine Wilson, Prashant Singh, Hugh Zhang, Brendon Glazer
  • Patent number: 9418378
    Abstract: A method, system and computer program are provided to allow one or more users to try out one or more products. The method, system and computer program involve providing information on one or more products offered for sale under restricted conditions of time or quantity, obtaining a real image of an object, determining one or more features from the object image, determining a position to overlay a first product image on the object image based on the determined one or more features, overlaying the first product image on the object image based on the determined position to provide an overlaid image, and displaying the overlaid image. The first product image is an image of a product from the one or more products.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: August 16, 2016
    Assignee: Gilt Groupe, Inc.
    Inventors: Ioana Ruxandra Staicut, Jesse Boyes, Allison Ann Sall, Yonatan Feldman
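    The overlay step in this abstract, pasting a product image onto the object image at the position determined from detected features, can be sketched with images as 2-D lists of pixel values. The representation and names are simplifying assumptions.

    ```python
    def overlay(base, patch, top, left):
        # Paste `patch` (2-D list of pixels) onto a copy of `base` at the
        # (top, left) position determined from the detected features,
        # leaving the original object image untouched.
        out = [row[:] for row in base]
        for r, prow in enumerate(patch):
            for c, p in enumerate(prow):
                out[top + r][left + c] = p
        return out
    ```
    
    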
  • Patent number: 9395875
    Abstract: The present disclosure provides an exemplary system, method, and computer program product. The exemplary method includes communicating with a provider of digital content to request data regarding a specific item of interest using a mobile device associated with a user. The method further includes receiving the requested data regarding the specific item of interest from the provider of digital content on the mobile device. The method further includes rendering the received data regarding the specific item of interest on a display of the mobile device, the rendering including superimposing the received data on a street view. The method further includes navigating through the street view according to navigation functions of the mobile device to locate a physical commercial location that carries the specific item of interest.
    Type: Grant
    Filed: June 27, 2012
    Date of Patent: July 19, 2016
    Assignee: eBay, Inc.
    Inventor: Chad Anthony Geraci
  • Patent number: 9310882
    Abstract: A method for interfacing with an interactive program is provided. The method includes: capturing images of first and second pages of a book object, the first and second pages being joined along a fold axis defined along a spine of the book; analyzing the captured images to identify a first tag on the first page and a second tag on the second page; tracking movement of the first and second pages by tracking the first and second tags, respectively; generating augmented images by replacing, in the captured images, the book object with a virtual book, the virtual book having a first virtual page corresponding to the first page of the book object, the virtual book having a second virtual page corresponding to the second page of the book object; rendering first and second scenes on the first and second virtual pages, respectively; and presenting the augmented images on a display.
    Type: Grant
    Filed: February 6, 2013
    Date of Patent: April 12, 2016
    Assignee: Sony Computer Entertainment Europe Ltd.
    Inventor: Masami Kochi
  • Patent number: 9299160
    Abstract: One exemplary embodiment involves identifying a plane defined by a plurality of three-dimensional (3D) track points rendered on a two-dimensional (2D) display, wherein the 3D track points are rendered at a plurality of corresponding locations of a video frame. The embodiment also involves displaying a target marker at the plane defined by the 3D track points to allow for visualization of the plane, wherein the target marker is displayed at an angle that corresponds with an angle of the plane. Additionally, the embodiment involves inserting a 3D object at a location in the plane defined by the 3D track points to be embedded into the video frame. The location of the 3D object is based at least in part on the target marker.
    Type: Grant
    Filed: June 25, 2012
    Date of Patent: March 29, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: James Acquavella, David Simons, Daniel M. Wilk
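    A plane defined by 3D track points, as in this abstract, can be recovered from any three non-collinear points via a cross product. Real trackers fit many noisy points (e.g. by least squares); the three-point version below is a minimal sketch with assumed names.

    ```python
    def plane_from_points(p0, p1, p2):
        # Normal n = (p1 - p0) x (p2 - p0); the plane satisfies n . x = d
        # with d = n . p0. The normal's direction gives the plane's angle,
        # used to orient the target marker and any inserted 3D object.
        ax, ay, az = (p1[i] - p0[i] for i in range(3))
        bx, by, bz = (p2[i] - p0[i] for i in range(3))
        n = (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)
        d = sum(n[i] * p0[i] for i in range(3))
        return n, d
    ```
    
    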
  • Patent number: 9294678
    Abstract: A display control apparatus performs control so as to enlarge and display a portion of an image in response to a TELE operation performed while the entire image is being normally displayed, and performs control so as to change an enlargement area of the image to an area corresponding to an in-focus position within an area of the image in response to a press of a set button while the image is being enlarged and displayed. The set button has a function of displaying a FUNC menu when the set button is pressed while the entire image is being normally displayed.
    Type: Grant
    Filed: August 14, 2014
    Date of Patent: March 22, 2016
    Assignee: Canon Kabushiki Kaisha
    Inventor: Ryo Takahashi
  • Patent number: 9282287
    Abstract: Systems and methods are disclosed for real-time video transformations in video conferences. A method includes receiving, by a processing device, a request from a first participant of a video conference to modify a video stream. The method further includes identifying, by the processing device, a foreground and a background within the video stream. The method further includes generating, by the processing device, a modified video stream including a video or image inserted into the background.
    Type: Grant
    Filed: September 9, 2014
    Date of Patent: March 8, 2016
    Assignee: Google Inc.
    Inventor: Brian David Marsh
  • Patent number: 9274595
    Abstract: A method for navigating concurrently and from point-to-point through multiple reality models is described. The method includes: generating, at a processor, a first navigatable virtual view of a first location of interest, wherein the first location of interest is one of a first virtual location and a first non-virtual location; and concurrently with the generating the first navigatable virtual view of the first location of interest, generating, at the processor, a second navigatable virtual view corresponding to a current physical position of an object, such that real-time sight at the current physical position is enabled within the second navigatable virtual view.
    Type: Grant
    Filed: August 24, 2012
    Date of Patent: March 1, 2016
    Assignee: REINCLOUD Corporation
    Inventor: Dan Reitan
  • Patent number: 9277281
    Abstract: An apparatus for providing object-in-content information that includes a central control unit which creates and provides basic content information, receives the object-in-content information and provides the object-in-content information in a user-viewable format. The apparatus also features an object information interface unit which transmits a message containing the basic content information provided by the central control unit to an object-in-content information managing device, receives a message containing object-in-content information corresponding to the basic content information from the object-in-content information managing device, and transmits the object-in-content information contained in the received message to the central control unit.
    Type: Grant
    Filed: December 31, 2003
    Date of Patent: March 1, 2016
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Dong-shin Jung, Kyoung-hoon Yi
  • Patent number: 9251590
    Abstract: Camera pose estimation for 3D reconstruction is described, for example, to enable position and orientation of a depth camera moving in an environment to be tracked for robotics, gaming and other applications. In various embodiments, depth observations from the mobile depth camera are aligned with surfaces of a 3D model of the environment in order to find an updated position and orientation of the mobile depth camera which facilitates the alignment. For example, the mobile depth camera is moved through the environment in order to build a 3D reconstruction of surfaces in the environment which may be stored as the 3D model. In examples, an initial estimate of the pose of the mobile depth camera is obtained and then updated by using a parallelized optimization process in real time.
    Type: Grant
    Filed: January 24, 2013
    Date of Patent: February 2, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Toby Sharp, Andrew William Fitzgibbon, Shahram Izadi
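    The iterative pose update this abstract describes aligns observed depth points with model surfaces. A full solver estimates rotation and translation (e.g. via ICP); the translation-only centroid step below is a deliberately reduced sketch with assumed names and point lists.

    ```python
    def translation_update(observed, model):
        # One simplified alignment step: the translation that moves the
        # centroid of the observed depth points onto the centroid of the
        # corresponding model points.
        n = len(observed)
        return tuple(sum(m[i] for m in model) / n - sum(o[i] for o in observed) / n
                     for i in range(3))
    ```
    
    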
  • Patent number: 9245185
    Abstract: A terminal for generating an augmented reality and a method thereof are provided. The terminal includes a camera property information providing server configured to store camera property information associated with one or more cameras, and the terminal is configured to receive, from the camera property information providing server, camera property information associated with a camera included in the terminal and generate an augmented reality based on the stored camera property information when an augmented reality-based application is driven and thus, may provide an augmented reality in which a virtual object is accurately matched to an image.
    Type: Grant
    Filed: December 3, 2013
    Date of Patent: January 26, 2016
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Kyu-Sung Cho, Dae-Kyu Shin, Ik-Hwan Cho
  • Patent number: 9182584
    Abstract: A method and system to evaluate stare-time of a selected target by a pointing system is provided. In an embodiment, the method includes specifying a time period for evaluation. A processor simulates movement of selected celestial bodies during the time period and movement of the platform during the time period. The processor further simulates pointing the pointing system in each celestial direction during the time period. The method calculates stare-time in each celestial direction uninterrupted by the selected celestial bodies and the platform during the time period.
    Type: Grant
    Filed: July 31, 2012
    Date of Patent: November 10, 2015
    Assignee: HONEYWELL INTERNATIONAL INC.
    Inventors: Elliott Rachlin, David J. Dopilka
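    The stare-time calculation this abstract describes amounts to measuring uninterrupted intervals in a simulated timeline. The boolean-per-time-step encoding below is an illustrative assumption; the patent simulates actual celestial-body and platform geometry to decide when the line of sight is interrupted.

    ```python
    def stare_time(blocked, dt=1.0):
        # blocked: per-time-step flags saying whether the pointing direction
        # is interrupted by a celestial body or the platform. Returns the
        # longest uninterrupted stare interval within the evaluation period.
        best = run = 0.0
        for b in blocked:
            run = 0.0 if b else run + dt
            best = max(best, run)
        return best
    ```
    
    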
  • Patent number: 9159300
    Abstract: A display apparatus includes: a location acquisition unit that acquires an image location designated by an operation of the user in the planar image of the user symbol; a direction setting unit that specifies an area to which the image location of the user symbol belongs from among a plurality of areas into which the planar image is divided and with which a predetermined direction is associated in advance, respectively, and sets, as a direction of eyes of the user, a direction that is associated in advance with the area specified; and a display control unit that displays the planar image in which the seat symbols and the user symbol are arranged in such a way of being associated with a seat layout in a room visually recognized when the user views in the direction of eyes.
    Type: Grant
    Filed: November 7, 2013
    Date of Patent: October 13, 2015
    Assignee: P&W Solutions Co., Ltd.
    Inventor: Toshiyuki Omiya
  • Patent number: 9135753
    Abstract: A method of augmented reality interaction for repositioning a virtual object on an image of a surface comprises capturing successive video images of the surface and first and second control objects and defining an interaction start area over the surface with respect to the virtual object. The method detects the control objects in successive video images, detects whether the control objects are brought together over the interaction start area, and if so, analyzes a region of successive video images using optical flow analysis to determine the overall direction of motion of the control objects and augmenting the video image to show the virtual object being held by the control objects. Augmenting the video image itself comprises superposing a graphical effect on the video image prior to superposition of the virtual object, such that the graphical effect visually disconnects the virtual object from the video image in the resulting augmented image.
    Type: Grant
    Filed: March 1, 2013
    Date of Patent: September 15, 2015
    Assignee: Sony Computer Entertainment Europe Limited
    Inventor: Sharwin Winesh Raghoebardayal
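The optical flow analysis step above determines the overall direction of motion of the control objects. A minimal sketch, assuming the per-pixel flow vectors for the analyzed region are already available as (dx, dy) pairs, is to average them and take the angle of the mean vector:

```python
import math

def overall_motion_direction(flow_vectors):
    """Average per-pixel optical flow vectors from the analyzed region
    and return the overall direction of motion in degrees
    (0 = +x axis, 90 = +y axis). Returns None for an empty region."""
    if not flow_vectors:
        return None
    mean_dx = sum(dx for dx, _ in flow_vectors) / len(flow_vectors)
    mean_dy = sum(dy for _, dy in flow_vectors) / len(flow_vectors)
    return math.degrees(math.atan2(mean_dy, mean_dx))
```

In practice the flow field itself would come from a dense optical flow routine; the averaging shown here is only an illustration of the "overall direction" computation the abstract names.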
  • Patent number: 9135539
    Abstract: A method and system are provided for printing, on a document, a barcode computed from the content of a printed page, the method comprising: using a regular expression methodology; applying a regular expression subsystem to text contained in a print stream to generate data; converting the data to a barcode by applying barcode computation, to produce a bitmap barcode representing the barcode; inserting the bitmap barcode into an Enhanced Meta File (EMF) print stream defined by the printing system in the Port Monitor or Print Processor subsystem; inserting the bitmap barcode into an XML Paper Specification (XPS) print stream defined by the printing system in the Port Monitor or Print Processor subsystem; and combining the barcode with the print stream so that the barcode appears at a specific position in the print stream in the Port Monitor or Print Processor subsystem.
    Type: Grant
    Filed: April 23, 2014
    Date of Patent: September 15, 2015
    Assignee: Black Ice Software, LLC
    Inventor: Jozsef Nemeth
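The regular expression step above can be sketched briefly: a pattern is applied to the text of the print stream, and the matched value becomes the data that is later converted to a bitmap barcode. The invoice-number pattern below is purely hypothetical, not one from the patent.

```python
import re

# Hypothetical pattern: pull an invoice number out of print-stream text;
# the matched value would then be converted to a bitmap barcode.
INVOICE_RE = re.compile(r"Invoice\s+No\.\s*(\d+)")

def extract_barcode_value(print_stream_text):
    """Return the first value matched by the regular expression,
    or None if the print stream contains no match."""
    m = INVOICE_RE.search(print_stream_text)
    return m.group(1) if m else None
```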
  • Patent number: 9121697
    Abstract: A wear amount measuring device includes an image display unit, an image processing unit, and a wear amount computing unit. The image display unit displays a real object image based on real object image data containing a wear amount measurement target and a reference part, and displays a plan image based on design plan data containing the wear amount measurement target and the reference part. The image processing unit executes an image processing of overlapping the real object image and the plan image at an equal scale on a corresponding positional relation when the reference parts are matched. The wear amount computing unit computes a wear amount based on a magnitude of an interval between a measurement contour line drawn along a contour of the wear amount measurement target in the real object image and a plan contour line in the plan image.
    Type: Grant
    Filed: January 19, 2011
    Date of Patent: September 1, 2015
    Assignee: KOMATSU LTD.
    Inventors: Shigeto Marumoto, Hideyuki Wakai, Yukihiro Suzaki, Daijirou Itou, Tomoyuki Tsubaki, Kenichi Hisamatsu
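The wear amount computation above is based on the interval between the measured contour line and the plan contour line once the images are overlapped at equal scale. A minimal sketch, assuming both contours are given as point lists and approximating the interval as the mean nearest-point distance (not necessarily KOMATSU's actual metric):

```python
import math

def wear_amount(measured_contour, plan_contour):
    """Approximate the interval between the measured contour (from the
    real object image) and the plan contour (from the design plan) as
    the average distance from each measured point to its nearest plan
    point. Both contours are lists of (x, y) tuples in the same scale."""
    def nearest_distance(point, contour):
        return min(math.dist(point, q) for q in contour)
    return sum(nearest_distance(p, plan_contour)
               for p in measured_contour) / len(measured_contour)
```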
  • Publication number: 20150130833
    Abstract: The embodiments of the present disclosure relate to electronic technology and provide a map superposition method and an electronic device, capable of solving the problem of low positioning accuracy and limited information available to users due to a lack of detailed map information for a small area. A first image is obtained. First identification information contained in the first image and second identification information contained in a first map are adjusted and compared. The first image is superimposed on the first map to obtain a second map when the comparison indicates that a predetermined condition is satisfied. Operations associated with a map application are performed based on the second map. The embodiments of the present disclosure solve the problem of low positioning accuracy and limited information available due to a lack of detailed map information for a small area and improve the user experience.
    Type: Application
    Filed: September 26, 2014
    Publication date: May 14, 2015
    Applicants: LENOVO (BEIJING) LIMITED, BEIJING LENOVO SOFTWARE LTD.
    Inventor: Yufeng MAO
  • Patent number: 9030492
    Abstract: A method for overlaying AR objects on an environmental image representing the environment includes recording a depth image of the environment from a point of vision; modifying the representation of an AR object to be placed in the environmental image in terms of how it appears from the point of vision at a pre-defined spot in the environmental image; determining how the parts of the AR object facing the point of vision are arranged in relation to an associated image point of the depth image, from the point of vision; modifying at least the representation of parts of the AR object in a pre-determined manner in relation to the apparent depth in the image; and overlaying the processed AR object on the environmental image. A device for overlaying AR objects on an environmental image displaying the environment operates according to the method steps.
    Type: Grant
    Filed: February 25, 2006
    Date of Patent: May 12, 2015
    Assignee: KUKA Roboter GmbH
    Inventors: Rainer Bischoff, Arif Kazi
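The depth comparison step above, determining how the parts of the AR object facing the point of vision relate to the associated depth-image points, amounts to an occlusion test. A minimal sketch, assuming depths and AR pixels are given as dictionaries keyed by pixel coordinate (data structures chosen here for illustration only):

```python
def mask_occluded(ar_depth, scene_depth, ar_pixels):
    """Keep only the AR-object pixels whose depth is nearer to the
    point of vision than the corresponding depth-image value; pixels
    behind the recorded environment are dropped before overlay."""
    visible = {}
    for (x, y), color in ar_pixels.items():
        if ar_depth[(x, y)] < scene_depth[(x, y)]:
            visible[(x, y)] = color
    return visible
```

The surviving pixels would then be overlaid on the environmental image, so the AR object appears correctly hidden behind nearer real objects.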
  • Patent number: 9030494
    Abstract: An information processing apparatus including an imaged image input unit inputting an imaged image of a facility imaged in an imaging device to a display control unit, a measurement information input unit inputting measurement information measured by a sensor provided in the facility from the sensor to a creation unit, a creation unit creating a virtual image representing a status of an outside or inside of the facility based on the measurement information input by the measurement information input unit, and a display control unit overlaying and displaying the virtual image created in the creation unit and the imaged image input by the imaged image input unit on a display device.
    Type: Grant
    Filed: July 6, 2012
    Date of Patent: May 12, 2015
    Assignee: NS Solutions Corporation
    Inventors: Noboru Ihara, Kazuhiro Sasao, Masaru Yokoyama, Arata Sakurai, Ricardo Musashi Okamoto
  • Patent number: 9026947
    Abstract: A mobile terminal is provided that includes a camera module for capturing an image; a position-location module for obtaining first information regarding a current location of the mobile terminal; a display unit for displaying the captured image and a viewpoint indicator on the captured image, the viewpoint indicator displayed at a point corresponding to the obtained first information and indicating a direction; a user input unit for receiving an input of a shift command signal for shifting the display of the viewpoint indicator; and a controller for controlling the display unit to shift the display of the viewpoint indicator to a point corresponding to the shift command signal and to display second information regarding at least one entity oriented about the mobile terminal along the indicated direction.
    Type: Grant
    Filed: January 26, 2011
    Date of Patent: May 5, 2015
    Assignee: LG Electronics Inc.
    Inventors: Choonsik Lee, Donghyun Lee
  • Patent number: 9026940
    Abstract: The mobile terminal includes a wireless communication unit configured to receive terminal position information and object related information of at least one object corresponding to the terminal position information; a display module configured to display at least one object indicator indicating the at least one object and display a storage target object indicator region on a background image corresponding to the terminal position information; a user input unit configured to receive a selection of an object indicator; a memory configured to store object related information; and a controller configured to control the components of the mobile terminal. The display module is further configured to display an identifier corresponding to the selected object indicator within the storage target object indicator region, and display an object item list including an object item corresponding to the stored object related information.
    Type: Grant
    Filed: May 18, 2011
    Date of Patent: May 5, 2015
    Assignee: LG Electronics Inc.
    Inventors: Sungho Jung, Jieun Lee
  • Publication number: 20150116353
    Abstract: An object of the present invention is to provide an image processing device and the like that can generate a composite image in a desired focusing condition. In a smartphone 1, an edge detecting section 107 detects an edge as a feature from a plurality of input images taken with different focusing distances, and detects the intensity of the edge as a feature value. A depth estimating section 111 then estimates the depth of a target pixel, which is information representing which of the plurality of input images is in focus at the target pixel, by using the edge intensity detected by the edge detecting section 107. A depth map generating section 113 then generates a depth map based on the estimation results by the depth estimating section 111.
    Type: Application
    Filed: October 29, 2014
    Publication date: April 30, 2015
    Inventors: TAKESHI MIURA, SHOGO SAWAI, YASUHIRO KURODA, HIROSHI KOYAMA, KAZUHIRO HIRAMOTO, KOSUKE ISHIGURO, RYO ONO
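The depth estimation step above picks, per pixel, whichever of the focus-bracketed input images has the strongest edge response, since the sharpest edges indicate the in-focus image. A minimal sketch with plain nested lists as edge-intensity maps:

```python
def estimate_depth(edge_stacks):
    """edge_stacks: a list of 2-D edge-intensity maps, one per focusing
    distance, all the same size. For each pixel, the estimated depth is
    the index of the input image with the strongest edge there."""
    h = len(edge_stacks[0])
    w = len(edge_stacks[0][0])
    depth = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            depth[y][x] = max(range(len(edge_stacks)),
                              key=lambda i: edge_stacks[i][y][x])
    return depth
```

The resulting index map is the depth map from which a composite image in the desired focusing condition can be assembled.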
  • Patent number: 9013483
    Abstract: An object region detection unit (130) decides the region of a physical object of interest in a physical space image. An image manipulation unit (140) performs shading processing of an inclusion region including the decided region. A rendering unit (155) arranges a virtual object in virtual space at the position and orientation of the physical object of interest and generates a virtual space image based on the position and orientation of the user's viewpoint. A composition unit (160) generates a composite image by superimposing the virtual space image on the physical space image that has undergone the shading processing and outputs the generated composite image to an HMD (190).
    Type: Grant
    Filed: October 9, 2008
    Date of Patent: April 21, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventors: Kazuki Takemoto, Takashi Aso
  • Patent number: 9013507
    Abstract: A method includes defining a surface within a first captured image of an environment. The defined surface is identified in a second captured image of the environment. A graphic is overlaid on the surface identified in the second captured image. The second captured image is caused to be displayed to preview the graphic in the environment.
    Type: Grant
    Filed: March 4, 2011
    Date of Patent: April 21, 2015
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Jean-Frederic Plante, Eric G Wiesner, David Edmondson
  • Patent number: 9013505
    Abstract: A mobile data service is delivered to a mobile device having a camera for capturing a live scene, a GPS receiver for determining geographic coordinates of the mobile device, and a display for displaying the live scene. A wireless network exchanges data signals with the mobile device. A server stores a database of virtual objects defined within a geographic space. The server receives the geographic coordinates of the mobile device. The server identifies one or more virtual objects within a field of view in the geographic space determined in response to the geographic coordinates of the mobile device. The server transmits representation image data to the mobile device corresponding to the one or more virtual objects as would be seen according to the field of view. The mobile device generates a representation on the display in response to the representation image data.
    Type: Grant
    Filed: November 27, 2007
    Date of Patent: April 21, 2015
    Assignee: Sprint Communications Company L.P.
    Inventor: Seth A. Thornton
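The server-side step above, identifying virtual objects within a field of view determined from the device's geographic coordinates, can be sketched with a flat-earth approximation: compute each object's bearing from the device and keep those within half the field-of-view angle of the device heading. Coordinates here are local (x = east, y = north), an assumption for illustration; real GPS coordinates would need a geodetic conversion first.

```python
import math

def in_field_of_view(device, heading_deg, fov_deg, obj):
    """True if the virtual object at (x, y) lies within the horizontal
    field of view centered on heading_deg (bearing measured clockwise
    from north), as seen from the device at (x, y)."""
    bearing = math.degrees(math.atan2(obj[0] - device[0],
                                      obj[1] - device[1]))
    # Wrap the bearing difference into [-180, 180) before comparing.
    diff = (bearing - heading_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2
```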
  • Patent number: 9007445
    Abstract: An index extraction unit detects indices from a sensed image sensed by a sensing unit which senses an image of a physical space on which a plurality of indices is laid out. A convergence arithmetic unit calculates position and orientation information of the sensing unit based on the detected indices. A CG rendering unit generates a virtual space image based on the position and orientation information. A sensed image clipping unit extracts, as a display image, an image in a display target region from the sensed image. An image composition unit generates a composite image by compositing the extracted display image and the generated virtual space image. A display unit displays the composite image.
    Type: Grant
    Filed: September 30, 2009
    Date of Patent: April 14, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventors: Makoto Oikawa, Takuya Tsujimoto
  • Patent number: 9008353
    Abstract: A variety of methods and systems involving sensor-equipped portable devices, such as smartphones and tablet computers, are described. One particular embodiment decodes a digital watermark from imagery captured by the device and, by reference to watermark payload data, obtains salient point data corresponding to an object depicted in the imagery. Other embodiments obtain salient point data for an object through use of other technologies (e.g., NFC chips). The salient point data enables the device to interact with the object in a spatially-dependent manner. Many other features and arrangements are also detailed.
    Type: Grant
    Filed: September 29, 2014
    Date of Patent: April 14, 2015
    Assignee: Digimarc Corporation
    Inventor: Joshua V. Aller
  • Patent number: 9008437
    Abstract: An information processing apparatus sets a plurality of reference locations of data in information as one reference location pattern and acquires a feature amount obtained from a value of data of one of the plurality of pieces of reference information in one reference location pattern for each of a plurality of reference location patterns and the plurality of pieces of reference information. The apparatus extracts data included in the input information according to each of the plurality of reference location patterns, selects the reference location pattern for classification of the input information from the plurality of reference location patterns based on a value of data included in the extracted input information, and executes classification of the input information by using the feature amount in the selected reference location pattern and data included in the input information at a reference location indicated by the reference location pattern.
    Type: Grant
    Filed: November 26, 2012
    Date of Patent: April 14, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventors: Yasuhiro Okuno, Katsuhiko Mori
  • Publication number: 20150097859
    Abstract: A method for collocating a clothing accessory on a human body for an electronic apparatus is provided. A human body picture is shown on a display unit, and a human body description file corresponding to a human body model included in the human body picture is obtained from a database. A clothing accessory picture is obtained from another database based on a user choice, together with a clothing accessory description file corresponding to a clothing accessory model included in the clothing accessory picture. The clothing accessory picture is superposed on the human body picture automatically according to the human body and clothing accessory description files.
    Type: Application
    Filed: October 6, 2014
    Publication date: April 9, 2015
    Inventor: Chung-Jen Huang
  • Patent number: 9001154
    Abstract: A method for representing virtual information in a view of a real environment comprises providing a virtual object having a global position and orientation with respect to a geographic global coordinate system, with first pose data on the global position and orientation of the virtual object, in a database of a server, taking an image of a real environment by a mobile device and providing second pose data as to at which position and with which orientation with respect to the geographic global coordinate system the image was taken. The method further includes displaying the image on a display of the mobile device, accessing the virtual object in the database and positioning the virtual object in the image on the basis of the first and second pose data, manipulating the virtual object or adding a further virtual object, and providing the manipulated virtual object with modified first pose data or the further virtual object with third pose data in the database.
    Type: Grant
    Filed: October 11, 2010
    Date of Patent: April 7, 2015
    Assignee: Metaio GmbH
    Inventors: Peter Meier, Michael Kuhn, Frank Angermann