Placing Generated Data In Real Scene Patents (Class 345/632)
  • Patent number: 10510276
    Abstract: An apparatus for controlling a display of a vehicle includes a camera obtaining a face image of a driver, a sensor sensing a location of a seat on which the driver is seated, and a controller. The controller is configured to determine a location of an eye of the driver based on the face image and the location of the seat and to correct a projection location of a virtual image projected onto a display device based on the location of the eye. The apparatus allows the virtual image to be accurately matched with a road by changing the projection location of a virtual image depending on the height of a driver's eye, thereby providing the driver with an undistorted image.
    Type: Grant
    Filed: November 26, 2018
    Date of Patent: December 17, 2019
    Assignees: HYUNDAI MOTOR COMPANY, KIA MOTORS CORPORATION
    Inventor: Hyung Seok Lee
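The correction this abstract describes is essentially similar-triangle geometry: a road point at a given ground distance must be drawn lower on the head-up display's image plane as the driver's eye rises. A minimal sketch, assuming a pinhole-style model with a virtual image plane at a fixed distance (the function name and all distances are illustrative, not from the patent):

```python
def projection_offset(eye_height_m, ground_dist_m, image_plane_dist_m):
    """Vertical drop below eye level, on an image plane at
    image_plane_dist_m, at which a marker must be drawn so that it
    overlays a road point ground_dist_m ahead (similar triangles)."""
    return eye_height_m * image_plane_dist_m / ground_dist_m

# A higher eye position needs the marker drawn lower on the plane:
short_driver = projection_offset(1.1, 20.0, 2.5)   # ~0.1375 m
tall_driver = projection_offset(1.3, 20.0, 2.5)    # ~0.1625 m
correction = tall_driver - short_driver            # ~0.025 m downward shift
```

The seat-position sensor in the abstract feeds the eye-height estimate; here the height is simply passed in.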
  • Patent number: 10508925
    Abstract: A network system, such as a transport management system, selects a pickup location for a trip and navigates a rider to the selected pickup location using augmented reality (AR). Responsive to receiving a trip request including an origin location, a pickup location selection module selects candidate pickup locations within a threshold distance of the rider client device. The pickup location selection module filters and ranks the candidates based on historical service data and location characteristics associated with the origin location as well as any history of pickups of the rider at the origin location and data from the trip request. The top-ranked candidate is selected as the pickup location and sent to the rider and driver client devices. An AR navigation module instructs the rider client device to visually augment a live video stream with computer-generated AR elements to navigate the rider from a current location to the pickup location.
    Type: Grant
    Filed: August 31, 2017
    Date of Patent: December 17, 2019
    Assignee: Uber Technologies, Inc.
    Inventors: John Badalamenti, Joshua Inch, Christopher Michael Sanchez, Theodore Russell Sumers
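The filter-then-rank step above can be sketched as a distance filter followed by a score that rewards historical pickups and penalizes distance. The weights, field names, and planar distance metric are illustrative assumptions; the patent's actual ranking features (historical service data, location characteristics) are richer:

```python
import math

def rank_pickup_candidates(candidates, origin, pickup_history, max_dist_m=200.0):
    """Keep candidates within max_dist_m of the origin, then rank by a
    toy score: past pickups at the spot count for it, distance counts
    against it."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    nearby = [c for c in candidates if dist(c["loc"], origin) <= max_dist_m]

    def score(c):
        return pickup_history.get(c["id"], 0) * 10.0 - dist(c["loc"], origin)

    return sorted(nearby, key=score, reverse=True)

candidates = [
    {"id": "corner", "loc": (50.0, 0.0)},
    {"id": "lobby", "loc": (0.0, 0.0)},
    {"id": "garage", "loc": (500.0, 0.0)},  # filtered out: beyond max_dist_m
]
ranked = rank_pickup_candidates(candidates, (0.0, 0.0), {"corner": 10})
# "corner" outranks "lobby" despite being farther, thanks to its pickup history
```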
  • Patent number: 10504294
    Abstract: A method is provided for providing target object information to a mobile interface device user in a dynamic structural environment. The method includes receiving from the mobile interface device a request for target object information associated with a target object in the dynamic structural environment. A pose of the mobile interface device relative to the target object is determined accounting for spatial differences in the environment coordinate system resulting from changes in the dynamic structure. The method also includes assembling AR target object information for transmission to and display on the mobile interface device and transmitting the AR target object information to the mobile interface device for display in conjunction with a real-time view of the target object.
    Type: Grant
    Filed: November 26, 2018
    Date of Patent: December 10, 2019
    Assignee: Huntington Ingalls Incorporated
    Inventors: Brian Bare, Jonathan Martin, Patrick Ryan, Mark Lawrence, Paul Sells
  • Patent number: 10503269
    Abstract: A system for performing a pinch and hold gesture is described. The system includes a head-mounted display (HMD) and a glove, which is worn by a hand of the user. Each finger segment of the glove includes a sensor for detecting positions of the finger segment when moved by the hand. The system includes a computing device interfaced with the HMD and the glove. The computing device analyzes data from the sensors of the finger segments to determine that a pinch and hold gesture is performed by at least two of the finger segments. Moreover, the computing device generates image data that is communicated to the HMD, such that a scene rendered on the HMD is modified to render a visual cue indicative of a location in the scene with which the pinch and hold gesture is associated.
    Type: Grant
    Filed: November 27, 2018
    Date of Patent: December 10, 2019
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Jeffrey Roger Stafford, Richard L. Marks
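The per-frame sensor analysis in this entry reduces to checking that two fingertip positions stay within a touch threshold for long enough. A sketch under an assumed frame format and assumed thresholds (neither is specified in the abstract):

```python
import math

def is_pinch_and_hold(frames, touch_thresh_m=0.015, hold_frames=5):
    """frames: per-frame dicts mapping finger name -> (x, y, z) fingertip
    position from the glove sensors. Returns True once the thumb and
    index tips stay within touch_thresh_m for hold_frames consecutive
    frames."""
    run = 0
    for f in frames:
        d = math.dist(f["thumb"], f["index"])
        run = run + 1 if d <= touch_thresh_m else 0
        if run >= hold_frames:
            return True
    return False
```

A real system would also locate the pinch in the scene (for the visual cue the abstract mentions); this only covers the detection decision.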
  • Patent number: 10475113
    Abstract: Techniques for generating and presenting images of items within user-selected context images are presented herein. In an example embodiment, an access module can be configured to receive a first environment model and a first wearable item model. A simulation module coupled to the access module may process the environment model to identify placement volumes within the environment model and to place a clothed body model within a placement volume to generate a context model. A rendering module may then generate a context image from the context model. In various embodiments, the environment model used for the context, the wearable item positioned within the environment model, and rendering values used to generate context images may be changed in response to user inputs to generate new context images that are displayed to a user.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: November 12, 2019
    Assignee: eBay Inc.
    Inventors: Mihir Naware, Jatin Chhugani, Jonathan Su
  • Patent number: 10463316
    Abstract: A medical information processing system includes setting circuitry, generating circuitry and output control circuitry. The setting circuitry is configured to set a region of interest in a three-dimensional image that is generated by emitting X-rays to a breast of a subject and imaging the breast from different directions. The generating circuitry is configured to generate reference information in which positional information about the region of interest is associated with a schematic diagram of the breast. The output control circuitry is configured to cause output circuitry to output the reference information.
    Type: Grant
    Filed: September 11, 2017
    Date of Patent: November 5, 2019
    Assignee: Canon Medical Systems Corporation
    Inventors: Atsuko Sugiyama, Mariko Shibata, Yoshimasa Kobayashi, Kei Mori, Koichi Terai, Toshie Maruyama
  • Patent number: 10460457
    Abstract: Image parameters of an overlay image may be adjusted based on image parameters of an optical image displayed in a surgical microscope. The overlay image may then be displayed with the optical image to a user of the surgical microscope.
    Type: Grant
    Filed: June 13, 2017
    Date of Patent: October 29, 2019
    Assignee: Novartis AG
    Inventor: Tammo Heeren
  • Patent number: 10460465
    Abstract: A computer system generates an outline of a roof of a structure based on a set of lateral images depicting the structure. For each image in the set of lateral images, one or more rooflines corresponding to the roof of the structure are determined. The computer system determines how the rooflines connect to one another. Based on the determination, the rooflines are connected to generate an outline of the roof.
    Type: Grant
    Filed: August 31, 2017
    Date of Patent: October 29, 2019
    Assignee: Hover Inc.
    Inventors: Ajay Mishra, William Castillo, A. J. Altman, Manish Upendran
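Determining how rooflines connect can be illustrated with plain 2-D line intersection: two rooflines meet at the intersection of their supporting lines, which becomes a corner of the roof outline. A generic sketch only; the patent's multi-image reasoning is not reproduced here:

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines through (p1, p2) and (p3, p4),
    or None when they are parallel. Each point is an (x, y) tuple."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if d == 0:
        return None  # parallel rooflines never form a corner
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

# Two perpendicular rooflines meet at a roof corner:
corner = line_intersection((0, 0), (4, 0), (3, -1), (3, 5))  # (3.0, 0.0)
```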
  • Patent number: 10460168
    Abstract: Methods for improving data accuracy in a positioning system database using aerial imagery are provided. In one aspect, a method includes obtaining a two-dimensional image providing an aerial perspective of a building including a business, and receiving from a user a path identifier indicative of a virtual roadway proximal to the building, together with a first identifier, a second identifier, and a third identifier marking, in the two-dimensional image, a first edge, a second edge, and an entrance of the business relative to the building. The method also includes determining first, second, and third latitude and longitude pairings on a map for the first identifier, second identifier, and third identifier, respectively, and providing, for display in the image, an identification of the first edge, second edge, and entrance of the business. Systems and machine-readable media are also provided.
    Type: Grant
    Filed: July 15, 2016
    Date of Patent: October 29, 2019
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Wesley Boyer, Michael Chasen, Victor Quinn
  • Patent number: 10430867
    Abstract: Embodiments disclosed herein include virtual apparel fitting systems configured to perform methods comprising generating a first virtual garment carousel that includes images of garments. In operation, a user scrolling through the virtual garment carousel causes a graphical user interface to display images of the garments in the carousel superposed over an image of the user, thereby enabling the user to see how the garments would look on him or her, where virtual fit points of each garment image align with virtual fit points on the image of the user.
    Type: Grant
    Filed: August 23, 2017
    Date of Patent: October 1, 2019
    Assignee: SelfieStyler, Inc.
    Inventors: Kyle Mitchell, Julianne Applegate, Muhammad Ibrahim, Waqas Muddasir, Jeff Portaro, Dustin Ledo
  • Patent number: 10430987
    Abstract: Various embodiments provide for systems, methods, and computer-readable storage media for annotating a digital image with a texture fill. An annotation system may receive a user input defining a border separating a first portion of a target digital image from a second portion of the target digital image. The annotation system may then generate a contour mask, such as a binary mask, for the target digital image based on the user-defined border. The annotation system may then apply a media overlay to the target image based on the contour mask. In particular, the contour mask can define portions of the target digital image such that the annotation system will apply a media overlay to at least one of those portions while not applying the media overlay to remaining portions of the target digital image.
    Type: Grant
    Filed: December 21, 2017
    Date of Patent: October 1, 2019
    Assignee: Snap Inc.
    Inventors: Nan Hu, Xing Mei, Chongyang Ma, Kun Duan
  • Patent number: 10373333
    Abstract: An interactive clothes and accessories fitting method, a display system and a computer-readable recording medium thereof are provided, where the method includes the following steps. While the user is wearing a first apparel, images of the user are continuously captured by using an image capturing device to generate a first image sequence, wherein each first image that forms the first image sequence respectively corresponds to a different pose of the user. While the user is wearing a second apparel, images of the user are continuously captured by using the image capturing device. When a second comparison image corresponding to a specific pose of the user is captured by the image capturing device, a first comparison image corresponding to the specific pose is retrieved from the first image sequence, and the first comparison image and the second comparison image are simultaneously displayed on a screen.
    Type: Grant
    Filed: March 14, 2017
    Date of Patent: August 6, 2019
    Assignee: Wistron Corporation
    Inventors: Jie-Ci Yang, Meng-Chao Kao, Ting-Wei Lin, Hui-Chen Lin, Yu-Ting Li
  • Patent number: 10372226
    Abstract: Embodiments of the invention recognize human visual gestures, as captured by image and video sensors, to develop a visual language for a variety of human computer interfaces. One embodiment provides a method for recognizing a hand gesture positioned by a user hand. The method includes steps of capturing a digital color image of a user hand against a background, applying a general parametric model to the digital color image of the user hand to generate a specific parametric template of the user hand, receiving a second digital image of the user hand positioned to represent a hand gesture, detecting a hand contour of the hand gesture based at least in part on the specific parametric template of the user hand, and recognizing the hand gesture based at least in part on the detected hand contour. Other embodiments include recognizing hand gestures, facial gestures or body gestures captured in a video.
    Type: Grant
    Filed: November 8, 2016
    Date of Patent: August 6, 2019
    Assignee: FASTVDO LLC
    Inventors: Wei Dai, Madhu Peringassery Krishnan, Pankaj Topiwala
  • Patent number: 10319344
    Abstract: A system consisting of a plurality of terminals storing image data and an image displaying device, connected with each other through a network capable of two-way communication, suffers reduced throughput when image data transfer over the network is slow. The image displaying device performs two-way communications with each of the terminals by a communication means at the display device side. The image displaying device also acquires image data from a relevant terminal while using the image data acquisition controlling means to instruct the other terminals to suspend transmission of image data. The image displaying device then displays images by the image displaying means based on the acquired image data.
    Type: Grant
    Filed: February 28, 2014
    Date of Patent: June 11, 2019
    Assignee: SEIKO EPSON CORPORATION
    Inventors: Minoru Sato, Shinji Kubota, Tomohiro Nomizo
  • Patent number: 10282903
    Abstract: The program matches a VR environment of a user with an optimal physical location. The method loads a VR program, and detects at least one location-based goal of the user based on a virtual environment of the VR program. The method determines if a current location of the user is an optimal physical location for the at least one location-based goal of the user, based on a plurality of determined criteria associated with the current location and the determined virtual environment associated with the VR program. The method searches for an optimal physical location based on at least one location-based goal of the user and matches the optimal physical location of the user with the at least one location-based goal of the user in the VR program based on the plurality of determined criteria associated with the optimal physical location and the determined virtual environment associated with the VR program.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: May 7, 2019
    Assignee: International Business Machines Corporation
    Inventors: Adam T. Clark, Jeffrey K. Huebert, Aspen L. Payton, John E. Petri
  • Patent number: 10255703
    Abstract: A system and method for generating an original image are provided. In example embodiments, a user may select a category with a plurality of images. Common image attributes from the plurality of images within the user-selected category are identified. A base image is generated using the plurality of images associated with the identified common image attributes. An original image is generated by varying attributes within the base image.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: April 9, 2019
    Assignee: eBay Inc.
    Inventor: Sergio Pinzon Gonzales, Jr.
  • Patent number: 10237958
    Abstract: A decision support unit for an outdoor lighting network (100) is disclosed. The outdoor lighting network includes a plurality of lighting units (LU1-LU8) grouped in a plurality of zones (30, 31). The decision support unit includes a controller (20) including a safety monitor module (23) arranged to determine a safety factor for each of the plurality of zones (30, 31). The safety factor for each of the plurality of zones (30, 31) is determined using at least one factor that represents an aspect contributing to an assessment of that zone's safety. The decision support unit may also include a decision module (24) that determines a repair priority of a faulty lighting unit (LU1-LU8) using the safety factors determined by the safety monitor module (23).
    Type: Grant
    Filed: April 21, 2014
    Date of Patent: March 19, 2019
    Assignee: PHILIPS LIGHTING HOLDING B.V.
    Inventors: Leszek Holenderski, Ingrid Christina Maria Flinsenberg, Alexandre Georgievich Sinitsyn
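The zone scoring and repair ordering above can be sketched as a weighted sum over contributing factors followed by a sort. The factor names, weights, and the convention that a higher score means a riskier zone (so its faulty units are repaired first) are all illustrative assumptions, not details from the patent:

```python
def zone_risk_factor(zone, weights=None):
    """Weighted combination of per-zone factors; higher means the zone's
    safety assessment is worse, so lighting repairs there are more urgent."""
    weights = weights or {"crime_rate": 0.5, "traffic": 0.3, "footfall": 0.2}
    return sum(w * zone[name] for name, w in weights.items())

def repair_priority(faulty_units, zones):
    """Order faulty lighting units so that units in riskier zones come first."""
    return sorted(faulty_units,
                  key=lambda u: zone_risk_factor(zones[u["zone"]]),
                  reverse=True)
```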
  • Patent number: 10236971
    Abstract: To provide a technique capable of reducing the transmission delay of an image, a communication apparatus wirelessly communicable with another communication apparatus receives a compressed captured image from the other communication apparatus; decompresses the compressed captured image to obtain a decompressed captured image; generates, based on the decompressed captured image, a CG image to be superimposed and displayed on the captured image; compresses the CG image to generate a compressed CG image; and transmits the compressed CG image to the other communication apparatus. The communication apparatus then derives a delay time based on a transmission time of the compressed captured image in the other communication apparatus and a generation completion time of the CG image, and changes a compression ratio of the CG image based on whether the delay time is longer than a predetermined threshold.
    Type: Grant
    Filed: June 3, 2016
    Date of Patent: March 19, 2019
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Yutaka Ui, Hiroichi Yamaguchi, Kazuki Takemoto
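The feedback loop in the last sentences of this abstract, measuring the delay and then adjusting the CG compression ratio against a threshold, can be sketched as a simple controller. The step size, ratio bounds, and 50 ms threshold are illustrative assumptions:

```python
def update_compression_ratio(ratio, tx_time_s, cg_done_time_s,
                             threshold_s=0.050, step=0.1, lo=0.1, hi=0.9):
    """Delay = time from the other apparatus transmitting the compressed
    captured image to CG-image generation completing. Compress harder
    (higher ratio) when the delay exceeds the threshold, relax otherwise."""
    delay = cg_done_time_s - tx_time_s
    if delay > threshold_s:
        return min(hi, ratio + step)
    return max(lo, ratio - step)
```

Clamping at `lo` and `hi` keeps the controller from driving the CG stream to either uncompressed or unusable quality.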
  • Patent number: 10230779
    Abstract: There is provided a content provision system including an information processing apparatus and a terminal device wherein the information processing apparatus provides a content that includes images respectively captured at discrete capturing locations to the terminal device, the information processing apparatus comprising: a retrieving unit configured to retrieve the content and a position information item associated with the content; and a transmission unit configured to transmit the content and the position information item to the terminal device; the terminal device comprising: a reception unit configured to receive the content and the position information item; and a content reproduction unit configured to reproduce the content by selecting an image to be displayed and displaying a partial area of the selected image, the displayed partial area being extracted from the selected image to show a view in a direction of the designated position for at least one of the images.
    Type: Grant
    Filed: April 22, 2016
    Date of Patent: March 12, 2019
    Assignee: Ricoh Company, Ltd.
    Inventor: Kei Kushimoto
  • Patent number: 10213274
    Abstract: The method of tracking and navigation for a dental instrument uses feature extraction, a feature space transformation and a fusion procedure to detect the location of a target, such as a marker placed on a patient's jaw, as well as detecting the location of a dental instrument with respect to the target for guiding a dental practitioner during a procedure. Detection is performed by identifying potential locations for the target and then refining the potential locations based on information from previous detection frames. Given an initial estimate of the target's three-dimensional location, the estimate is improved through iteratively updated information.
    Type: Grant
    Filed: March 12, 2018
    Date of Patent: February 26, 2019
    Assignee: King Saud University
    Inventors: Tariq Abdulrahman Alshawi, Asma'A Abdurrahman Al-Ekrish, Saleh Abdullah Alshebeili
  • Patent number: 10198855
    Abstract: A method of providing a virtual experience to a user includes identifying a plurality of virtual objects. The method further includes detecting a position of a part of the user's body other than the user's head. The method further includes detecting a reference line of sight of the user. The method further includes setting an extension direction for a first virtual object of the plurality of virtual objects based on a direction of the reference line of sight. The method further includes setting a region for a first virtual object of the plurality of virtual objects, wherein the region comprises a part extending in the extension direction. The method further includes determining whether the first virtual object and a virtual representation of the part of the body have touched based on a positional relationship between the region and a position of the virtual representation of the part of the body.
    Type: Grant
    Filed: July 19, 2017
    Date of Patent: February 5, 2019
    Assignee: COLOPL, INC.
    Inventor: Shuhei Terahata
  • Patent number: 10188938
    Abstract: A method, apparatus and non-transitory computer readable medium that, in one embodiment, interprets a user motion sequence comprises beginning a session, capturing the user motion sequence via a motion capturing device during the session, processing, via a processor, the user motion sequence into a predetermined data format, comparing the processed user motion sequence to at least one predetermined motion sequence stored in a database and determining whether to perform at least one of interpreting the user motion sequence as a universal command and registering the user motion sequence as a new command.
    Type: Grant
    Filed: October 15, 2017
    Date of Patent: January 29, 2019
    Assignee: Open Invention Network LLC
    Inventor: Carey Leigh Lotzer
  • Patent number: 10168794
    Abstract: Embodiments of the invention recognize human visual gestures, as captured by image and video sensors, to develop a visual language for a variety of human computer interfaces. One embodiment of the invention provides a computer-implemented method for recognizing a visual gesture portrayed by a part of the human body such as a human hand, face or body. The method includes steps of receiving the visual gesture captured in a video having multiple video frames, and determining a gesture recognition type from multiple gesture recognition types, including shape-based gesture, position-based gesture, motion-assisted gesture, and mixed gesture combining two different gesture types. The method further includes steps of selecting a visual gesture recognition process based on the determined gesture type and applying the selected visual gesture recognition process to the multiple video frames capturing the visual gesture in order to recognize it.
    Type: Grant
    Filed: November 27, 2017
    Date of Patent: January 1, 2019
    Assignee: FASTVDO LLC
    Inventors: Wei Dai, Madhu Peringassery Krishnan, Pankaj Topiwala
  • Patent number: 10140079
    Abstract: Aspects of the present disclosure relate to shadowing objects displayed in head worn computing. A method includes capturing an image of an environment in proximity to a person, analyzing the image to determine a position of each of a plurality of light sources collectively producing a naturally formed shadow in the environment, wherein the naturally formed shadow comprises multiple shadows cast from an individual object in the environment, each of the multiple shadows formed from light traveling from a position of one of the plurality of light sources to the individual object, and displaying a computer-generated object in association with a computer generated shadow, wherein the computer-generated shadow appears as though produced by light striking the computer generated object from the position of a dominant one of the plurality of light sources.
    Type: Grant
    Filed: January 24, 2018
    Date of Patent: November 27, 2018
    Assignee: Osterhout Group, Inc.
    Inventors: Robert Michael Lohse, Edward H. Nortrup
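Once per-light positions are recovered from the naturally formed multi-shadow, the computer-generated shadow in this entry follows the dominant source. A sketch that assumes each detected source carries an estimated intensity and a 2-D ground-plane position (both are illustrative simplifications of the abstract):

```python
def dominant_light(light_sources):
    """Pick the detected source with the greatest estimated intensity;
    the computer-generated shadow is cast as if lit only by it."""
    return max(light_sources, key=lambda s: s["intensity"])

def shadow_direction(light_pos, object_pos):
    """Ground-plane direction from the light through the object, i.e.
    the side of the object on which the CG shadow should be drawn."""
    return (object_pos[0] - light_pos[0], object_pos[1] - light_pos[1])

lights = [{"pos": (0.0, 10.0), "intensity": 0.3},
          {"pos": (5.0, 10.0), "intensity": 0.9}]
key_light = dominant_light(lights)
direction = shadow_direction(key_light["pos"], (5.0, 0.0))
```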
  • Patent number: 10134084
    Abstract: A method of facilitating an augmented reality experience to purchase an item at a merchant location may be provided. The method may include storing profile data, receiving location data and environmental data from a computing device associated with the stored profile data. Upon determining that the user device has entered a predefined merchant location, the method may include initiating a sequence of augmented reality modes including at least a first augmented reality mode associated with the selection of an item and a second augmented reality mode associated with the payment of the item. The user device may display virtual content in association with each mode, and upon detecting predetermined user inputs such as gestures, fixed gazes, or moving through thresholds, the system may enable the selection and payment of one or more items by sending a purchase request to a merchant terminal.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: November 20, 2018
    Assignee: CAPITAL ONE SERVICES, LLC
    Inventors: David Gabriele, Justin Smith, Damaris Kroeber
  • Patent number: 10134193
    Abstract: Disclosed is a smart mirror system for hairstyling using virtual reality, the smart mirror system including: a mirror display provided with a camera and an angle adjusting means, the mirror display being provided on a wall of a hair salon; a chair rotatably provided in front of the mirror display; and a smart device for being mirrored with the mirror display, such that a user uses the mirror display by manipulating the smart device, wherein the smart device is configured to allow hairstyles that match sex and an age group provided by using an app or a server or hairstyles of celebrities provided by Internet search to be displayed on the mirror display by mirroring; and when one of the hairstyles is selected, a selected hairstyle is applied to an image of the user displayed on the mirror display, thereby being three-dimensionally displayed in response to a user's movement.
    Type: Grant
    Filed: December 19, 2016
    Date of Patent: November 20, 2018
    Assignee: ELI VISION CO., LTD
    Inventor: Deog Geun Ahn
  • Patent number: 10127730
    Abstract: Disclosed are an augmented reality and virtual reality location-based attraction simulation playback and creation system that simulates past attractions and preserves present attractions as location-based augmented reality and virtual reality simulations, along with processes for simulating and preserving such attractions.
    Type: Grant
    Filed: August 29, 2017
    Date of Patent: November 13, 2018
    Inventor: Jason Kristopher Huddy
  • Patent number: 10107747
    Abstract: The invention relates to a method for analyzing reflected light of an object. First, a light angle distribution for a point of the object is determined. A plenoptic projector is then controlled to illuminate the point of the object with the determined light angle distribution. The reflected light intensity of the point of the object is then measured, and the measured reflected light is analyzed in dependence on the determined light angle distribution.
    Type: Grant
    Filed: May 31, 2013
    Date of Patent: October 23, 2018
    Assignee: Ecole Polytechnique Federale de Lausanne (EPFL)
    Inventors: Loic A. Baboulaz, Martin Vetterli, Paolo Prandoni
  • Patent number: 10105601
    Abstract: Systems and methods for rendering of a virtual content object in an augmented reality environment based on a physical marker are discussed herein. Virtual content objects may be rendered by a display device in an augmented reality environment based on the field of view seen through the display device and a position of a marker (and one or more linkage points associated with the marker) in the real world. When rendered in the augmented reality environment, the virtual content objects may be visualized from any angle, from the exterior or interior of the object, and manipulated in response to user input. Virtual content objects and/or user visualizations of virtual content objects may be shared with other users (local and/or remote), enabling multiple users to potentially build, modify, and/or interact with a virtual content object simultaneously and/or cooperatively.
    Type: Grant
    Filed: October 27, 2017
    Date of Patent: October 23, 2018
    Inventor: Nicholas T. Hariton
  • Patent number: 10102627
    Abstract: A head-mounted display device includes an image display unit configured to cause a user to visually recognize image light as a virtual image on the basis of image data and to cause the user to visually recognize an outside scene in a state in which the image display unit is worn on the head of the user, an image pickup unit configured to pick up an image of the outside scene, and a control unit configured to cause the user, using the image display unit, to visually recognize a specific virtual image associated with a combination of the kind and the shape of a mark image when the mark image, which is an image of a specific mark, is included in the picked-up image.
    Type: Grant
    Filed: May 15, 2015
    Date of Patent: October 16, 2018
    Assignee: SEIKO EPSON CORPORATION
    Inventors: Atsunari Tsuda, Masahide Takano, Toshikazu Uchiyama, Hitomi Wakamiya
  • Patent number: 10086760
    Abstract: An image processing device according to an aspect of the embodiment includes an image generating unit that generates an image at a virtual viewpoint based on a captured image of an image capturing unit, and an image processing unit that generates an image in which an image for synthesis is displayed on the image at the virtual viewpoint. The image processing unit performs a process for decreasing visibility of the image for synthesis when the image at the virtual viewpoint is an image at a viewpoint location of the virtual viewpoint while the viewpoint location is moving.
    Type: Grant
    Filed: April 5, 2016
    Date of Patent: October 2, 2018
    Assignee: DENSO TEN Limited
    Inventors: Takayuki Ozasa, Teruhiko Kamibayashi, Kohji Ohnishi, Takeo Matsumoto, Tomoyuki Fujimoto, Daisuke Yamamoto
  • Patent number: 10062210
    Abstract: Methods, systems, computer-readable media, and apparatuses for radiance transfer sampling for augmented reality are presented. In some embodiments, a method includes receiving at least one video frame of an environment. The method further includes generating a surface reconstruction of the environment. The method additionally includes projecting a plurality of rays within the surface reconstruction of the environment. Upon projecting a plurality of rays within the surface reconstruction of the environment, the method includes generating illumination data of the environment from the at least one video frame. The method also includes determining a subset of rays from the plurality of rays in the environment based on areas within the environment needing refinement. The method further includes rendering the virtual object over the video frames based on the plurality of rays excluding the subset of rays.
    Type: Grant
    Filed: February 25, 2014
    Date of Patent: August 28, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Lukas Gruber, Jr., Dieter Schmalstieg
  • Patent number: 10046232
    Abstract: A method includes acquiring media content of a wagering game table at a wagering game establishment with a camera of a mobile device. A location of the mobile device is determined when the media content is acquired. A direction that a lens of the camera is facing when the media content is acquired is determined. The wagering game table is identified based on the location and the direction. Overlay imagery derived from wagering game activity of the wagering game table is downloaded into the mobile device from a server. The overlay imagery is composited onto the media content to create a composited media content. The composited media content is displayed on a display of the mobile device.
    Type: Grant
    Filed: February 15, 2017
    Date of Patent: August 14, 2018
    Assignee: BALLY GAMING, INC.
    Inventors: Dion K. Aoki, Mark B. Gagner, Sean P. Kelly, Nickey C. Shin
  • Patent number: 10026176
    Abstract: Features are disclosed for an automatic segmentation and alignment of images for display via an interface. The images may have different scales and lengths. As such, items shown in the images, such as clothing, may not be depicted in a uniform way. Segmentation of the images into image portions where a portion of an image shows a specific item is described. The segmentation may be achieved using models and/or complex image analysis. To provide a realistic view of the subject when image segments are presented together on an interface, additional alignment of the image segments may be performed. The alignment may be achieved using models and/or complex image analysis.
    Type: Grant
    Filed: March 8, 2016
    Date of Patent: July 17, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Alexander Adrian Hugh Davidson, Charles Shearer Dorner, Douglas J. Gradt, Jongwoo Lee, Eva Manolis, Maggie McDowell, Ning Yao
  • Patent number: 9997140
    Abstract: A control method executed by a processor included in an information processing device includes receiving a content; extracting one or more keywords of the content from the content; acquiring information in which an identifier identifying a target object, image information on the target object, and one or more keywords of the target object are associated; calculating a position of the target object on a camera image, based on the image information, when the target object is included in the camera image; calculating a display position of an image relating to the content, based on the position of the target object, when the one or more keywords of the content and the one or more keywords of the target object have a correspondence relationship; and displaying the image relating to the content on a screen, in a state of being superimposed onto the camera image, at the display position.
    Type: Grant
    Filed: October 15, 2015
    Date of Patent: June 12, 2018
    Assignee: FUJITSU LIMITED
    Inventors: Takuya Sakamoto, Hidenobu Ito, Kazuaki Nimura
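The keyword-correspondence and display-position steps can be sketched as follows. All target ids, keywords, positions, and the fixed offset are hypothetical; the patent does not specify them.

```python
# Hypothetical sketch: content keywords are compared with each
# registered target object's keywords, and the content image is
# anchored at a display position derived from the matched object's
# position on the camera image.

def match_target(content_keywords, targets):
    """Return the first target whose keywords overlap the content's, or None."""
    for target in targets:
        if set(content_keywords) & set(target["keywords"]):
            return target
    return None

def display_position(target_pos, offset=(0, -40)):
    """Place the content image at a fixed offset from the target's position."""
    return (target_pos[0] + offset[0], target_pos[1] + offset[1])

targets = [
    {"id": "poster", "keywords": {"movie", "trailer"}, "pos": (320, 200)},
    {"id": "menu", "keywords": {"lunch", "price"}, "pos": (100, 400)},
]
hit = match_target(["price", "special"], targets)
pos = display_position(hit["pos"])
```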
  • Patent number: 9990034
    Abstract: Disclosed is a transparent display device. A transparent display device according to one embodiment displays a predetermined image through a transparent display panel and senses the direction of the line of sight of a person gazing at the transparent display panel, thereby displaying detailed information of an object on the transparent display panel if the line of sight of the person is directed toward the object positioned on the rear surface of the transparent display panel.
    Type: Grant
    Filed: November 15, 2013
    Date of Patent: June 5, 2018
    Assignee: LG ELECTRONICS INC.
    Inventor: Gunho Lee
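The gaze test described above reduces to a ray-to-point distance check. A 2D toy version under assumed geometry (the radius and coordinates are invented for the example):

```python
# Hypothetical sketch: decide whether the viewer's line of sight,
# extended through the transparent panel, passes close enough to a
# known object behind it; if so, the object's detailed information
# would be shown on the panel.

def gaze_hits_object(eye, gaze_dir, obj_pos, radius=0.5):
    """True if the gaze ray passes within `radius` of obj_pos (2D)."""
    ex, ey = eye
    dx, dy = gaze_dir
    ox, oy = obj_pos
    vx, vy = ox - ex, oy - ey
    norm = (dx * dx + dy * dy) ** 0.5
    dx, dy = dx / norm, dy / norm        # unit gaze direction
    t = vx * dx + vy * dy                # distance along the ray
    if t < 0:
        return False                     # object is behind the viewer
    cx, cy = ex + t * dx, ey + t * dy    # closest point on the ray
    return ((cx - ox) ** 2 + (cy - oy) ** 2) ** 0.5 <= radius

shown = gaze_hits_object(eye=(0.0, 0.0), gaze_dir=(0.0, 1.0),
                         obj_pos=(0.2, 3.0))
```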
  • Patent number: 9983684
    Abstract: Methods and devices for displaying a virtual affordance with a virtual target are disclosed. In one example, the virtual target is displayed to a user via a display device. The user's point of gaze is determined to be at a gaze location within a target zone including the virtual target. The user's hand is determined to be at a hand location within a designated tracking volume. Based on at least determining that the user's gaze is at the gaze location and the user's hand is at the hand location, the virtual affordance is displayed at a landing location corresponding to the virtual target, where the landing location is independent of both the gaze location and the user's hand location. Movement of the user's hand is tracked and the virtual affordance is modified in response to the movement.
    Type: Grant
    Filed: November 2, 2016
    Date of Patent: May 29, 2018
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Jia Wang, Yasaman Sheri, Julia Schwarz, David J. Calabrese, Daniel B. Witriol
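The trigger logic in this abstract is a conjunction of two containment tests, with a landing location that does not depend on either input. A minimal sketch with invented zone and volume bounds:

```python
# Hypothetical sketch: the virtual affordance appears at a fixed
# landing location tied to the virtual target, but only while the gaze
# is inside the target zone AND the hand is inside the designated
# tracking volume.

def in_box(point, box_min, box_max):
    """Axis-aligned containment test for a 3D point."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, box_min, box_max))

def affordance_location(gaze, hand, target_zone, tracking_volume,
                        landing_location):
    """Return the landing location if both conditions hold, else None.

    Note the landing location is independent of both the gaze location
    and the hand location, matching the abstract.
    """
    if in_box(gaze, *target_zone) and in_box(hand, *tracking_volume):
        return landing_location
    return None

zone = ((0, 0, 0), (2, 2, 2))
volume = ((-1, -1, -1), (1, 1, 1))
loc = affordance_location((1, 1, 1), (0, 0, 0), zone, volume, (1.5, 1.0, 0.5))
```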
  • Patent number: 9956915
    Abstract: A dump truck having a periphery monitoring apparatus is adapted to display an image of surroundings of the dump truck on a display apparatus. A rearward camera is attached to a vehicle body of the dump truck underneath a vessel of the dump truck and positioned forward of a rear edge portion of the vessel. A rearward camera image captured by the rearward camera includes the rear edge portion of the vessel, an area below the vessel, and an area rearward of the vehicle body. A display control section is configured to display the rear edge portion of the vessel in an upper region of the rearward camera image, to display a ground surface below the vessel in a lower region of the rearward camera image, and to display a vehicle body outer edge line obtained by vertically projecting an outer edge of the vessel onto the ground surface.
    Type: Grant
    Filed: December 28, 2016
    Date of Patent: May 1, 2018
    Assignee: KOMATSU LTD.
    Inventors: Shinji Mitsuta, Shigeru Harada, Tomikazu Tanuki, Eishin Masutani, Yukihiro Nakanishi, Takeshi Kurihara, Dai Tsubone, Masaomi Machida
  • Patent number: 9953438
    Abstract: In a system for automated annotation of images and videos, a user points a mobile device towards an object of interest, such as a building or landscape scenery, and the device displays an image of the scene with an annotation for the object. An annotation can include names, historical information, and links to databases of images, videos, and audio files. Different techniques can be used for determining positional placement of annotations, and, by using multiple techniques, positioning can be made more precise and reliable. The level of detail of annotation information can be adjusted according to the precision of the techniques used. A trade-off can be taken into account between precision of annotation and communication cost, delay and/or power consumption. An annotation database can be updated in a self-organizing way. Public information as available on the web can be converted to annotation data.
    Type: Grant
    Filed: February 25, 2011
    Date of Patent: April 24, 2018
    Assignee: Ecole Polytechnique Federale de Lausanne (EPFL)
    Inventors: Luciano Sbaiz, Martin Vetterli
  • Patent number: 9928019
    Abstract: Aspects of the present disclosure relate to shadowing objects displayed in head worn computing.
    Type: Grant
    Filed: February 21, 2014
    Date of Patent: March 27, 2018
    Assignee: Osterhout Group, Inc.
    Inventors: Robert Michael Lohse, Edward H. Nortrup
  • Patent number: 9916681
    Abstract: Sensory properties such as occlusion, shadowing, and reflection are integrated among physical and notional (e.g., virtual/augmented) visual or other sensory content, providing an appearance of similar occlusion, shadowing, etc. in both models. A reference position, a physical data model representing physical entities, and a notional data model are created or accessed. A first sensory property from either data model is selected. A second sensory property is determined corresponding with the first sensory property, and notional sensory content is generated from the notional data model with the second sensory property applied thereto. The notional sensory content is outputted to the reference position with a see-through display. Consequently, notional entities may appear occluded by physical entities, physical entities may appear to cast shadows from notional light sources, etc.
    Type: Grant
    Filed: October 31, 2015
    Date of Patent: March 13, 2018
    Assignee: Atheer, Inc.
    Inventors: Greg James, Allen Yang Yang, Sleiman Itani
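One such sensory property, occlusion, comes down to a per-pixel depth comparison. A toy sketch under assumed depth conventions (smaller depth = nearer the viewer); the values are invented:

```python
# Hypothetical per-pixel sketch of occlusion on a see-through display:
# a notional (virtual) fragment is drawn only where it is nearer to the
# viewer than the physical surface, so physical entities appear to
# occlude notional ones.

def composite_with_occlusion(physical_depth, virtual_depth, virtual_color):
    """Return per-pixel virtual colors, or None where the physical wins."""
    out = []
    for pd, vd, vc in zip(physical_depth, virtual_depth, virtual_color):
        # Draw the virtual fragment only if it is closer than the
        # physical surface at this pixel.
        out.append(vc if vd < pd else None)
    return out

phys = [2.0, 1.0, 3.0]           # depth of physical entities per pixel
virt = [1.5, 1.5, 1.5]           # depth of the notional entity per pixel
color = ["red", "red", "red"]    # notional fragment color per pixel
visible = composite_with_occlusion(phys, virt, color)
```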
  • Patent number: 9898871
    Abstract: This disclosure relates to system and methods for providing augmented reality experience based on a relative position of objects. Augmented reality experience based on a relative position of object may be provided by detecting a first object and a second object. Positions and orientations of the first object and the second object may be determined. A first visual effect may be determined for the first object and a second visual effect may be determined for the second object. Overlay positions and orientations for the first visual effect and the second visual effect may be determined. An overlay image including the first visual effect and the second visual effect may be determined, and the overlay image may be displayed. An interaction between the first visual effect and the second visual effect may be determined based on the relative position of the first object and the second object.
    Type: Grant
    Filed: October 5, 2016
    Date of Patent: February 20, 2018
    Assignee: Disney Enterprises, Inc.
    Inventors: Richard Reagan, Sagar Mistry, Nathan Allison
  • Patent number: 9898867
    Abstract: A method for providing information associated with a lift process to a mobile device user is presented. The method comprises receiving a request for lift environment information from a mobile interface device, determining a pose of the mobile interface device relative to a lift process target area, and obtaining lift environment information for at least a portion of the lift process target area. The lift environment information is used to assemble AR lift information for transmission to and display on the mobile interface device. The AR lift information is configured for viewing in conjunction with a real-time view of the lift process target area captured by the mobile interface device. The AR lift information is then transmitted to the mobile interface device for display.
    Type: Grant
    Filed: July 15, 2015
    Date of Patent: February 20, 2018
    Assignee: Huntington Ingalls Incorporated
    Inventors: Brian Bare, Jonathan Martin, Patrick Ryan, Paul Sells, Mark Lawrence
  • Patent number: 9892563
    Abstract: A system and method for generating a mixed-reality environment is provided. The system and method provides a user-worn sub-system communicatively connected to a synthetic object computer module. The user-worn sub-system may utilize a plurality of user-worn sensors to capture and process data regarding a user's pose and location. The synthetic object computer module may generate and provide to the user-worn sub-system synthetic objects based on information defining a user's real-world scene or environment and indicating the user's pose and location. The synthetic objects may then be rendered on a user-worn display, thereby inserting the synthetic objects into a user's field of view. Rendering the synthetic objects on the user-worn display creates the virtual effect for the user that the synthetic objects are present in the real world.
    Type: Grant
    Filed: March 21, 2017
    Date of Patent: February 13, 2018
    Assignee: SRI International
    Inventors: Rakesh Kumar, Taragay Oskiper, Oleg Naroditsky, Supun Samarasekera, Zhiwei Zhu, Janet Yonga Kim Knowles
  • Patent number: 9875566
    Abstract: At a first device there is received from a second device (i) a native pixilated image and (ii) interactive filter data associated with the image. The filter data corresponds to an interactive filter applied to the image. A first representation of the image is displayed in accordance with the interactive filter data on the display. All or a first subset of the pixels of the image are obscured in the first representation. Responsive to user input, for a period of time specified by the filter, a second representation of the image is displayed in place of the first representation. None or a second subset of the pixels of the image is obscured in the second representation, where the second subset is less than the first subset. Then there is displayed on the display, after the limited period of time has elapsed, the first representation in place of the second representation.
    Type: Grant
    Filed: December 30, 2015
    Date of Patent: January 23, 2018
    Assignee: Glu Mobile, Inc.
    Inventors: Sourabh Ahuja, Liang Wu, Michael Mok, Lian A. Amaris
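The timed-reveal behavior described above is a small state machine. A minimal sketch with explicit timestamps (the class name and the five-second limit are invented for the example):

```python
# Hypothetical sketch of the interactive filter state: the obscured
# first representation is shown by default; user input reveals the
# clear second representation for the filter's specified period, after
# which the obscured representation is displayed again.

class TimedRevealFilter:
    def __init__(self, reveal_seconds):
        self.reveal_seconds = reveal_seconds
        self.revealed_at = None

    def tap(self, now):
        """User input starts the limited reveal period."""
        self.revealed_at = now

    def representation(self, now):
        """Which representation to display at time `now` (seconds)."""
        if (self.revealed_at is not None
                and now - self.revealed_at < self.reveal_seconds):
            return "clear"
        return "obscured"

f = TimedRevealFilter(reveal_seconds=5.0)
before = f.representation(now=0.0)
f.tap(now=1.0)
during = f.representation(now=3.0)
after = f.representation(now=7.0)
```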
  • Patent number: 9870644
    Abstract: Image processing apparatus and method. A modeling unit may generate a three-dimensional (3D) model corresponding to an actual object from an input color image and an input depth image corresponding to the input color image. A calculator may perform photon-based rendering with respect to an input virtual object and the 3D model based on input light environment information, and may generate a difference image comprising color distortion information occurring by inserting the virtual object into the 3D model. A rendering unit may generate a result image comprising the virtual object by synthesizing the difference image and the input color image.
    Type: Grant
    Filed: June 13, 2012
    Date of Patent: January 16, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: In Woo Ha, Tae Hyun Rhee
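The difference-image step above is in the spirit of differential rendering: the color distortion caused by the virtual object (its shadows and reflected light) is the difference between a render with and without it, added back onto the input color image. A per-pixel toy sketch with invented intensities:

```python
# Hypothetical per-pixel sketch: the difference image captures only the
# color distortion the virtual object introduces into the 3D model, and
# the result image is the input color image plus that distortion.

def difference_image(render_with_virtual, render_without_virtual):
    """Per-pixel color distortion caused by inserting the virtual object."""
    return [a - b for a, b in zip(render_with_virtual, render_without_virtual)]

def compose_result(input_color, diff):
    """Synthesize the result image, clamped to the valid intensity range."""
    return [min(1.0, max(0.0, c + d)) for c, d in zip(input_color, diff)]

with_v = [0.2, 0.9, 0.5]      # render of the 3D model with the virtual object
without_v = [0.4, 0.6, 0.5]   # render of the 3D model alone
diff = difference_image(with_v, without_v)
result = compose_result([0.5, 0.5, 0.5], diff)
```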
  • Patent number: 9860453
    Abstract: Methods and systems for estimating HDR sky light probes for outdoor images are disclosed. A precaptured sky light probe database is leveraged. The database includes a plurality of HDR sky light probes captured under a plurality of different illumination conditions. A HDR sky light probe is estimated from an outdoor image by fitting a three dimensional model to an object of interest in the image and solving an inverse optimization lighting problem for the 3D model where the space of possible HDR sky light probes is constrained by the HDR sky light probes of the database.
    Type: Grant
    Filed: February 4, 2015
    Date of Patent: January 2, 2018
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Iain Matthews, Jean-Francois Lalonde
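The constrained inverse lighting problem above can be illustrated in miniature: express the unknown sky light probe as a non-negative blend of database probes and fit the blend weights to the lighting observed on the model. This toy uses projected gradient descent with orthogonal three-element "probes"; all values and the solver are invented for the example, not the paper's optimizer.

```python
# Hypothetical toy version of the database-constrained estimation: the
# space of possible sky light probes is spanned by database probes, and
# non-negative blend weights minimize squared error against the
# observed lighting.

def fit_probe_weights(observed, database_probes, steps=2000, lr=0.05):
    """Projected gradient descent for non-negative least squares."""
    n = len(database_probes)
    w = [1.0 / n] * n
    for _ in range(steps):
        # Residual of the current blend against the observation.
        blend = [sum(w[k] * database_probes[k][i] for k in range(n))
                 for i in range(len(observed))]
        resid = [b - o for b, o in zip(blend, observed)]
        for k in range(n):
            grad = sum(2.0 * r * database_probes[k][i]
                       for i, r in enumerate(resid))
            w[k] = max(0.0, w[k] - lr * grad)   # project onto w >= 0
    return w

probes = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]   # two database sky probes
observed = [0.7, 0.3, 0.0]                     # lighting seen on the 3D model
weights = fit_probe_weights(observed, probes)
```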
  • Patent number: 9843736
    Abstract: Certain aspects of the technology disclosed herein integrate a camera with an electronic display. An electronic display includes several layers, such as a cover layer, a color filter layer, a display layer including light emitting diodes or organic light emitting diodes, a thin film transistor layer, etc. A processor initiates light emission from a plurality of display elements. The processor suspends the light emission from the plurality of display elements for a period of time imperceptible to a human observer. The processor initiates a camera to capture an image during the period of time the plurality of display elements are suspended. The processor can capture a plurality of images corresponding to a plurality of pixels and produce an image comprising depth information.
    Type: Grant
    Filed: February 27, 2017
    Date of Patent: December 12, 2017
    Assignee: ESSENTIAL PRODUCTS, INC.
    Inventors: David John Evans, V, Xinrui Jiang, Andrew E. Rubin, Matthew Hershenson, Xiaoyu Miao, Joseph Anthony Tate, Jason Sean Gagne-Keats

  • Patent number: 9826164
    Abstract: Disclosed is a marine environment display device for georeferencing an image stream of a marine environment captured by a camera. The marine environment display device may comprise an image receiver configured to receive an image of the image stream from the camera, a location receiver configured to receive an object location of an object, an image generator configured to generate, from the image, a projected image having location information associated with each of a plurality of points on the projected image and corresponding to a camera position and field of view of the camera, an object generator configured to generate an object indicator at a position on the projected image based on the object location and the location information, and a display configured to display the projected image and the object indicator at the position on the projected image.
    Type: Grant
    Filed: May 30, 2014
    Date of Patent: November 21, 2017
    Assignee: Furuno Electric Co., Ltd.
    Inventors: Brice Pryszo, Iker Pryszo, Mathieu Jacquinot
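Placing the object indicator amounts to projecting a georeferenced position into the camera's image. A horizontal-only toy sketch assuming a linear mapping across a known field of view (camera model, coordinates, and FOV are invented; real marine cameras would also need lens distortion and vertical handling):

```python
import math

# Hypothetical sketch: given the camera position, its heading, and its
# horizontal field of view, an object's location (local x/y metres) is
# mapped to the pixel column where the object indicator is drawn on
# the projected image.

def indicator_x(camera_pos, heading_deg, fov_deg, image_width, obj_pos):
    """Pixel column for the object, or None if outside the field of view."""
    dx = obj_pos[0] - camera_pos[0]
    dy = obj_pos[1] - camera_pos[1]
    bearing = math.degrees(math.atan2(dx, dy))          # 0 deg = +y axis
    # Signed angle between the object bearing and the camera heading.
    rel = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    if abs(rel) > fov_deg / 2.0:
        return None
    # Linear mapping across the image width (ignores lens distortion).
    return int(round((rel / fov_deg + 0.5) * (image_width - 1)))

x = indicator_x(camera_pos=(0.0, 0.0), heading_deg=0.0, fov_deg=60.0,
                image_width=641, obj_pos=(0.0, 100.0))
```

An object dead ahead lands at the image center; objects outside the field of view yield no indicator position.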
  • Patent number: 9823824
    Abstract: A user interface control for an image product creation application is used for adding user supplied text or graphic elements to an image product, wherein the user interface control is responsive to the position relative to a user supplied image, a recognized object within the user supplied image, or an image product related feature, wherein the user interface control provides an indication when the text or graphic elements are positioned proximal to the user supplied image, the recognized object, or the image product related feature, and wherein the user interface control modifies an attribute of the text or graphic elements when placed proximal to the user supplied image, the recognized object, or the image product related feature.
    Type: Grant
    Filed: August 13, 2014
    Date of Patent: November 21, 2017
    Assignee: KODAK ALARIS INC.
    Inventors: Stephen James Pasquarette, Joseph Anthony Manico