Abstract: The disclosure relates to a method and a computer apparatus for processing the border of a computer figure and merging the figure into a background image. In the method, a user interface is provided for the user to operate the computer figure through a touch screen. The computer figure is configured to be merged into a background image or a specific image object. When the computer figure is moved and combined with the background image, it is processed by a border-treatment algorithm. The border treatment takes the graphic information of the background image into consideration, allowing the computer figure to blend smoothly into the background when the images are combined. Therefore, a visual effect of fusing the images can be achieved.
Abstract: A head mounted display (HMD) in a VR system includes sensors for tracking the eyes and face of a user wearing the HMD. The VR system records calibration attributes such as landmarks of the face of the user. Light sources illuminate portions of the user's face covered by the HMD. In conjunction, facial sensors capture facial data. The VR system analyzes the facial data to determine the orientation of planar sections of the illuminated portions of the face. The VR system aggregates planar sections of the face and maps the planar sections to landmarks of the face to generate a facial animation of the user, which can also include eye orientation information. The facial animation is represented as a virtual avatar and presented to the user.
Type:
Grant
Filed:
June 3, 2016
Date of Patent:
May 1, 2018
Assignee:
Oculus VR, LLC
Inventors:
Dov Katz, Michael John Toksvig, Ziheng Wang, Timothy Paul Omernick, Torin Ross Herndon
Abstract: A waveguide apparatus includes a planar waveguide and at least one diffractive optical element (DOE) that provides a plurality of optical paths between an exterior and interior of the planar waveguide. A phase profile of the DOE may combine a linear diffraction grating with a circular lens, to shape a wave front and produce beams with desired focus. Waveguide apparatuses may be assembled to create multiple focal planes. The DOE may have a low diffraction efficiency, and planar waveguides may be transparent when viewed normally, allowing passage of light from an ambient environment (e.g., real world) useful in AR systems. Light may be returned for temporally sequential passes through the planar waveguide. The DOE(s) may be fixed or may have dynamically adjustable characteristics. An optical coupler system may couple images to the waveguide apparatus from a projector, for instance a biaxially scanning cantilevered optical fiber tip.
Type:
Grant
Filed:
May 5, 2015
Date of Patent:
April 24, 2018
Assignee:
Magic Leap, Inc.
Inventors:
Rony Abovitz, Brian T. Schowengerdt, Mathew D. Watson
Abstract: Technologies for adjusting a perspective of a captured image for display on a mobile computing device include capturing a first image of a user by a first camera and a second image of a real-world environment by a second camera. The mobile computing device determines a position of an eye of the user relative to the mobile computing device based on the first captured image and a distance of an object in the real-world environment from the mobile computing device based on the second captured image. The mobile computing device generates a back projection of the real-world environment captured by the second camera to the display based on the determined distance of the object in the real-world environment relative to the mobile computing device, the determined position of the user's eye relative to the mobile computing device, and at least one device parameter of the mobile computing device.
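The geometry underlying the back projection in the abstract above can be sketched with a simple ray-plane intersection. This is an illustrative assumption, not the patented implementation: the display is modeled as the plane z = 0, the user's eye sits in front of it, the real-world object behind it, and the drawing point is where the eye-to-object ray crosses the display.

```python
# Hypothetical sketch of the back-projection step: given the eye position
# relative to the display plane and an object's position in the real world,
# find the point on the display where the object should be drawn so it lines
# up with the user's line of sight. Coordinates and geometry are illustrative.

def back_project(eye, obj):
    """Intersect the eye->object ray with the display plane z = 0.

    `eye` and `obj` are (x, y, z) tuples in device coordinates, with the
    display in the z = 0 plane, the eye at z > 0 (in front of the screen)
    and the object at z < 0 (behind it).
    """
    ex, ey, ez = eye
    ox, oy, oz = obj
    t = ez / (ez - oz)  # parameter where the ray crosses z = 0
    return (ex + t * (ox - ex), ey + t * (oy - ey))

# Eye 0.3 m in front of the screen, object 1.2 m behind it, offset sideways.
print(back_project((0.0, 0.0, 0.3), (0.5, 0.0, -1.2)))  # → approximately (0.1, 0.0)
```

In the patent's terms, the eye position would come from the first (user-facing) camera and the object distance from the second (world-facing) camera, with device parameters fixing the display's coordinate frame.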
Abstract: A system and method for generating a dynamic sensor array for an augmented reality system is described. A head mounted device includes one or more sensors, an augmented reality (AR) application, and a sensor array module. The sensor array module identifies available sensors from other head mounted devices that are geographically located within a predefined area. A dynamic sensor array is formed based on the available sensors and the one or more sensors. The dynamic sensor array is updated based on an operational status of the available sensors and the one or more sensors. The AR application generates AR content based on data from the dynamic sensor array. A display of the head mounted device displays the AR content.
Abstract: A system for displaying time-based events on a time line is described. A first timeline unit (1) displays a first timeline showing a first plurality of events within a first time segment (3) bounded by a first begin time and a first end time. A second timeline unit (2) displays a second timeline showing a second plurality of events within a second time segment (4) bounded by a second begin time and a second end time, wherein the first timeline and the second timeline are displayed in the same scale. An interaction unit (5) enables a user to indicate a change to the displayed time segments (3, 4). A time segment updater (6) determines an updated first time segment (3) and an updated second time segment (4) based on the indicated change, keeping the scale of the first timeline equal to the scale of the second timeline, and keeping an offset between the first time segment (3) and the second time segment (4) constant.
Type:
Grant
Filed:
May 22, 2012
Date of Patent:
March 27, 2018
Assignee:
Koninklijke Philips N.V.
Inventors:
Stewart Anderson Higgins, Alexander Adrianus Martinus Verbeek, Eric Christiaan Sluiters
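The time-segment update in the Philips timeline abstract above can be sketched as a shift applied uniformly to both segments. The tuple representation and the shift-only interaction are assumptions for illustration; the point is the invariant the abstract states: equal scale (segment length) for both timelines and a constant offset between them.

```python
# Minimal sketch of the time-segment updater: when the user indicates a
# change, both (begin, end) segments move by the same amount, which by
# construction preserves the shared scale and the offset between segments.

def update_segments(seg1, seg2, shift):
    """Shift both (begin, end) time segments by `shift`, keeping the scale
    of the two timelines equal and their mutual offset constant."""
    b1, e1 = seg1
    b2, e2 = seg2
    return (b1 + shift, e1 + shift), (b2 + shift, e2 + shift)

s1, s2 = update_segments((0, 60), (100, 160), shift=30)
print(s1, s2)  # → (30, 90) (130, 190)
```

Both segments remain 60 units long and 100 units apart after the update, matching the constraints described in the abstract.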
Abstract: A method for processing information in a portable terminal is provided. The method includes displaying content, displaying a clip area on the content when a clip touch interaction is detected, correcting the clip area by analyzing a pattern and/or information of the clip area, and storing information of the corrected clip area in a clipboard.
Type:
Grant
Filed:
July 9, 2014
Date of Patent:
March 20, 2018
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Heejin Kim, Sihak Jang, Sanghyuk Koh, Bohyun Sim, Hyemi Lee
Abstract: Augmented reality (AR) systems, methods, and instrumentalities are disclosed. A user's gaze point may be estimated and may be used to search for and present information, e.g., information relating to areas on which the user is focusing. The user's gaze point may be used to facilitate or enable modes of interactivity and/or user interfaces that may be controlled by the direction of view of the user. Biometric techniques may be used to estimate an emotional state of the user. This estimated emotional state may be used to adapt the information that is presented to the user.
Type:
Grant
Filed:
October 10, 2014
Date of Patent:
March 20, 2018
Assignee:
InterDigital Patent Holdings, Inc.
Inventors:
Eduardo Asbun, Yuriy Reznik, Ariela Zeira, Gregory S. Sternberg, Ralph Neff
Abstract: An electronic apparatus comprises a first display and a controller. The controller is configured to switch between a first display mode, in which the first display displays a first image, and a second display mode, in which the first display displays a second image. The controller is configured to cause the first display, in the second display mode, to superimpose first input information input to the electronic apparatus or to an apparatus other than the electronic apparatus onto the second image. When the display mode of the first display is switched from the second display mode to the first display mode, the controller is configured to cause the first display to superimpose the first input information, previously superimposed on the second image, onto an image in the first image corresponding to the second image.
Abstract: A perceived speed of motion associated with a user may be changed by determining a speed and a direction of the motion associated with the user within a reference frame. Augmented reality content may be determined for presentation on a display. A direction and a speed of motion associated with the augmented reality content within the reference frame may be determined. The direction of motion associated with the augmented reality content may oppose the direction of motion associated with the user. The speed of motion associated with the augmented reality content may increase as the speed of motion associated with the user increases. The augmented reality content may be presented such that the augmented reality content appears to move toward and past a position associated with the user at the perceived speed that is faster than the speed of motion associated with the user.
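The speed relationships in the abstract above can be sketched with an assumed linear gain: AR content is given a velocity opposing the user's direction, scaled so the content's apparent speed relative to the user exceeds the user's actual speed. The gain factor is an illustrative parameter, not something specified in the patent.

```python
# Sketch of the perceived-speed idea: content moves against the user's
# direction of motion, and its speed grows with the user's speed, so the
# content appears to pass the user faster than the user is actually moving.

def content_velocity(user_velocity, gain=0.5):
    """Velocity to apply to AR content; opposes the user's direction."""
    return tuple(-gain * v for v in user_velocity)

def perceived_speed(user_speed, gain=0.5):
    """Apparent speed of the content relative to the user: the user's own
    speed plus the opposing content speed."""
    return user_speed * (1.0 + gain)

print(content_velocity((2.0, 0.0)))  # → (-1.0, -0.0)
print(perceived_speed(2.0))          # → 3.0
```

With this toy model, a user moving at 2 m/s perceives content streaming past at 3 m/s, and the perceived speed rises monotonically with the user's speed, as the abstract requires.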
Abstract: An information processing apparatus includes a first acquisition section that acquires image data from each of terminal devices, the image data representing an image displayed on the terminal device, a display control section that displays, based on the acquired image data, a list of the images displayed on the terminal devices, a second acquisition section that acquires from one terminal device of the terminal devices a signal notifying that the image displayed on the one terminal device has been updated, a determination section that determines, in accordance with an acquisition order of the signal from the one terminal device among the terminal devices, the size of the image displayed on the one terminal device when the image is displayed in the list, and an update section that updates the list in accordance with the determined size.
Abstract: In a general aspect, a system can include a leader device. The leader device can include an interface configured to display a plurality of preview images, each preview image corresponding with respective virtual reality (VR) content. The leader device also includes a selection device configured to select a preview image of the plurality of preview images and a leader application configured to control presentation of the respective VR content associated with the selected preview image. The system can further include a plurality of participant devices that are operationally coupled with the leader device. Each participant device of the plurality of participant devices can be configured to, responsive to the leader device, display at least one image included in the respective VR content corresponding with the selected preview image.
Type:
Grant
Filed:
May 26, 2016
Date of Patent:
March 6, 2018
Assignee:
Google LLC
Inventors:
Andrey Doronichev, Ben Schrom, Antonio Bernardo Monteiro Costa, Ana Krulec, Jon Bedard, Jennifer Holland, David Louis Bergman Quaid, Yunxin Zheng
Abstract: An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor to communicate with one or more individual augmented reality display systems and pass a portion of the passable world model data to them, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems.
Abstract: An apparatus for automatically suggesting information layers in augmented reality may include a processor and memory storing executable computer program code that cause the apparatus to at least perform operations including providing layers of information relating to virtual information corresponding to information indicating a current location of the apparatus. The computer program code may further cause the apparatus to determine that a layer(s) of information is enabled to provide virtual information for display. The virtual information corresponds to locations of real world objects in or proximate to the current location. The computer program code may further cause the apparatus to determine other information layers associated with content for the current location, based on the number of items of virtual information for the enabled layer being below a threshold, and to automatically suggest one or more other layers of information for selection.
Abstract: Methods and systems for enabling an augmented reality character to maintain and exhibit awareness of an observer are provided. A portable device held by a user is utilized to capture an image stream of a real environment, and generate an augmented reality image stream which includes a virtual character. The augmented reality image stream is displayed on the portable device to the user. As the user maneuvers the portable device, its position and movement are continuously tracked. The virtual character is configured to demonstrate awareness of the user by, for example, adjusting its gaze so as to look in the direction of the portable device.
Type:
Grant
Filed:
December 8, 2010
Date of Patent:
February 27, 2018
Assignee:
Sony Interactive Entertainment America LLC
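The "awareness" behavior in the Sony entry above, where the virtual character adjusts its gaze to look toward the tracked portable device, can be sketched as a simple angle computation. The 2-D yaw-only model and the coordinate layout are assumptions for illustration.

```python
import math

# Hypothetical sketch: point the virtual character's gaze at the tracked
# position of the portable device by computing the yaw angle between them.
# Positions are (x, z) ground-plane coordinates; yaw 0 faces the +z axis.

def gaze_yaw(character_pos, device_pos):
    """Yaw angle (radians) that turns the character to face the device."""
    dx = device_pos[0] - character_pos[0]
    dz = device_pos[1] - character_pos[1]
    return math.atan2(dx, dz)

# Device diagonally ahead-right of the character: a 45-degree turn.
print(round(gaze_yaw((0.0, 0.0), (1.0, 1.0)), 4))  # → 0.7854
```

In the described system, `device_pos` would be refreshed continuously from the tracked position of the portable device, so the character's gaze follows the user as they move.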
Abstract: In a graphics system, regions of a frame are analyzed to determine local regions of the frame in which adaptive desampling may be performed. In one implementation, a standard sampling scheme includes at least one sample per pixel, and regions that are adaptively desampled have one sample for a block of pixels having a size of at least four pixels. A level of detail map is generated to identify regions in which desampling may be performed. The level of detail map may be based on detecting motion, detecting an edge, and detecting a content frequency.
Type:
Grant
Filed:
June 17, 2015
Date of Patent:
February 27, 2018
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Christopher T. Cheng, Liangjun Zhang, Santosh Abraham, Ki Fung Chow
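The level-of-detail decision in the Samsung graphics entry above can be sketched per block: a block gets one shared sample when it is "flat" (no detected motion, edge, or high-frequency content) and full per-pixel sampling otherwise. The block size and the flag-based test are assumptions for this sketch, not the claimed implementation.

```python
# Illustrative sketch of building a level-of-detail map for adaptive
# desampling: detailed blocks keep one sample per pixel, flat blocks share
# a single sample across the whole block.

BLOCK = 4  # adaptively desampled blocks cover at least four pixels

def samples_for_block(has_motion, has_edge, high_frequency):
    """Samples per block: full sampling for detailed regions, a single
    shared sample for flat regions."""
    if has_motion or has_edge or high_frequency:
        return BLOCK  # one sample per pixel in the block
    return 1          # one sample shared by the whole block

def lod_map(blocks):
    """blocks: list of (motion, edge, frequency) detection flags per block."""
    return [samples_for_block(*b) for b in blocks]

print(lod_map([(False, False, False), (True, False, False), (False, True, True)]))
# → [1, 4, 4]
```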
Abstract: Systems, methods, and computer program products for identifying objects of interest and providing relevant information about the objects of interest using augmented reality devices are disclosed. For example, a computer-implemented method may include identifying an object of interest among a plurality of objects present in an image view, determining real-time information for the object of interest based on the identifying, presenting the determined real-time information for the object of interest as part of the image view, and processing a transaction involving the object of interest based on a user selection associated with the image view.
Abstract: An image processing program for causing a computer to execute: a procedure for obtaining a first image; a procedure for obtaining a second image; a procedure for changing at least a part of the second image in accordance with a signal; and a procedure for processing the first image using the changed second image. The second image is at least one of a handwritten image, a letter image, and a selected image, and the signal is used in a process that splashes the second image within the procedure that changes at least a part of the second image.
Abstract: A method of adjusting a field of view of a user for a head mounted display (HMD) includes defining a virtual camera for specifying an image of the field of view in a virtual space. In response to updating an image of the field of view by changing a position of the virtual camera without synchronization with the movement of the HMD, the position of the virtual camera is moved in a movement direction of a player object to move the player object into a predetermined range of the field of view. The field of view is sectioned into a non-tracking region defined by the predetermined range, a near tracking region, and a far tracking region. A relief region lies between the non-tracking region and the far tracking region. Movement of the player object is based on a location of the player object within the field-of-view region.
Abstract: A simulated mirrored display is disclosed. A camera captures and transmits an image to an electronic device. The electronic device alters the image to simulate material characteristics such as color, texture, opacity, and reflectivity of a manufactured good to be simulated. Additionally, images of components of the manufactured good may be superimposed on the image and likewise modified. The altered image is displayed on an electronic display.