Abstract: Methods and apparatus relating to techniques for intelligent memory DVFS (Dynamic Voltage and Frequency Scaling) scheme exploiting graphics inter-frame level correlation are described. In an embodiment, collection logic collects bandwidth usage information by a system agent during performance of one or more operations associated with a first graphics workload. Memory stores the collected bandwidth usage information. The selection logic causes selection of an operating frequency for the system agent to perform a plurality of operations associated with one or more graphics workloads based at least on the stored collected bandwidth usage information. The one or more graphics workloads occur after the first graphics workload. Other embodiments are also disclosed and claimed.
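The frequency-selection step described above can be sketched briefly. The following is a minimal illustration only, assuming a hypothetical table of operating points and a linear bandwidth-per-MHz capacity model; `FREQ_STEPS_MHZ`, `select_frequency`, and `capacity_per_mhz` are invented names, not the patented logic:

```python
# Hypothetical sketch: pick an operating frequency for the system agent
# from bandwidth usage collected during a prior graphics workload,
# exploiting inter-frame correlation (demand in the next frames is
# assumed similar to the peak observed in past frames).

FREQ_STEPS_MHZ = [800, 1600, 2400, 3200]  # assumed available operating points

def select_frequency(bandwidth_history_gbps, capacity_per_mhz=0.004):
    """Return the lowest frequency whose assumed bandwidth capacity
    covers the peak demand observed in the stored history."""
    if not bandwidth_history_gbps:
        return FREQ_STEPS_MHZ[-1]  # no history yet: stay at max frequency
    peak = max(bandwidth_history_gbps)
    for f in FREQ_STEPS_MHZ:
        if f * capacity_per_mhz >= peak:
            return f
    return FREQ_STEPS_MHZ[-1]  # demand exceeds every step: run at max
```

With the assumed 0.004 GB/s-per-MHz model, a history peaking at 6 GB/s selects 1600 MHz rather than the maximum, saving power while covering demand.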
Abstract: A virtual reality system includes a head-mounted display (HMD) having one or more facial sensors and illumination sources mounted to a surface of the HMD. For example, the facial sensors are image capture devices coupled to a bottom side of the HMD. The illumination sources illuminate portions of a user's face outside of the HMD, while the facial sensors capture images of the illuminated portions of the user's face. A controller receives the captured images and generates a representation of the portions of the user's face by identifying landmarks of the user's face in the captured images and performing other suitable image processing methods. Based on the representation, the controller or another component of the virtual reality system generates content for presentation to the user.
Type: Grant
Filed: April 1, 2016
Date of Patent: June 16, 2020
Assignee: Facebook Technologies, LLC
Inventors: Dov Katz, Michael John Toksvig, Ziheng Wang, Timothy Paul Omernick, Torin Ross Herndon
Abstract: Systems and methods for correspondence estimation and flexible ground modeling include communicating two-dimensional (2D) images of an environment to a correspondence estimation module, including a first image and a second image captured by an image capturing device. First features, including geometric features and semantic features, are hierarchically extracted from the first image with a first convolutional neural network (CNN) according to activation map weights, and second features, including geometric features and semantic features, are hierarchically extracted from the second image with a second CNN according to the activation map weights. Correspondences between the first features and the second features are estimated, including hierarchical fusing of geometric correspondences and semantic correspondences. A 3-dimensional (3D) model of a terrain is estimated using the estimated correspondences belonging to the terrain surface.
Type: Grant
Filed: July 6, 2018
Date of Patent: June 9, 2020
Assignee: NEC Corporation
Inventors: Quoc-Huy Tran, Mohammed E. F. Salem, Muhammad Zeeshan Zia, Paul Vernaza, Manmohan Chandraker
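The correspondence-estimation step can be illustrated with a toy sketch. This is not the patent's learned hierarchical fusion; it substitutes a simple, well-known stand-in (mutual nearest neighbours under cosine similarity between extracted feature vectors), and `match_features` is an invented name:

```python
import numpy as np

def match_features(feats_a, feats_b):
    """Toy correspondence estimation: pair feature vectors from two
    images that are mutual nearest neighbours under cosine similarity."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    sim = a @ b.T                     # cosine similarity matrix
    ab = sim.argmax(axis=1)           # best match in B for each A feature
    ba = sim.argmax(axis=0)           # best match in A for each B feature
    # keep only mutual matches, a standard way to suppress outliers
    return [(i, int(j)) for i, j in enumerate(ab) if ba[j] == i]
```

In the patented system the similarity would instead come from hierarchically fused geometric and semantic CNN features, but the mutual-match filtering idea is the same kind of correspondence step.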
Abstract: An apparatus for outputting a content to a display, including a communicator configured to perform a data communication with the display; an input interface configured to receive an input content and metadata associated with the input content; and a processor configured to acquire image quality information applied to the content based on the metadata, to convert the input content into a converted content outputtable on the display by using content conversion information related to the acquired image quality information, and to control the communicator to output the converted content to the display.
Abstract: Systems and methods for processing operations for head mounted display (HMD) users to join virtual reality (VR) scenes are provided. A computer-implemented method includes providing a first perspective of a VR scene to a first HMD of a first user and receiving an indication that a second user is requesting to join the VR scene provided to the first HMD. The method further includes obtaining real-world position and orientation data of the second HMD relative to the first HMD and then providing, based on said data, a second perspective of the VR scene. The method also provides that the first and second perspectives are each controlled by respective position and orientation changes while viewing the VR scene.
Type: Grant
Filed: January 11, 2017
Date of Patent: May 19, 2020
Assignee: Sony Interactive Entertainment Inc.
Inventors: Steven Osman, Javier Fernandez Rico, Ruxin Chen
Abstract: The technology disclosed relates to user interfaces for controlling augmented reality (AR) or virtual reality (VR) environments. Real and virtual objects can be seamlessly integrated to form an augmented reality by tracking motion of one or more real objects within view of a wearable sensor system. A user's need to switch the AR/VR presentation on or off in order to interact with the surrounding real world, for example to drink some soda, can be addressed with a convenient mode-switching gesture associated with switching between operational modes in a VR/AR enabled device.
Abstract: A system that displays geographic data is disclosed. During operation, the system receives a query to be processed, wherein the query is associated with a set of geographic regions. Next, the system uses a late-binding schema generated from the query to retrieve a set of data points from a set of events containing previously gathered data. Then, for each data point in a set of data points, the system identifies zero or more geographic regions in the set of geographic regions that the data point falls into. Finally, the system displays the set of geographic regions, wherein each polygon that defines a geographic region is marked to indicate a number of data points that fall into the polygon.
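The per-point region test described above is classically done with a ray-casting point-in-polygon check. A minimal sketch, assuming simple (x, y) polygons and invented names (`point_in_polygon`, `count_points_per_region`); the patented system's late-binding schema and event retrieval are not modeled here:

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: a point is inside if a horizontal ray from it
    crosses the polygon's edges an odd number of times."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):              # edge spans the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                   # crossing is to the right
                inside = not inside
    return inside

def count_points_per_region(points, regions):
    """Mark each region polygon with the number of data points inside it."""
    return {name: sum(point_in_polygon(p, poly) for p in points)
            for name, poly in regions.items()}
```

The resulting per-polygon counts are what would drive the marking (e.g. choropleth shading) of each displayed region.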
Abstract: A method and an apparatus for presenting a panoramic photo in a mobile terminal, and a mobile terminal, are disclosed. A trigger instruction that is used to instruct the mobile terminal to enter an immersive browsing mode is detected, where the immersive browsing mode is a browsing mode in which a panoramic photo moves as the mobile terminal rotates; and if the trigger instruction is detected, a rotation angle of the mobile terminal is detected and determined, and a panoramic photo that is presented in a normal mode in the mobile terminal is moved and presented according to the determined rotation angle. Using the present disclosure, complexity of panoramic photo browsing can be reduced, which makes it convenient for a user to browse a panoramic photo.
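The rotation-to-presentation mapping can be sketched in a few lines. This is a simplified assumption of how such an immersive mode might map a detected rotation angle to a pan offset into a 360° panorama; `pan_offset` is an invented name:

```python
def pan_offset(rotation_deg, photo_width_px, fov_deg=360.0):
    """Map the terminal's detected rotation angle to a horizontal pixel
    offset into the panoramic photo, wrapping at the photo's edge so a
    full physical turn scrolls through the full panorama."""
    return int(rotation_deg / fov_deg * photo_width_px) % photo_width_px
```

For a 3600-pixel-wide panorama, rotating the terminal 90° pans a quarter of the image; negative rotation wraps around the opposite edge.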
Abstract: Various embodiments of the disclosure pertain to an augmented or virtual reality interface for interacting with maps displayed from a virtual camera perspective on a mobile device. Instead of manipulating the position of the virtual camera using a touchscreen interface, some embodiments allow a spatial location of the mobile device to control the position of the virtual camera. For example, a user can tilt the mobile device to obtain different angles of the virtual camera. As another example, the user can move the mobile device vertically to change the height of the virtual camera, e.g., a higher altitude above the ground.
Type: Grant
Filed: May 16, 2018
Date of Patent: May 5, 2020
Assignee: Apple Inc.
Inventors: Nathan L. Fillhardt, Adrian P. Lindberg, Vincent P. Arroyo, Justin M. Strawn
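The device-pose-to-virtual-camera idea above admits a small sketch. The mapping below is purely illustrative (the names `camera_from_device`, the clamp range, and the altitude scaling are assumptions, not Apple's implementation):

```python
def camera_from_device(device_tilt_deg, device_height_m, base_altitude_m=100.0):
    """Hypothetical mapping from device pose to virtual-camera parameters:
    tilting the device controls the camera's pitch, and moving the device
    vertically scales the camera's altitude above the map."""
    pitch = max(0.0, min(90.0, device_tilt_deg))      # clamp to a look-down range
    altitude = base_altitude_m * (1.0 + device_height_m)  # raise device, rise camera
    return {"pitch_deg": pitch, "altitude_m": altitude}
```

A tilt of 45° yields a 45° camera pitch, and lifting the device half a metre raises the virtual camera from 100 m to 150 m under the assumed scaling.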
Abstract: Systems and methods are disclosed herein for a sensory compensation device including a position and orientation sensor arranged to generate position and orientation data based on one or more of detected velocity, angular rate, gravity, motion, position and orientation associated with the device. The device also optionally includes an optical sensor arranged to capture real-time images and generate real-time image data of an area adjacent to the device. The device includes a processor arranged to: i) optionally receive the real-time image data, ii) receive the position and orientation data and iii) generate compensated image data based on the real-time image data and the position and orientation data. Furthermore, the device includes a display arranged to display compensated images derived from the compensated image data where a portion of the compensated images includes the captured real-time images, if captured, with adjusted positions and orientations in relation to the captured real-time images.
Abstract: A correction coefficient calculation unit including a reference image output unit that generates a reference image, a profile receiving unit that receives image information and printing information, the image information indicating a color characteristic related to an image to be displayed on a display unit, the printing information indicating a color characteristic related to a printing device, and a reference image division unit that creates printing characteristic data that associates the reference image with the reference image that has undergone color conversion based on the printing information and the image information.
Abstract: The present invention relates to a joint automatic audio-visual driven facial animation system that in some example embodiments includes a full-scale, state-of-the-art Large Vocabulary Continuous Speech Recognition (LVCSR) system with a strong language model for speech recognition, with phoneme alignment obtained from the word lattice.
Abstract: A graphics processing method and apparatus includes determining locations of primitives in a 3-dimensional (3D) space from graphics data for the 3D space in a memory, determining sizes of the primitives, generating Morton codes comprising a piece of information indicating the locations of the primitives and a piece of information indicating the sizes of the primitives, classifying the primitives into bounding boxes using the piece of information indicating the sizes of the primitives, and generating an acceleration structure indicating an inclusion relationship between the bounding boxes.
Type: Grant
Filed: January 11, 2017
Date of Patent: March 3, 2020
Assignees: SAMSUNG ELECTRONICS CO., LTD., CZECH TECHNICAL UNIVERSITY FACULTY OF ELECTRICAL ENGINEERING
Inventors: Vlastimil Havran, Marek Vinkler, Jiri Bittner, Wonjong Lee
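The Morton-code construction can be sketched concretely. Below, location bits come from the classical interleaving of quantised centroid coordinates, and a size level is prefixed so that similarly sized primitives cluster; the exact bit layout is an assumption for illustration, and `interleave_bits` / `primitive_code` are invented names:

```python
def interleave_bits(x, y, z, bits=10):
    """Interleave the low bits of three integer coordinates into a
    standard 3D Morton (Z-order) code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

def primitive_code(centroid, size_level, bits=10):
    """Extended Morton code sketch: location information from the
    quantised centroid (in [0, 1)^3), prefixed with a size level so
    sorting groups primitives of similar size into bounding boxes."""
    x, y, z = (min(int(c * (1 << bits)), (1 << bits) - 1) for c in centroid)
    return (size_level << (3 * bits)) | interleave_bits(x, y, z, bits)
```

Sorting primitives by such codes and splitting runs of equal high bits is the usual way an acceleration structure's bounding-box hierarchy is then built.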
Abstract: Systems, methods, and non-transitory computer-readable media can determine at least one salient point of interest in a frame of a content item based at least in part on a saliency prediction model, the saliency prediction model being trained to identify salient points of interest that appear in content items; determine a barrel projection representation for the frame; and apply a view-based projection to the barrel projection representation for the frame, wherein the view-based projection enhances a quality in which a region corresponding to the at least one salient point of interest is presented.
Type: Grant
Filed: April 13, 2018
Date of Patent: March 3, 2020
Assignee: Facebook, Inc.
Inventors: Evgeny V. Kuzyakov, Renbin Peng, Chien-Nan Chen
Abstract: A content visualizing device and method are provided that adjust content based on a distance to an object, so as to maintain a projection plane and prevent an overlap with the object in front.
Abstract: Concepts and technologies for adaptive cloud offloading of mobile augmented reality are provided herein. In an embodiment, a method can include receiving, by an augmented reality system, an acquired image frame captured by an acquisition device. The acquired image frame can indicate a plurality of acquired image frame parameters. The method can include determining, by the augmented reality system, a plurality of augmented reality process instances. The method can include determining a plurality of local feature extraction time estimates based on the plurality of acquired image frame parameters, where a local feature extraction time estimate is created for each of the plurality of the augmented reality process instances. The method can include obtaining a network condition measurement, and generating a plurality of offload commands based on the network condition measurement and at least one of the plurality of local feature extraction time estimates.
Type: Grant
Filed: December 21, 2017
Date of Patent: February 18, 2020
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Bo Han, Vijay Gopalakrishnan, Shuai Hao
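The offload-command generation can be sketched as a simple cost comparison. The time model below (local extraction time proportional to pixel count, fixed cloud compute time) and the names `make_offload_commands` / `rtt_threshold_ms` are assumptions for illustration, not AT&T's actual estimator:

```python
def make_offload_commands(frames, network_rtt_ms, rtt_threshold_ms=50.0):
    """For each acquired image frame, emit an offload command only when
    the network is fast enough that cloud feature extraction is expected
    to beat the local feature-extraction time estimate."""
    commands = []
    for frame in frames:
        # crude local-time estimate from frame parameters (assumed model)
        local_ms = frame["width"] * frame["height"] / 20000.0
        remote_ms = network_rtt_ms + 5.0  # assumed fixed cloud compute time
        commands.append("OFFLOAD" if remote_ms < local_ms and
                        network_rtt_ms < rtt_threshold_ms else "LOCAL")
    return commands
```

Under this model a 1080p frame is offloaded on a 30 ms link but processed locally on a 200 ms link, which is the adaptive behavior the abstract describes.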
Abstract: A medical image diagnostic apparatus includes acquisition circuitry configured to acquire data concerning the interior of an object, reconstruction circuitry configured to reconstruct a first image of the object based on the acquired data, a display, display control circuitry, and enlargement reconstruction circuitry configured to enlarge and reconstruct a second image corresponding to a set enlargement reconstruction range based on a portion of the acquired data, with the second image being displayed in a display area of the display in place of a portion of the first image under the control of the display control circuitry.
Abstract: Embodiments provide for an apparatus including one or more processors having logic to enumerate a directed path through nodes of a directed acyclic graph, the logic to determine a key for a node and a path identifier for a directed path between nodes of the directed acyclic graph.
Abstract: Systems and methods are described for a media guidance application (e.g., implemented on a user device) that allows users to select any arbitrary position in a virtual reality environment from where to view the virtual reality content and changes a user's perspective based on the selected position.
Type: Grant
Filed: December 28, 2017
Date of Patent: January 14, 2020
Assignee: ROVI GUIDES, INC.
Inventors: Jonathan A. Logan, Adam Bates, Hafiza Jameela, Jesse F. Patterson, Mark K. Berner, Eric Dorsey, David W. Chamberlin, Paul Stevens, Herbert A. Waterman
Abstract: A computer-implemented method and system are described for augmenting image data of an object in an image, the method comprising receiving captured image data from a camera, storing a plurality of augmentation image data defining a respective plurality of augmentation values to be applied to the captured image data, storing a plurality of augmentation representations, each representation identifying a respective portion of augmentation image data, selecting one of said augmentation image data and one of said augmentation representations based on at least one colourisation parameter, determining a portion of the augmentation image data to be applied based on the selected augmentation representation, augmenting the captured image data by applying said determined portion of the augmentation image data to the corresponding portion of the captured image data, and outputting the augmented captured image data.
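The final augmentation step above can be sketched as masked blending: the selected augmentation representation determines which portion of the augmentation values is applied to the captured image. A minimal pure-Python illustration on nested lists (one value per pixel; `augment_image` is an invented name, and real systems would operate on per-channel arrays):

```python
def augment_image(image, augmentation, mask):
    """Apply augmentation values only where the augmentation
    representation (a binary mask) selects a portion of the image;
    unselected pixels pass through unchanged."""
    return [[img + aug if m else img
             for img, aug, m in zip(irow, arow, mrow)]
            for irow, arow, mrow in zip(image, augmentation, mask)]
```

Here the mask plays the role of the "augmentation representation identifying a respective portion of augmentation image data", and the additive blend stands in for whatever colourisation operation the selected augmentation values define.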