Abstract: An image display method for a portable display device to be implemented by a processing module includes: controlling the portable display device to display an image in a default position within a display area; estimating a displacement distance of the portable display device during an (X+1)th unit time period, based at least on a number (N) of displacement distances of the portable display device respectively during (N) number of immediately previous unit time periods or on accelerations of the portable display device respectively associated with (X-1)th and (X)th unit time periods; and controlling the portable display device to shift the image with respect to the display area based on the displacement distance estimated thereby.
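One plausible reading of the estimation step is averaging the N most recent per-period displacements, optionally refined by the change in acceleration between the two most recent periods. The sketch below is an illustrative assumption, not the patented algorithm; the function name and the averaging/extrapolation rule are hypothetical.

```python
def estimate_displacement(prev_displacements, accelerations=None, dt=1.0):
    """Estimate the displacement for the (X+1)th unit time period.

    prev_displacements: distances moved in the N most recent unit periods.
    accelerations: optional (a_prev, a_curr) pair for the (X-1)th and
        (X)th periods; when given, a constant-jerk extrapolation refines
        the running-average estimate.
    """
    n = len(prev_displacements)
    estimate = sum(prev_displacements) / n  # average of the N previous periods
    if accelerations is not None:
        a_prev, a_curr = accelerations
        # extrapolate assuming the acceleration trend continues for one period
        estimate += 0.5 * (a_curr - a_prev) * dt * dt
    return estimate
```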
Abstract: Techniques for generating and presenting images of items within user-selected context images are presented herein. In an example embodiment, an access module can be configured to receive a first environment image. A simulation module coupled to the access module may process the environment image to identify placement areas within the image, and an imaging module may merge an item image with the environment image and filter the merged image in an erosion area. In various embodiments, the items and environments may be selected by a user and presented to a user in real-time or near-real-time as part of an online shopping experience. In further embodiments, the environments may be processed from images taken by a device of the user.
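The merge step described above can be illustrated with per-pixel alpha compositing, where the item pixel is blended over the environment pixel in the identified placement area. This is a minimal sketch under that assumption; the patent's actual merging and erosion filtering are not specified here.

```python
def merge_item_into_environment(env_pixel, item_pixel, alpha):
    """Blend one item pixel over an environment pixel (alpha compositing).

    alpha = 1.0 keeps the item pixel, 0.0 keeps the environment; values
    in between feather the item's edges into the placement area.
    """
    return tuple(alpha * i + (1.0 - alpha) * e for i, e in zip(item_pixel, env_pixel))
```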
Type:
Grant
Filed:
December 22, 2014
Date of Patent:
May 29, 2018
Assignee:
eBay Inc.
Inventors:
Mihir Naware, Jatin Chhugani, Jonathan Su
Abstract: A method, including generating a three-dimensional (3D) map as a plurality of points illustrating a characteristic of a 3D heart chamber, the 3D heart chamber having an opening bounded by a perimeter. The method further includes transforming the perimeter into a closed two-dimensional (2D) figure having an interior. The plurality of points illustrating the characteristic are projected onto the interior of the 2D figure so as to generate a 2D map of the characteristic of the 3D heart chamber.
Abstract: The methods, systems, techniques, and components described herein allow interactions with virtual elements in a virtual environment, such as a Virtual Reality (VR) environment or Augmented Reality (AR) environment, to be modeled accurately. More particularly, the methods, systems, techniques, and components described herein allow a first virtual element to move within the virtual environment based on an anchor relationship between the first virtual element and a second virtual element. The anchor relationship may define an equilibrium position for the first virtual element. The equilibrium position may define a return position for the first virtual element with respect to the second virtual element. Responsive to the first virtual element being displaced from the equilibrium position, the virtual element may move towards the equilibrium position.
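The return-to-equilibrium behavior can be sketched as a simple per-frame relaxation: the equilibrium is the anchor's position plus a fixed offset, and each update closes a fraction of the remaining displacement. The function name, the offset representation, and the linear relaxation rule are assumptions for illustration, not the patented model.

```python
def step_toward_equilibrium(position, anchor_position, offset, stiffness=0.2):
    """Move a displaced virtual element one frame toward its equilibrium.

    The equilibrium position is defined relative to the anchor element
    (anchor position plus a fixed offset). Each call closes a fraction
    (stiffness) of the remaining displacement, so the element converges
    smoothly back to its return position.
    """
    equilibrium = tuple(a + o for a, o in zip(anchor_position, offset))
    return tuple(p + stiffness * (e - p) for p, e in zip(position, equilibrium))
```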
Abstract: The disclosed embodiments provide a system that configures a graphics-processing unit (GPU) in a computer system. During operation, the system predicts an incoming workload to the GPU. Next, the system identifies an operational floor for the GPU based on the incoming workload. Finally, the system uses the operational floor to configure the subsequent execution of the GPU, wherein the operational floor facilitates processing of the incoming workload by the GPU.
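One simple way to picture the "operational floor" idea is a lookup from a predicted workload class to a minimum clock the GPU should not drop below. The workload classes and clock values below are hypothetical; the patent does not specify them.

```python
# Hypothetical workload classes mapped to minimum GPU clock floors (MHz).
OPERATIONAL_FLOORS = {"idle": 300, "ui": 600, "video": 900, "game": 1400}

def configure_gpu_floor(predicted_workload):
    """Pick an operational floor for the predicted incoming workload.

    Returns the minimum clock the GPU should maintain while the workload
    runs; unrecognized workloads fall back to the lowest floor.
    """
    return OPERATIONAL_FLOORS.get(predicted_workload, OPERATIONAL_FLOORS["idle"])
```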
Abstract: A method of creating a computer-generated animation uses a graphical user interface including a two-dimensional array of cells. The array has a plurality of rows associated with visual characteristics of a computer-generated character and a plurality of columns associated with frames of the animation. The array includes a first cell associated with a first visual characteristic and a first frame. A first view of the array is displayed in which the first cell has a first width and includes a key frame indicator that indicates that a designated value is associated with the first visual characteristic for the first frame. A second view is displayed in which the first cell has a second width and includes an element value indicator. The second width is greater than the first width, and the element value indicator represents the value associated with the first visual characteristic.
Type:
Grant
Filed:
June 11, 2015
Date of Patent:
May 15, 2018
Assignee:
DreamWorks Animation L.L.C.
Inventors:
Michael Babcock, Fredrik Nilsson, Matthew Christopher Gong
Abstract: From a stored panorama moving image, panorama images are read and sequentially acquired every predetermined time for reproduction on a display device, each of the panorama images being a frame of the panorama moving image. A range to be displayed in a first display area is set in each of the acquired panorama images. A range to be displayed in a second display area is set in each of the acquired panorama images. The respective ranges of the acquired panorama images which are set to be displayed in the first display area are displayed in the first display area. The respective ranges of the acquired panorama images which are set to be displayed in the second display area are displayed in the second display area.
Abstract: An image generating apparatus and a display device for a layered display scheme based on a location of an eye of a user are provided, wherein the image generating apparatus may generate layer images for a three-dimensional (3D) image based on information related to pixels matched based on the location of the eye of the user.
Abstract: A display device may include: a plurality of layers configured to modulate pixel values in two directions; an obtaining unit configured to obtain matching information about matching of pixels belonging to differing layers; and/or a controller configured to control the plurality of layers based on the matching information. An image creating method may include: obtaining a target light field; obtaining a projection matrix corresponding to a viewpoint of a user; and/or creating a plurality of layer images for a plurality of layers configured to modulate pixel values in two directions based on the target light field and the projection matrix.
Type:
Grant
Filed:
August 7, 2014
Date of Patent:
April 10, 2018
Assignees:
Samsung Electronics Co., Ltd., Inha-Industry Partnership Institute
Inventors:
Ju Yong Park, Jae Hyeung Park, Na Young Jo, Dong-kyung Nam, Seok Lee
Abstract: An image processing apparatus includes an image information transmission unit, a color information acquisition unit, and a conversion relationship creation unit. The image information transmission unit transmits, to a display device, pieces of color-conversion image information representing images used for performing color conversion for the display device, in ascending order of lightness of the images in a predetermined color space. The color information acquisition unit acquires color information of each image that is displayed on the display device in accordance with a corresponding piece of color-conversion image information among the pieces of color-conversion image information that have been transmitted by the image information transmission unit. The conversion relationship creation unit creates, on the basis of the color information that has been acquired by the color information acquisition unit, a conversion relationship for a color of an image to be displayed on the display device.
Abstract: In one example, a method for processing video data includes receiving, by a sink device and from a source device, one or more graphical command tokens that are executable to render original video data; modifying, by the sink device, the graphical command tokens to generate modified graphical command tokens that are executable to render modified video data different from the original video data; and outputting, for presentation at a display operatively connected to the sink device, the modified video data.
Abstract: Systems, devices, and methods are provided for rendering images of hair using a statistical light scattering model for hair that approximates ground-truth physical models. The model is significantly faster than other implementations of the Marschner hair model. The statistical light scattering model includes all the features of the Marschner model, such as eccentricity for elliptical cross-sections, and extends them by adding azimuthal roughness control, consideration of natural fiber torsion, and full energy preservation. Adaptive Importance Sampling (AIS) is specialized to fit easily sampled distributions to the bidirectional curve scattering density functions (BCSDFs) of the model.
Abstract: The present invention relates to a method and arrangement for developing a 3D model of an environment. The method comprises the steps of providing a plurality of overlapping images of the environment, each image associated with navigation data; providing distance information, said distance information comprising LIDAR data with a distance value and navigation data from a plurality of distance measurements; and developing the 3D model based on the plurality of overlapping images and the distance information. The step of developing the 3D model comprises the steps of providing the 3D model based on the plurality of overlapping images, and updating the 3D model with the distance information using an iterative process.
Type:
Grant
Filed:
January 21, 2013
Date of Patent:
February 13, 2018
Assignee:
VRICON SYSTEMS AKTIEBOLAG
Inventors:
Folke Isaksson, Ingmar Andersson, Johan Bejeryd, Johan Borg, Per Carlbom, Leif Haglund
Abstract: A method, system, and computer-program product for real-time virtual 3D reconstruction of a live scene in an animation system. The method comprises receiving, by the processor, 3D positional tracking data for a detected live scene; determining an event by analyzing the 3D positional tracking data by the processor, comprising the steps of determining event characteristics from the 3D positional tracking data, receiving pre-defined event characteristics, determining an event probability by comparing the event characteristics to the pre-defined event characteristics, and selecting an event assigned to the event probability; determining, by the processor, a 3D animation data set from a plurality of 3D animation data sets assigned to the selected event and stored in the database; and providing the 3D animation data set to the output device.
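The event-selection step can be pictured as scoring observed event characteristics against each pre-defined reference and picking the best match. The similarity score standing in for the "event probability" below is an illustrative assumption; the patent's actual comparison is not specified.

```python
def select_event(observed, predefined_events):
    """Select the pre-defined event whose characteristics best match.

    observed: feature vector derived from 3D positional tracking data.
    predefined_events: mapping of event name -> reference feature vector.
    A distance-based similarity in (0, 1] serves as the event probability;
    the highest-scoring event is selected.
    """
    def similarity(a, b):
        distance = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        return 1.0 / (1.0 + distance)

    scores = {name: similarity(observed, ref)
              for name, ref in predefined_events.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]
```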
Abstract: A method for color correcting an input color image having input color values adapted for display on a reference display device having a plurality of input color primaries, to provide reduced observer metameric failure on a narrow-band display device. A metamerism correction transform is applied to the input color image to determine an output color image having output color values in an output color space appropriate for display on the narrow-band display device. The metamerism correction transform modifies colorimetry associated with the input colors to provide output color values such that an average observer metameric failure is reduced for a distribution of target observers.
Type:
Grant
Filed:
August 13, 2015
Date of Patent:
February 6, 2018
Assignee:
IMAX Theatres International Limited
Inventors:
Andrew F. Kurtz, Elena A. Fedorovskaya, Thomas O. Maier
Abstract: A method for the distribution of audio and visual media includes: receiving at least one visual content item to be displayed, and an audio content item and at least one trigger condition for each of the at least one visual content item; storing, in a database, the received at least one visual content item and corresponding audio content item and at least one trigger condition; identifying at least one display condition of a plurality of display conditions; identifying, in the database, a specific visual content item, wherein the at least one trigger condition corresponding to the specific visual content item is met based on the identified at least one display condition; displaying, by a light projection display device, the identified specific visual content item; and wirelessly transmitting the audio content item corresponding to the specific visual content item for audible emission by a mobile communication device.
Abstract: Disclosed is a method for displaying an object by an electronic device. The method for displaying an object includes overlapping a plurality of second objects obtained by changing an attribute of a first object in a direction corresponding to the location of a light source, displaying the plurality of overlapping second objects and displaying the first object on the plurality of overlapping second objects.
Type:
Grant
Filed:
February 1, 2016
Date of Patent:
January 30, 2018
Assignee:
Samsung Electronics Co., Ltd
Inventors:
Jae-Myoung Lee, Kyung-Dae Park, Jee-Yeun Wang, Ho-Young Lee
Abstract: A method for processing image data representing a three-dimensional volume, the data comprising image values for a three-dimensional grid of voxels, comprising: starting from a given voxel, building up a vector path along a first dimension of the three-dimensional volume, connecting a number of voxels of two-dimensional slices of the grid adjacent in the first dimension, the connected voxels representing a spatial neighborhood of the given voxel, including structurally related voxels of the three-dimensional grid; averaging the image values of the voxels of the vector path to obtain a first averaged value assigned to the given voxel position; repeating the aforementioned steps for a number of voxels; repeating the aforementioned steps on the first averaged values, employing vector paths along a second dimension different from the first dimension, to obtain second averaged values; and repeating the aforementioned steps on the second averaged values, employing vector paths along a third dimension different from the first and second dimensions.
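With straight axial paths as a simplifying assumption, the three sequential passes reduce to separable averaging along each dimension of the volume. In the patented method the paths follow local structure (connecting structurally related voxels across adjacent slices); the sketch below replaces that with fixed-length straight windows for illustration.

```python
import numpy as np

def directional_average(volume, size=3):
    """Approximate the three-pass vector-path averaging with straight
    axial paths: each voxel is replaced by the mean over `size`
    consecutive voxels along one dimension, and the passes are applied
    sequentially along the first, second, and third dimensions.
    """
    out = volume.astype(float)
    pad = size // 2
    for axis in range(3):
        # replicate edge voxels so path windows stay inside the grid
        widths = [(pad, pad) if a == axis else (0, 0) for a in range(3)]
        padded = np.pad(out, widths, mode="edge")
        # mean over `size` consecutive voxels along this axis
        out = sum(np.take(padded, range(i, i + out.shape[axis]), axis=axis)
                  for i in range(size)) / size
    return out
```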
Abstract: A survey application generates a survey of components associated with a three-dimensional model of an object. The survey application receives video feeds, location information, and orientation information from wearable devices in proximity to the object. The three-dimensional model of the object is generated based on the video feeds, sensor data, location information, and orientation information received from the wearable devices. Analytics are performed on the video feeds to identify a manipulation of the object. The three-dimensional model of the object is updated based on the manipulation of the object. A dynamic status related to the manipulation of the object is generated with respect to reference data related to the object. A survey of components associated with the three-dimensional model of the object is generated.