Abstract: There is provided an image processing apparatus including an image reception unit configured to receive an image with markers including a sub image and marker pixels each indicating, using a pixel value, a combining ratio of a main image and the sub image that is combined with the main image, a combining ratio acquisition unit configured to acquire the combining ratio indicated by a pixel value of the marker pixel in the image with markers, and a combining unit configured to combine the main image and the sub image based on the acquired combining ratio.
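A minimal Python/NumPy sketch of the blending step this abstract describes, where a marker pixel's value encodes the combining ratio of the main and sub images. Function and variable names are illustrative, not from the patent:

```python
# Minimal sketch (not the patented implementation): blend a main image and a
# sub image using per-pixel combining ratios carried by marker pixel values.
import numpy as np

def combine(main, sub, marker, max_value=255):
    """Blend `sub` over `main` using `marker` as a per-pixel combining ratio.

    main, sub: uint8 arrays of shape (H, W, 3)
    marker:    array of shape (H, W) whose pixel values encode the ratio
               (0 = keep the main image, max_value = use the sub image).
    """
    ratio = marker.astype(np.float32)[..., None] / max_value
    out = (1.0 - ratio) * main.astype(np.float32) + ratio * sub.astype(np.float32)
    return out.clip(0, 255).astype(np.uint8)

if __name__ == "__main__":
    main = np.full((4, 4, 3), 200, dtype=np.uint8)   # bright main image
    sub = np.zeros((4, 4, 3), dtype=np.uint8)        # dark sub image
    marker = np.zeros((4, 4), dtype=np.uint8)
    marker[1:3, 1:3] = 128                           # ~50% blend in the centre
    print(combine(main, sub, marker)[:, :, 0])
```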
Abstract: A method of generating an acceleration structure for ray tracing, the method including, using a processor, dividing a three-dimensional (3D) space including primitives into bounding boxes, obtaining position information indicating where the bounding boxes overlap each other, and generating an acceleration structure representing the position information and an inclusion relation between the bounding boxes. Also disclosed is a related method of traversing an acceleration structure.
Type:
Grant
Filed:
June 29, 2016
Date of Patent:
July 10, 2018
Assignee:
SAMSUNG ELECTRONICS CO., LTD.
Inventors:
Jeongsoo Park, Youngsam Shin, Sangoak Woo
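The abstract above describes a bounding-box hierarchy that also records where sibling boxes overlap. The sketch below shows one illustrative way such a structure could look; the split strategy, node layout, and names are assumptions, not taken from the patent:

```python
# Minimal sketch (hypothetical data layout): build a two-level bounding-box
# hierarchy over axis-aligned primitive boxes and record where siblings overlap.
from dataclasses import dataclass, field

@dataclass
class Box:
    lo: tuple  # (x, y, z) minimum corner
    hi: tuple  # (x, y, z) maximum corner

def enclose(boxes):
    lo = tuple(min(b.lo[i] for b in boxes) for i in range(3))
    hi = tuple(max(b.hi[i] for b in boxes) for i in range(3))
    return Box(lo, hi)

def overlap(a, b):
    lo = tuple(max(a.lo[i], b.lo[i]) for i in range(3))
    hi = tuple(min(a.hi[i], b.hi[i]) for i in range(3))
    return Box(lo, hi) if all(lo[i] < hi[i] for i in range(3)) else None

@dataclass
class Node:
    box: Box
    children: list = field(default_factory=list)  # child nodes or primitive boxes
    overlap_box: Box = None                       # position info of the overlapping region

def build(prims):
    prims = sorted(prims, key=lambda b: b.lo[0])              # naive split along x
    left, right = prims[: len(prims) // 2], prims[len(prims) // 2 :]
    l_node, r_node = Node(enclose(left), left), Node(enclose(right), right)
    return Node(enclose(prims), [l_node, r_node], overlap(l_node.box, r_node.box))

if __name__ == "__main__":
    prims = [Box((0, 0, 0), (2, 2, 2)), Box((1, 0, 0), (4, 2, 2)),
             Box((3, 0, 0), (6, 2, 2)), Box((5, 1, 1), (7, 3, 3))]
    root = build(prims)
    print("root box:", root.box)
    print("sibling overlap:", root.overlap_box)
```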
Abstract: The inventive method involves receiving as input a representation of an ordered set of two dimensional images. The ordered set of two dimensional images is analyzed to determine at least one first view of an object in at least two dimensions and at least one motion vector. The next step is analyzing the combination of the first view of the object in at least two dimensions, the motion vector, and the ordered set of two dimensional images to determine at least a second view of the object, and then generating a three dimensional representation of the ordered set of two dimensional images on the basis of at least the first view of the object and the second view of the object. Finally, the method involves providing as output an indicia of the three dimensional representation.
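The abstract does not say how its motion vectors are obtained; as one illustrative stand-in, naive block matching between consecutive frames of the ordered set could look like this (all names and parameters are assumptions):

```python
# Minimal sketch (illustrative only): estimate a motion vector between two
# consecutive frames with naive block matching.
import numpy as np

def block_motion_vector(prev, curr, block=8, search=4):
    """Return the (dy, dx) shift of the central block of `prev` that best matches `curr`."""
    h, w = prev.shape
    y0, x0 = (h - block) // 2, (w - block) // 2
    ref = prev[y0:y0 + block, x0:x0 + block].astype(np.float32)
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if 0 <= y and y + block <= h and 0 <= x and x + block <= w:
                cand = curr[y:y + block, x:x + block].astype(np.float32)
                err = np.mean((cand - ref) ** 2)
                if err < best_err:
                    best, best_err = (dy, dx), err
    return best

if __name__ == "__main__":
    prev = np.zeros((32, 32), dtype=np.uint8)
    prev[12:20, 12:20] = 255                           # a bright square
    curr = np.roll(prev, shift=(2, 3), axis=(0, 1))    # object moved down 2, right 3
    print(block_motion_vector(prev, curr))             # expected roughly (2, 3)
```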
Abstract: Methods, devices, systems, and computer readable media to improve the operation of window-based operating systems are disclosed. In general, techniques are disclosed for rendering areas on a display in which two or more shadows overlap. More particularly, two or more shadow regions (based on the arrangement of overlapping windows/shadows) are identified and merged in a top-down process so that no region's shadow is painted or rendered more than once. A shadowbuffer (analogous to a system's framebuffer) may be used to retain windows' alpha information separately from the corresponding windows' shadow intensity information.
Type:
Grant
Filed:
May 5, 2017
Date of Patent:
July 3, 2018
Assignee:
Apple Inc.
Inventors:
James J. Shearer, Christopher P. Wright, Ryan N. Armstrong, Chad E. Jones
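The shadow-merging idea in the abstract above (each overlapping shadow area painted at most once, processed top-down) can be illustrated with a small raster sketch. This is not Apple's implementation; the rectangle-plus-spread shadow model and the names are assumptions:

```python
# Minimal sketch: paint window shadows top-down into a small raster while
# ensuring no pixel's shadow is rendered more than once.
import numpy as np

def paint_shadows(canvas_shape, windows, spread=2, intensity=0.5):
    """windows: list of (x, y, w, h), front-most first (top-down order)."""
    shadow = np.zeros(canvas_shape, dtype=np.float32)
    painted = np.zeros(canvas_shape, dtype=bool)       # pixels already shadowed
    for (x, y, w, h) in windows:                        # top-down traversal
        mask = np.zeros(canvas_shape, dtype=bool)
        mask[max(0, y - spread): y + h + spread,
             max(0, x - spread): x + w + spread] = True
        new = mask & ~painted                           # skip areas shadowed by
        shadow[new] = intensity                         # windows above this one
        painted |= mask
    return shadow

if __name__ == "__main__":
    result = paint_shadows((12, 16), [(2, 2, 6, 4), (5, 4, 7, 5)])
    print((result > 0).sum(), "shadow pixels painted exactly once")
```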
Abstract: A system, method, and computer program product are provided for computing indirect lighting in a cloud network. In operation, one or more scenes for rendering are identified. Further, indirect lighting associated with the one or more scenes is identified. Additionally, computation associated with the indirect lighting is performed in a cloud network utilizing at least one of a voxel-based algorithm, a photon-based algorithm, or an irradiance-map-based algorithm.
Type:
Grant
Filed:
October 18, 2013
Date of Patent:
June 26, 2018
Assignee:
NVIDIA Corporation
Inventors:
Morgan McGuire, Cyril Jean-Francois Crassin, David Patrick Luebke, Michael Thomas Mara, Brent L. Oster, Peter Schuyler Shirley, Peter-Pike J. Sloan, Christopher Ryan Wyman
Abstract: An augmented reality environment allows interaction between virtual and real objects. By monitoring user actions within the augmented reality environment, various functions are provided to users. Users may buy or sell items with a gesture, check inventory of objects in the augmented reality environment, view advertisements, and so forth.
Type:
Grant
Filed:
June 10, 2011
Date of Patent:
June 26, 2018
Assignee:
Amazon Technologies, Inc.
Inventors:
William Spencer Worley, III, Edward Dietz Crump, Robert A. Yuan, Christopher Coley, Colter E. Cederlof
Abstract: A method, apparatus, and computer program product are provided for adaptive zoom control for zooming in on a venue beyond the maximum zoom level available in a digital map. An apparatus may be provided including at least one processor and at least one non-transitory memory including computer program code instructions. The computer program code instructions may be configured to, when executed, cause the apparatus to at least: provide for presentation of a map of a region including a venue; receive an input corresponding to a zoom-in action to view an enlarged portion of the region, where the enlarged portion of the region includes the venue; and in response to receiving the input corresponding to a zoom-in action to view the enlarged portion of the region, transition from the presentation of the map of the region to a presentation of a venue object corresponding to the venue.
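A minimal sketch of the zoom-handling logic the abstract describes: when a zoom-in request would exceed the map's maximum zoom level and the viewport contains a venue, the presentation switches from the map to the venue object. The maximum zoom value, function names, and data shapes are assumptions:

```python
# Minimal sketch (hypothetical API) of adaptive zoom beyond the map's maximum level.
MAX_MAP_ZOOM = 18  # assumed maximum zoom level of the digital map

def handle_zoom_in(current_zoom, viewport, venues):
    """viewport: (min_lat, min_lon, max_lat, max_lon); venues: {name: (lat, lon)}."""
    requested = current_zoom + 1
    if requested <= MAX_MAP_ZOOM:
        return ("map", requested)                      # normal map zoom
    for name, (lat, lon) in venues.items():
        if viewport[0] <= lat <= viewport[2] and viewport[1] <= lon <= viewport[3]:
            return ("venue_object", name)              # transition to venue presentation
    return ("map", MAX_MAP_ZOOM)                       # clamp if no venue is in view

if __name__ == "__main__":
    venues = {"Stadium": (52.5200, 13.4050)}
    print(handle_zoom_in(18, (52.51, 13.39, 52.53, 13.42), venues))
```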
Abstract: An augmented reality environment allows interaction between virtual and real objects. By monitoring user actions within the augmented reality environment, various functions are provided to users. Users may buy or sell items with a gesture, check inventory of objects in the augmented reality environment, view advertisements, and so forth.
Type:
Grant
Filed:
June 10, 2011
Date of Patent:
June 12, 2018
Assignee:
Amazon Technologies, Inc.
Inventors:
William Spencer Worley, III, Edward Dietz Crump, Colter E. Cederlof, Christopher Coley, Robert A. Yuan
Abstract: In a compositing window system, as a respective version of the window for an application is written into a window buffer, a corresponding set of per-tile signatures indicative of the content of each respective tile in the window buffer is generated and stored. When an updated version of the window is stored into a window buffer, the set of signature values for the updated version is compared to the set of signature values for the previous version in the window buffer to determine which tiles' content has changed. The set of tiles found to have changed is used to generate a set of regions that a window compositor writes to the window in a display frame buffer, updating that window so that it displays the new version of the window.
Type:
Grant
Filed:
March 30, 2012
Date of Patent:
June 12, 2018
Assignee:
ARM Limited
Inventors:
Tom Cooksey, Jon Erik Oterhals, Jørn Nystad, Lars Ericsson, Eivind Liland, Daren Croxford
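The per-tile signature comparison described in the abstract above can be sketched as follows. This is not ARM's implementation; the tile size, the use of MD5 as the signature, and all names are assumptions made for illustration:

```python
# Minimal sketch: compute a per-tile signature for a window buffer and diff it
# against the previous frame's signatures to find the regions the compositor
# needs to rewrite.
import hashlib
import numpy as np

TILE = 16  # tile size in pixels (assumption)

def tile_signatures(buf):
    h, w = buf.shape[:2]
    sigs = {}
    for ty in range(0, h, TILE):
        for tx in range(0, w, TILE):
            tile = buf[ty:ty + TILE, tx:tx + TILE]
            sigs[(tx, ty)] = hashlib.md5(tile.tobytes()).hexdigest()
    return sigs

def dirty_regions(old_sigs, new_sigs):
    # Regions whose signature changed; everything else can be left untouched.
    return [(key[0], key[1], TILE, TILE)
            for key, sig in new_sigs.items() if old_sigs.get(key) != sig]

if __name__ == "__main__":
    prev = np.zeros((64, 64, 4), dtype=np.uint8)
    curr = prev.copy()
    curr[20:30, 40:50] = 255                           # update part of the window
    regions = dirty_regions(tile_signatures(prev), tile_signatures(curr))
    print("regions to recomposite:", regions)
```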
Abstract: A head-mounted display (HMD) for an augmented reality system allows a user to view an augmented scene comprising a real-world portion of a live scene combined with virtual images overlaying an infrared (IR) portion of the live scene. The HMD includes a head-wearable frame, a lens defining a user field of view (FOV), and a camera. The lens permits the user to view a live scene corresponding to the user FOV, and the camera is configured to capture a captured image of the live scene containing data indicative of an infrared (IR) portion of the live scene. The IR portion of the live scene includes reflected IR light above a predetermined threshold, such as an IR reflective background surface in a simulation environment. Based on the IR portion, a virtual image is displayed on an interior surface of the lens so that the virtual image overlays the IR portion.
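The IR-keyed overlay the abstract describes can be sketched as a simple threshold-and-composite step. The threshold value, array shapes, and names are illustrative assumptions, not details from the patent:

```python
# Minimal sketch: composite a virtual image over the parts of a captured frame
# where the infrared channel exceeds a threshold.
import numpy as np

def overlay_on_ir(live_rgb, ir, virtual_rgb, threshold=200):
    """live_rgb, virtual_rgb: (H, W, 3) uint8; ir: (H, W) IR intensity."""
    key = ir > threshold                               # IR-reflective background area
    out = live_rgb.copy()
    out[key] = virtual_rgb[key]                        # virtual image overlays the IR portion
    return out

if __name__ == "__main__":
    live = np.full((4, 4, 3), 50, dtype=np.uint8)
    virtual = np.full((4, 4, 3), 255, dtype=np.uint8)
    ir = np.zeros((4, 4), dtype=np.uint8)
    ir[:, 2:] = 250                                    # right half is IR-reflective
    print(overlay_on_ir(live, ir, virtual)[:, :, 0])
```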
Abstract: This disclosure describes an information processing apparatus including a processor configured to acquire first image data, detect reference image data of a particular object from the first image data, store first time information indicating a first time when the reference image data is detected from the first image data or when the first image data is captured, acquire second image data, generate, when another reference image data of another particular object is detected from the second image data, second time information indicating a second time when the another reference image data is detected from the second image data or when the second image data is captured, generate movement information based on a difference between the first time information and the second time information, and determine whether work has been performed in a place where the work has to be performed.
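As a rough illustration of the time-difference idea in the abstract above, the sketch below records when two reference markers are detected and derives movement information from the difference; the decision rule (a minimum dwell time) is purely a hypothetical example, not the patent's criterion:

```python
# Minimal sketch (hypothetical rule): derive movement information from the time
# difference between two marker detections and flag whether the work location
# was occupied long enough.
from datetime import datetime, timedelta

def movement_info(first_seen: datetime, second_seen: datetime) -> timedelta:
    return second_seen - first_seen                    # time spent between the two markers

def work_implemented(first_seen, second_seen, required: timedelta) -> bool:
    # Illustrative criterion: the work counts as performed only if at least
    # `required` elapsed between the two detections.
    return movement_info(first_seen, second_seen) >= required

if __name__ == "__main__":
    t1 = datetime(2024, 1, 1, 9, 0, 0)                 # first reference object detected
    t2 = datetime(2024, 1, 1, 9, 12, 0)                # second reference object detected
    print(work_implemented(t1, t2, timedelta(minutes=10)))
```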
Abstract: A method for developing a subterranean field includes: receiving a representation of a rock surface in the subterranean field, the representation having a boundary; defining a set of grid planes over the representation; defining a plurality of core nodes at intersections of the grid planes that are within the boundary; defining core lines to connect each core node with each adjacent core node along the set of grid planes; defining a plurality of plane nodes on the grid planes where each grid plane intersects the boundary; defining plane lines to connect each plane node with each adjacent plane node along the grid planes; defining outlier nodes at each vertex of the boundary; and defining boundary lines connecting each of the plane nodes and each of the outlier nodes along the boundary. The method further includes operating equipment using at least one definition in order to develop the subterranean field.
Abstract: A rendering platform for providing live augmented reality content to a user device is disclosed. The rendering platform receives augmented reality content associated with an event via a multicast data channel. A coverage area of the multicast data channel covers a portion of a venue associated with the event. The rendering platform receives a request to present an augmented reality display of the event at the user device within the coverage area. Further, the augmented reality content in the augmented reality display is presented based on the request.
Abstract: Techniques for operating electronic paper displays of respective electronic devices are described. One set of techniques described below enhances user experience by utilizing multiple different waveform and/or display-update modes when rendering content on these displays. Another set of techniques is able to render lines on electronic paper displays having variable and arbitrary darkness, despite the restricted color depth inherent in these displays. In addition, this disclosure describes techniques for utilizing supersampling to select which shades to render on an electronic paper display of an electronic device. In still other implementations, the techniques described herein allocate a predefined frame rate of an electronic paper display between multiple different application components requesting to update the display, resulting in smooth animation and relatively high-frame-rate updates.
Type:
Grant
Filed:
September 28, 2011
Date of Patent:
June 5, 2018
Assignee:
Amazon Technologies, Inc.
Inventors:
Julien G. Beguin, Bradley J. Bozarth, Ilya D. Rosenberg, Jay Michael Puckett
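The supersampling-to-shades idea in the abstract above can be sketched as rendering at a higher resolution, averaging each block, and quantizing to the display's limited grey levels. The 4x factor and 16 grey levels are assumptions for illustration, not values from the patent:

```python
# Minimal sketch: supersample content, average each block, and quantize to a
# limited set of grey shades as an electronic paper display would show them.
import numpy as np

SS = 4            # supersampling factor (assumption)
LEVELS = 16       # grey levels available on the display (assumption)

def downsample_to_shades(hi_res):
    h, w = hi_res.shape
    blocks = hi_res.reshape(h // SS, SS, w // SS, SS).mean(axis=(1, 3))
    step = 255 / (LEVELS - 1)
    return (np.round(blocks / step) * step).astype(np.uint8)

if __name__ == "__main__":
    hi = np.zeros((32, 32), dtype=np.float32)
    hi[:, 10:13] = 255                                 # a thin line narrower than one output pixel
    lo = downsample_to_shades(hi)
    print(lo[0])                                       # intermediate shades appear at the edges
```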
Abstract: Conflicts between the database-building and traversal phases are resolved by allowing the database bin size to be different from the display bin size. The database bin size is some multiple of the display bin size, and when there are multiple display bins in a database bin, each database bin is traversed multiple times for display, and the rasterizer discards primitives outside of the current display bin. This allows a trade-off between memory bandwidth consumed for database building and bandwidth consumed for display, particularly when the display traversal is done multiple times.
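A minimal sketch of the traversal pattern the abstract describes, with a database bin spanning 2x2 display bins; the bin sizes, data shapes, and names are illustrative assumptions:

```python
# Minimal sketch: a database bin spans 2x2 display bins, so the same database
# bin is traversed once per display bin and primitives outside the current
# display bin are discarded.
DB_BIN = 64        # database bin size in pixels (assumption)
DISP_BIN = 32      # display bin size in pixels (assumption)

def intersects(prim, x0, y0, size):
    px, py, pw, ph = prim
    return px < x0 + size and px + pw > x0 and py < y0 + size and py + ph > y0

def traverse_database_bin(db_x, db_y, primitives):
    """Yield (display-bin origin, primitives kept) for each display bin in the database bin."""
    per_side = DB_BIN // DISP_BIN
    for j in range(per_side):
        for i in range(per_side):
            x0, y0 = db_x + i * DISP_BIN, db_y + j * DISP_BIN
            kept = [p for p in primitives if intersects(p, x0, y0, DISP_BIN)]
            yield (x0, y0), kept                       # primitives outside this display bin are dropped

if __name__ == "__main__":
    prims = [(10, 10, 5, 5), (40, 40, 10, 10), (70, 70, 4, 4)]   # (x, y, w, h)
    for origin, kept in traverse_database_bin(0, 0, prims):
        print(origin, kept)
```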
Abstract: An image generating apparatus includes: a diagnosis image generating section that generates, as a diagnosis image for every first time interval, at least one of a moving image in which a predetermined part of a human body or an animal is photographed and temporally continuous images based on the moving image; an image target setting section that acquires, for the diagnosis image, a first image at a predetermined time and a second image for every second time interval longer than the first time interval from the predetermined time; a pixel color converting portion that converts, of pixels of the first image and the second image, colors of pixels satisfying a predetermined condition to be distinguishable; and a display image generating section that generates an image for display using the first image and the second image whose colors of the pixels have been converted by the pixel color converting portion.
Abstract: Systems, apparatus, articles, and methods are described below including operations for video tone mapping to convert High Dynamic Range (HDR) content to Standard Dynamic Range (SDR) content.
Type:
Grant
Filed:
December 26, 2015
Date of Patent:
May 29, 2018
Assignee:
Intel Corporation
Inventors:
Hyeong-Seok Victor Ha, Yi-Jen Chiu, Yi-Chu Wang
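The abstract above names the goal (HDR-to-SDR video tone mapping) but not the operator; as a generic stand-in, a simple extended-Reinhard curve followed by gamma encoding could look like this (white point and gamma are assumptions):

```python
# Minimal sketch: an extended-Reinhard tone-mapping curve as a generic stand-in
# for converting HDR content to SDR content.
import numpy as np

def tone_map_hdr_to_sdr(hdr, white=4.0):
    """hdr: linear-light values, possibly > 1.0; returns 8-bit SDR values."""
    mapped = hdr * (1.0 + hdr / (white * white)) / (1.0 + hdr)   # extended Reinhard
    sdr = np.clip(mapped, 0.0, 1.0) ** (1.0 / 2.2)               # simple gamma encode
    return (sdr * 255.0 + 0.5).astype(np.uint8)

if __name__ == "__main__":
    hdr_pixels = np.array([0.05, 0.5, 1.0, 4.0, 16.0])           # linear HDR samples
    print(tone_map_hdr_to_sdr(hdr_pixels))
```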
Abstract: A method for presenting text information on a head-mounted display is provided, comprising: rendering a view of a virtual environment to the head-mounted display; tracking an orientation of the head-mounted display; tracking a gaze of a user of the head-mounted display; processing the gaze of the user and the orientation of the head-mounted display to identify a gaze target in the virtual environment towards which the gaze of the user is directed; receiving text information for rendering on the head-mounted display; and presenting the text information in the virtual environment in the vicinity of the gaze target.
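One way to picture the gaze-target step in the abstract above is to combine head orientation and eye gaze into a view ray, intersect it with scene objects, and anchor the text beside whatever is hit. The spherical object model, the offset, and all names are assumptions for illustration:

```python
# Minimal sketch (hypothetical scene representation): find the scene object the
# combined head/eye gaze points at and return a text anchor beside it.
import numpy as np

def gaze_target(head_rotation, eye_dir, eye_pos, objects):
    """head_rotation: 3x3 matrix; eye_dir: gaze direction in head space;
    objects: {name: (center xyz, radius)}. Returns the closest object hit, or None."""
    d = head_rotation @ np.asarray(eye_dir, dtype=float)
    d /= np.linalg.norm(d)
    best = None
    for name, (center, radius) in objects.items():
        oc = np.asarray(center, dtype=float) - eye_pos
        t = oc @ d                                     # distance along the gaze ray
        if t > 0 and np.linalg.norm(oc - t * d) <= radius:
            if best is None or t < best[1]:
                best = (name, t)
    return best

def text_anchor(eye_pos, head_rotation, eye_dir, objects, offset=0.3):
    hit = gaze_target(head_rotation, eye_dir, eye_pos, objects)
    if hit is None:
        return None
    name, t = hit
    d = head_rotation @ np.asarray(eye_dir, dtype=float)
    d /= np.linalg.norm(d)
    return name, eye_pos + t * d + np.array([offset, 0.0, 0.0])  # place text beside the target

if __name__ == "__main__":
    objects = {"signpost": ((0.0, 0.0, -5.0), 0.5)}
    print(text_anchor(np.zeros(3), np.eye(3), (0, 0, -1), objects))
```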
Abstract: A computer-implemented method for rendering views to an output device and controlling a vehicle. The method includes determining a maneuver path for the vehicle within a spatial environment around the vehicle based on vehicle data from one or more vehicle systems of the vehicle. The method includes updating a view based on the spatial environment and the maneuver path, by augmenting one or more components of a model to provide a representation of the maneuver path virtually in the view as an available maneuver path or an unavailable maneuver path. The view is rendered to an output device and a vehicle maneuver request is generated based on the maneuver path. Further, the one or more vehicle systems are controlled to execute the vehicle maneuver request.
Type:
Grant
Filed:
March 10, 2017
Date of Patent:
May 22, 2018
Assignee:
Honda Motor Co., Ltd.
Inventors:
Arthur Alaniz, Joseph Whinnery, Robert Wesley Murrish, Michael Eamonn Gleeson-May
Abstract: A system, method, and device for creating an environment and sharing an experience using a plurality of mobile devices having a conventional camera and a depth camera employed near a point of interest. In one form, random crowdsourced images, depth information, and associated metadata are captured near said point of interest. Preferably, the images include depth camera information. A wireless network communicates with the mobile devices to accept the images, depth information, and metadata to build and store a 3D model of the point of interest. Users connect to this experience platform to view the 3D model from a user selected location and orientation and to participate in experiences with, for example, a social network.