Abstract: A method and apparatus for rendering a computer-generated image using a stencil buffer is described. The method divides an arbitrary closed polygonal contour into first and higher level primitives, where first level primitives correspond to contiguous vertices in the arbitrary closed polygonal contour and higher level primitives correspond to the end vertices of consecutive primitives of the immediately preceding primitive level. The method reduces the level of overdraw when rendering the arbitrary polygonal contour using a stencil buffer compared to other image space methods. A method of producing the primitives in an interleaved order, with second and higher level primitives being produced before the final first level primitives of the contour, is described which improves cache hit rate by reusing more vertices between primitives as they are produced.
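As a concrete illustration of the decomposition in this abstract, the sketch below splits a closed contour into levels of triangles: level-1 triangles join contiguous vertices, and each higher level joins the end vertices of consecutive triangles from the level below. This is a minimal index-based reading of the abstract, not the patented implementation; the function name and data layout are assumptions.

```python
def decompose(contour):
    """Split a closed polygonal contour into levels of triangles.

    Level-1 triangles span pairs of contiguous edges; each higher
    level is built from the end vertices of consecutive triangles
    of the level below, until fewer than three vertices remain.
    Returns a list of levels, each a list of vertex-index triples.
    """
    levels = []
    ring = list(range(len(contour)))  # indices into the closed contour
    while len(ring) > 2:
        tris, nxt = [], []
        for i in range(0, len(ring) - 1, 2):
            # triangle over two consecutive edges (wraps at the end)
            tris.append((ring[i], ring[i + 1], ring[(i + 2) % len(ring)]))
            nxt.append(ring[i])
        if len(ring) % 2 == 1:
            nxt.append(ring[-1])  # odd vertex carries up a level
        levels.append(tris)
        ring = nxt
    return levels
```

For a hexagon this yields three level-1 triangles and one level-2 triangle; drawn into a stencil buffer with inverting writes, the levels cover the polygon with less overdraw than a single fan anchored at one vertex.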
Abstract: Example apparatus and methods for generating context-aware artificial intelligence characters are disclosed. An example apparatus to animate an artificial intelligence character includes a data tagger to tag data in a media data stream to generate a plurality of data files of tagged data, the data files corresponding to different time periods in a storyline, the tagged data associated with a first character in the media data stream, the artificial intelligence character to portray the first character. The example apparatus includes a trainer to generate a response model of the first character based on the data file corresponding to a current data time period and one or more data files corresponding to one or more earlier time periods of the storyline and a response generator to apply the response model based on a stimulus input to animate the artificial intelligence character.
Abstract: An image editing program can include animation brushes. An animation brush enables a content creator to draw as with any other digital brush. The animation brush will automatically generate elements, such as hair or raindrops. Each element can be drawn using a set of pixels that have a particular shape, location, and/or color, as determined by parameters associated with the animation brush. The image editing program can further animate the elements by determining, for each frame of an animation sequence, updated shape, location, and/or color values for the pixels of the elements. The image editing program can then redraw the elements. Redrawing of the elements can thus produce an animation.
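To make the brush behavior concrete, here is a toy sketch of the idea, assuming a raindrop-style brush; the class name, fields, and update rule are invented for illustration and are not taken from the patent.

```python
class AnimationBrush:
    """Toy 'animation brush': each stroke spawns an element whose
    pixels have a location and color; a per-frame update moves the
    elements so that redrawing them produces an animation."""

    def __init__(self, fall_speed=3, color=(90, 120, 255)):
        self.fall_speed = fall_speed  # parameter of the brush
        self.color = color
        self.elements = []

    def stroke(self, x, y):
        # drawing with the brush automatically generates an element
        self.elements.append({"x": x, "y": y, "color": self.color})

    def step(self, frame_height):
        # per-frame update: raindrop elements fall and wrap around
        for e in self.elements:
            e["y"] = (e["y"] + self.fall_speed) % frame_height
```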
Abstract: Methods and systems for supporting parallel processing utilizing Central Processing Unit(s) (CPU(s)) and at least one Graphics Processing Unit (GPU) device to provide high scale processing of content streams. An exemplary method embodiment includes the steps of: receiving at a CPU multiple data units corresponding to a first frame time for each of first through Nth content streams; sequentially processing by the CPU data units corresponding to different content streams and the first frame time; and operating a set of cores of a GPU, in parallel, to perform processing on a set of data units, said processing including operating each core of the set of cores to perform an operation on a data unit corresponding to a single one of the content streams, each core in the set of cores processing a data unit of a different content stream, said processing generating a set of generated data units.
Type:
Grant
Filed:
March 1, 2018
Date of Patent:
June 9, 2020
Assignee:
Ribbon Communications Operating Company, Inc.
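The CPU/GPU division of labor in the abstract above can be simulated in a few lines; the sketch below uses a thread pool to stand in for GPU cores, with each worker applying the same kernel to one data unit of a different stream. All names are hypothetical, and this is only an analogy to the claimed hardware pipeline.

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame(cpu_preprocess, gpu_kernel, frame_units):
    """frame_units holds one data unit per content stream for a
    single frame time. The CPU pass is sequential across streams;
    the pooled workers then run the same operation in parallel,
    one 'core' per stream, producing the generated data units."""
    prepared = [cpu_preprocess(u) for u in frame_units]  # sequential CPU pass
    with ThreadPoolExecutor(max_workers=max(1, len(prepared))) as pool:
        return list(pool.map(gpu_kernel, prepared))
```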
Abstract: An object recognition apparatus includes: a memory; and a processor coupled to the memory and configured to execute an acquisition process that includes acquiring RGB-depth (D) image data on a target object in which an object is stored, the RGB-D image data being acquired by an RGB-D camera; execute a presumption process that includes presuming a frontmost plane of the target object from the acquired RGB-D image data; execute a sorting process that includes sorting features of the target object at a position specified by the presumed frontmost plane from among features extracted based on the RGB-D image data; and execute a computing process that includes computing a position and an orientation of the target object by performing matching between the RGB-D image data and a template of the target object using the sorted features.
Abstract: A method to visualize and correct alignment errors between paired 2D and 3D data sets is described. In a representative embodiment, a display interface used for dental implant planning includes one or more display areas that enable the operator to visualize alignment errors between the paired 2D and 3D data sets. A first display area renders 3D cone beam data. A second display area renders one or more slices (preferably three (3) mutually orthogonal views) of the cone beam data. A third display area displays a view of a 2D scanned surface map (obtained from an intra-oral scan, or the scan of a model). According to a first aspect, the view of the 2D scanned surface map in the third display area is “textured” by coloring the 2D surface model based on the intensity of each 3D pixel (or “voxel”) that it intersects.
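The "texturing" step in the first aspect amounts to a nearest-voxel lookup, which can be sketched as below; the function name, pure-Python volume layout, and clamping behavior are assumptions for illustration.

```python
def texture_surface(surface_pts, volume, voxel_size=1.0):
    """Color each 2D-surface point by the intensity of the cone-beam
    voxel it falls inside. `volume` is a nested list indexed
    [i][j][k]; out-of-bounds points clamp to the nearest voxel."""
    shape = (len(volume), len(volume[0]), len(volume[0][0]))
    colors = []
    for x, y, z in surface_pts:
        i, j, k = (max(0, min(int(c // voxel_size), s - 1))
                   for c, s in zip((x, y, z), shape))
        colors.append(volume[i][j][k])
    return colors
```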
Abstract: The disclosure relates to systems and processes by which verified wireframes corresponding to at least part of a structure or element of interest can be generated from 2D images, 3D representations (e.g., a point cloud), or a combination thereof. The wireframe can include one or more features that correspond to a structural aspect of the structure or element of interest. The verification can comprise projecting or overlaying the generated wireframe over selected 2D images and/or a point cloud that incorporates the one or more features. The wireframe can be adjusted by a user and/or a computer to align the 2D images and/or 3D representations thereto, thereby generating a verified wireframe including at least a portion of the structure or element of interest. The verified wireframes can be used to generate wireframe models, measurement information, reports, construction estimates or the like.
Type:
Grant
Filed:
February 1, 2019
Date of Patent:
May 19, 2020
Assignee:
Pointivo, Inc.
Inventors:
Habib Fathi, Daniel L. Ciprari, William Wilkins
Abstract: Systems and methods are provided for presenting visual media on a structure having a plurality of unordered light sources, e.g., fiber optic light sources, light emitting diodes (LEDs), etc. Visual media can be created based on a computer model of the structure. Images of the structure can be analyzed to determine the location of each of the light sources. A lookup table can be generated based on the image analysis, and used to correlate pixels of the visual media to one or more of the actual light sources. A visual media artist or designer need not have prior knowledge of the order/layout of the light sources on the structure in order to create visual media to be presented thereon.
Type:
Grant
Filed:
June 29, 2018
Date of Patent:
May 12, 2020
Assignee:
DISNEY ENTERPRISES, INC.
Inventors:
Steven M. Chapman, Joseph Popp, Mehul Patel
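The lookup table described in the abstract above can be sketched as a mapping from detected light positions to media pixels; `detected` here is assumed to be the output of the image-analysis step (light id to (x, y) on the structure), and all names are illustrative.

```python
def build_light_lookup(detected, media_w, media_h, struct_w, struct_h):
    """Correlate each physically detected light source with the
    pixel of the visual media that covers its location on the
    structure, so playback needs no prior knowledge of wiring order."""
    table = {}
    for light_id, (x, y) in detected.items():
        px = min(int(x / struct_w * media_w), media_w - 1)
        py = min(int(y / struct_h * media_h), media_h - 1)
        table[light_id] = (px, py)
    return table
```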
Abstract: A method, a computer-readable medium, and an apparatus are provided. The apparatus may be configured to receive information indicative of a fovea region. The apparatus may be configured to identify, based on the information indicative of the fovea region, high priority bins and low priority bins. The apparatus may be configured to determine a rendering time allotment for the frame. The apparatus may be configured to determine that the rendering time allotment for the frame will be exceeded, based on an amount of time used to render the high priority bins and the low priority bins. The apparatus may be configured to render, based on the determination that the rendering time allotment for the frame will be exceeded, at least one of the low priority bins at a first quality instead of a second quality.
Type:
Grant
Filed:
September 13, 2018
Date of Patent:
May 12, 2020
Assignee:
QUALCOMM Incorporated
Inventors:
Samuel Benjamin Holmes, Tate Hornbeck, Robert Vanreenen
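A minimal sketch of the bin-priority decision above, assuming square-distance classification against the fovea and a uniform per-bin cost estimate (both assumptions; the abstract does not specify them):

```python
def choose_bin_quality(bins, fovea, radius, time_per_bin, budget):
    """Tag each tile bin 'high' or 'low' priority by distance to the
    fovea; if rendering everything at full quality would exceed the
    frame's time allotment, low-priority bins drop to lower quality."""
    def near(b):
        return (b[0] - fovea[0]) ** 2 + (b[1] - fovea[1]) ** 2 <= radius ** 2
    estimate = len(bins) * time_per_bin  # cost with all bins at full quality
    quality = {}
    for b in bins:
        if near(b) or estimate <= budget:
            quality[b] = "high"
        else:
            quality[b] = "low"  # degraded to stay within the allotment
    return quality
```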
Abstract: An embodiment of a semiconductor package apparatus may include technology to identify a region of interest portion of a first image, and render the region of interest portion with super-resolution. Other embodiments are disclosed and claimed.
Abstract: A method of overlaying a picture of a real scene with a virtual image comprises a step of reading image data, wherein the image data represent a picture of the real scene captured by an environment sensor of a mobile device, a step of determining marker data from the image data, wherein the marker data represent a picture and a positioning of a marker arranged in the real scene, a step of reading virtual image data, wherein the virtual image data represent image data selected from a plurality of virtual image data using the marker data, wherein the virtual image data comprise a representation instruction for representing the virtual image and a positioning instruction for positioning the virtual image, a step of determining object data from the image data, wherein the object data represent a picture and a positioning of an object portion of an object arranged in the environment of the marker in the real scene, and a step of ascertaining a positioning rule for representing the virtual image with reference to the p
Abstract: Methods, devices, and systems for determining a job file for a three-dimensional printing machine based on part-to-build data. Embodiments include determining the part-to-build data based on: determining part data from a received computer-aided design (CAD) file, generating orientation data, generating support data, generating feature data, and generating slicing data. In some embodiments, determining the job file may be further based on generating nesting matrix associated with the part-to-build data.
Abstract: Disclosed are embodiments for the generation of point clouds representing a region of space. The region may comprise a plurality of objects, which may comprise cubes, voxels, and/or the like. A plurality of points are calculated and distributed among the objects within the region. Generation of such point clouds may be useful in rendering representations of the region of space, and/or respective object(s) therein, on human-machine interface devices, such as computer displays and/or the like.
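The point-distribution step can be sketched as follows, assuming axis-aligned cubic voxels given as (origin, size) pairs and a simple round-robin split of the point budget (both illustrative choices):

```python
import random

def voxel_point_cloud(voxels, total_points, seed=0):
    """Distribute `total_points` among the objects of the region and
    jitter each point uniformly inside its voxel, yielding a point
    cloud suitable for rendering the region on a display."""
    rng = random.Random(seed)  # deterministic for repeatable clouds
    points = []
    for n in range(total_points):
        (ox, oy, oz), size = voxels[n % len(voxels)]  # round-robin split
        points.append((ox + rng.random() * size,
                       oy + rng.random() * size,
                       oz + rng.random() * size))
    return points
```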
Abstract: For three-dimensional rendering, a machine-learnt model is trained to generate representation vectors for rendered images formed with different rendering parameter settings. The distances between representation vectors of the images to a reference are used to select the rendered image and corresponding rendering parameters that provides a consistency with the reference. In an additional or different embodiment, optimized pseudo-random sequences are used for physically-based rendering. The random number generator seed is selected to improve the convergence speed of the renderer and to provide higher quality images, such as providing images more rapidly for training compared to using non-optimized seed selection.
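The selection rule in the first embodiment reduces to an argmin over vector distances; a sketch, assuming Euclidean distance and a dict of candidate representation vectors keyed by rendering-parameter setting (both assumptions):

```python
def pick_render(candidates, reference):
    """Return the key of the rendered image whose representation
    vector lies closest to the reference vector, i.e. the rendering
    parameters most consistent with the reference."""
    def dist(v):
        return sum((a - b) ** 2 for a, b in zip(v, reference)) ** 0.5
    return min(candidates, key=lambda k: dist(candidates[k]))
```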
Abstract: The embodiments of the present invention provide an information duplication system, an information duplication method, an electronic device, and a machine-readable storage medium. First, a selection region is determined according to a duplication operation performed by a user on the display content of an electronic device. Then, the text information corresponding to the selection region is converted into pictures. Finally, the pictures converted from the text information are merged with the pictures in the selection region, and the merged picture is then shared as the duplicated content. The embodiments of the present invention thus allow text information and pictures to be present simultaneously in the duplicated content.
Type:
Grant
Filed:
January 19, 2017
Date of Patent:
March 24, 2020
Assignee:
GUANGZHOU ALIBABA LITERATURE INFORMATION TECHNOLOGY CO., LTD.
Abstract: Systems and methods for generating and facilitating access to a personalized augmented rendering of a user to be presented in an augmented reality environment are discussed herein. The augmented rendering of a user may be personalized by the user to comprise a desired representation of the user in an augmented reality environment. When a second user is detected within the field of view of a first user, the second user may be identified and virtual content (e.g., an augmented rendering) for the second user may be obtained. The virtual content obtained may differ based on one or more subscriptions for the first user and/or permissions associated with the virtual content of the second user. The virtual content obtained may be rendered and appear superimposed over or in conjunction with a view of the second user in the augmented reality environment.
Abstract: Systems and methods to generate an environmental record for an interactive space are presented herein. An environmental record may represent a set of local environments and may define archival location compositions for the local environments. An archival location composition for a local environment may define aspects of the local environment associated with one or more objects and/or surfaces previously determined to be present in the local environment. A headset worn by a user in the local environment may generate a current location composition based on output signals from sensors included in the headset. The archival and current location compositions may be compared to determine updates for the environmental record.
Abstract: There are provided systems and methods for rendering of an animated avatar. An embodiment of the method includes: determining a first rendering time of a first clip as approximately equivalent to a predetermined acceptable rendering latency, with a first playing time of the first clip determined as approximately the first rendering time multiplied by a multiplicative factor; rendering the first clip; determining a subsequent rendering time for each of one or more subsequent clips, where each subsequent rendering time is determined to be approximately equivalent to the predetermined acceptable rendering latency plus the total playing time of the preceding clips, and each subsequent playing time is determined to be approximately the rendering time of the respective subsequent clip multiplied by the multiplicative factor; and rendering the one or more subsequent clips.
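The timing rule in this abstract is simple arithmetic: clip 1 may render for the acceptable latency L and plays for L times the factor; each later clip may render for L plus the total playing time of everything before it. A sketch (names hypothetical):

```python
def clip_schedule(latency, factor, n_clips):
    """Return (render_times, play_times) for n_clips, following the
    rule that each clip's render budget is the acceptable latency
    plus the playing time already banked by earlier clips."""
    render, play, banked = [], [], 0.0
    for _ in range(n_clips):
        r = latency + banked  # time available to render this clip
        p = r * factor        # how long the clip then plays
        render.append(r)
        play.append(p)
        banked += p
    return render, play
```

With L = 1 and factor = 2, the render budgets grow as 1, 3, 9, ..., so each clip can be rendered while its predecessors are still playing.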
Abstract: Techniques for high-fidelity three-dimensional (3D) reconstruction of a dynamic scene as a set of voxels are provided. One technique includes: receiving, by a processor, image data from each of two or more spatially-separated sensors observing the scene from a corresponding two or more vantage points; generating, by the processor, the set of voxels from the image data on a frame-by-frame basis; reconstructing, by the processor, surfaces from the set of voxels to generate low-fidelity mesh data; identifying, by the processor, performers in the scene from the image data; obtaining, by the processor, high-fidelity mesh data corresponding to the identified performers; and merging, by the processor, the low-fidelity mesh data with the high-fidelity mesh data to generate high-fidelity 3D output. The identifying of the performers includes: segmenting, by the processor, the image data into objects; and classifying, by the processor, those of the objects representing the performers.
Type:
Grant
Filed:
November 4, 2016
Date of Patent:
March 3, 2020
Assignee:
INTEL Corporation
Inventors:
Sridhar Uyyala, Ignacio J. Alvarez, Bradley A. Jackson, Deepak S. Vembar
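The merge step of the pipeline above can be sketched as a substitution keyed on the classified performer ids; the dict-based mesh format here is purely illustrative.

```python
def merge_meshes(low_fidelity, high_fidelity_by_performer, performer_ids):
    """Keep the low-fidelity scene reconstruction for everything
    except identified performers, whose meshes are replaced by the
    pre-obtained high-fidelity versions."""
    merged = [m for m in low_fidelity if m["object_id"] not in performer_ids]
    merged += [high_fidelity_by_performer[p] for p in performer_ids]
    return merged
```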