Abstract: Methods, systems, and computer-readable storage media providing banner representations in a computer system that provides a virtual environment, including: accessing an avatar record, where the avatar record indicates an avatar representation that includes data to provide a visual representation of an avatar; receiving a selection of a banner representation made by a user; accessing a representation record, wherein the representation record indicates a banner representation that includes data to provide a visual representation of a banner; associating the banner representation with the avatar representation; receiving avatar movement input that indicates movement of the avatar within the virtual environment; and generating visual data representing the movement of the avatar and banner in the virtual environment using the avatar representation and the banner representation, where the banner is placed in the virtual environment following the avatar as the avatar moves in the virtual environment.
November 3, 2021
Date of Patent: June 27, 2023
Sony Group Corporation, Sony Corporation of America
Thomas Sachson, Eric Benjamin Fruchter, James Marcus
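The banner-following behavior described in the abstract above can be sketched in a few lines. This is a minimal illustration, not the patented method: the `Entity` class, the fixed `offset`, and the exponential `smoothing` factor are all assumptions introduced for the example.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    x: float = 0.0
    y: float = 0.0

def follow_banner(avatar, banner, offset=(-1.0, 2.0), smoothing=0.5):
    """Move the banner a fraction of the way toward a point offset from
    the avatar, so the banner trails the avatar as it moves."""
    target_x = avatar.x + offset[0]
    target_y = avatar.y + offset[1]
    banner.x += (target_x - banner.x) * smoothing
    banner.y += (target_y - banner.y) * smoothing

avatar, banner = Entity(), Entity()
for step in range(50):      # avatar walks right; banner settles behind it
    avatar.x += 0.2
    follow_banner(avatar, banner)
```

After enough steps the banner converges to a steady trailing position: the hovering offset on the y-axis is reached exactly, while on the x-axis the smoothing introduces a constant lag behind the moving target.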
Abstract: A medical image processing apparatus comprises processing circuitry configured to: acquire volumetric image data for rendering; determine a plurality of positions of a viewpoint or positions of a light based on distance, or other spatial relationship, between the viewpoint or light and a surface or other selected feature in the image data; and render the volumetric data based on the plurality of viewpoint positions or light positions.
Abstract: A method for displaying an advertisement picture includes obtaining, by a terminal, location information of a first key information area in a first advertisement picture from an advertisement server, obtaining, by the terminal, the first advertisement picture, cropping, by the terminal, the first advertisement picture based on the location information, and displaying, by the terminal, a second advertisement picture in an advertisement display area of a display, where the second advertisement picture includes a second key information area, and the second advertisement picture is obtained after the first advertisement picture is cropped, or the second advertisement picture is a picture obtained by scaling the cropped first advertisement picture.
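The crop-then-scale flow in the abstract above can be sketched as follows. The rectangle convention `(x, y, w, h)`, the centering policy, and the function name are illustrative assumptions, not the patented format.

```python
def crop_to_key_area(picture_size, key_area, display_size):
    """Return a crop rectangle that contains the key information area and
    matches the display's aspect ratio, plus the scale factor needed to
    fit the cropped picture to the display."""
    pw, ph = picture_size
    kx, ky, kw, kh = key_area
    dw, dh = display_size
    aspect = dw / dh
    # Grow the key area until it matches the display aspect ratio.
    cw = max(kw, kh * aspect)
    ch = cw / aspect
    # Center the crop on the key area, clamped to the picture bounds.
    cx = min(max(kx + kw / 2 - cw / 2, 0), pw - cw)
    cy = min(max(ky + kh / 2 - ch / 2, 0), ph - ch)
    scale = dw / cw
    return (cx, cy, cw, ch), scale
```

For example, cropping a 1920x1080 picture around a 300x200 key area for a square 400x400 display yields a 300x300 crop centered on the key area, scaled up by 4/3 to fill the display.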
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that utilize simultaneous, multi-mesh deformation to implement edge-aware transformations of digital images. In particular, in one or more embodiments, the disclosed systems generate a transformation handle that targets an edge portrayed in a digital image. In some cases, the disclosed systems provide the transformation handle for display over the digital image. Additionally, in one or more embodiments, the disclosed systems generate vector splines and meshes for the edge and one or more influenced regions adjacent to the edge. In response to detecting a user interaction with the transformation handle, the disclosed systems can modify the edge and the one or more influenced regions by modifying the corresponding vector splines and meshes.
Abstract: Virtual reality (VR), augmented reality (AR), and/or mixed reality (MR) systems may enable one or more users to connect two or more connectable objects together. These connectable objects may be real objects from the user's environment, virtual objects, or a combination thereof. A preview system may be included as part of the VR, AR, and/or MR systems to provide a preview of the connection between the connectable objects before the user(s) connect them. The preview may include a representation of the connectable objects in a connected state along with an indication of whether the connected state is valid or invalid. The preview system may continuously physically model the connectable objects while simultaneously displaying a preview of the connection process to the user of the VR, AR, or MR system.
May 5, 2021
Date of Patent: June 6, 2023
MAGIC LEAP, INC.
Edmund Graves Brown, IV, Javier Antonio Busto, Jeffrey A. Scott, Jeremy Vanhoozer
Abstract: A computer system is used to host a virtual reality universe process in which multiple avatars are independently controlled in response to client input. The host provides coordinated motion information defining coordinated movement between designated portions of multiple avatars, and an application that detects conditions triggering a coordinated movement sequence between two or more avatars. During coordinated movement, user commands for controlling avatar movement may be partly used normally and partly ignored or otherwise processed, causing the involved avatars to respond in part to respective client input and in part to predefined coordinated movement information. Thus, users may be assisted in executing coordinated movement between multiple avatars.
Abstract: A display device including a housing; a display unit accommodated in the housing; and a control unit configured to cause the display unit to operate in a speaker mode to play a song while the display unit is fully inserted in the housing, receive an image display command, according to the image display command, withdraw the display unit and display a first content corresponding to the image display command in a first region of the display unit, and display a second content of an audio controller for controlling the speaker mode in a second region of the display unit.
Abstract: A system may receive captured content identifying a first face of a user of a user device and media content identifying a second face of a person. The system may process the captured content and the media content, using a first machine learning model, to determine embeddings for the captured content and to identify landmarks in the media content. The system may combine the captured content with the media content, based on the landmarks and the embeddings, to generate combined content and to segment a combined face of the combined content, to replace the second face of the media content with the combined face, and to generate new media content that includes the combined face. The system may process the new media content, with a second machine learning model, to blend the combined face in the new media content and to generate final media content.
Abstract: There is provided an image processing device including: a data storage unit storing feature data indicating a feature of appearance of one or more physical objects; an environment map building unit for building an environment map based on an input image obtained by imaging a real space and the feature data, the environment map representing a position of a physical object present in the real space; a control unit for acquiring procedure data for a set of procedures of operation to be performed in the real space, the procedure data defining a correspondence between a direction for each procedure and position information designating a position at which the direction is to be displayed; and a superimposing unit for generating an output image by superimposing the direction for each procedure at a position in the input image determined based on the environment map and the position information, using the procedure data.
Abstract: Curve antialiasing based on curve-pixel intersection is leveraged in a digital medium environment. For instance, to apply antialiasing according to techniques described herein, curves of a visual object are mapped from an original pixel space to a virtual pixel space. Virtual pixels of the virtual pixel space that are intersected by the mapped curves are identified and aggregated as intersected virtual pixels. The intersected virtual pixels are then mapped back into the original pixel space to identify which intersected virtual pixels positionally coincide with respective original pixels of the original pixel space. Intersected virtual pixels are mapped to original pixels to generate pixel coverage for original pixels. The generated pixel coverage values for original pixels are applied to render antialiased curves as part of an antialiased version of the original visual object.
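The virtual-pixel coverage idea in the abstract above can be sketched with a simplified stand-in: dense sampling of a parametric curve approximates exact curve-pixel intersection, a 4x4 virtual grid per original pixel plays the role of the virtual pixel space, and coverage is the fraction of a pixel's virtual pixels the curve intersects. All names and parameters here are illustrative assumptions.

```python
from collections import defaultdict

def coverage_from_curve(curve, width, height, factor=4, samples=4096):
    """Return {(px, py): coverage in [0, 1]} for original pixels the
    curve crosses, via intersected virtual pixels."""
    hit_virtual = set()
    for i in range(samples + 1):
        x, y = curve(i / samples)                 # point in original pixel space
        vx, vy = int(x * factor), int(y * factor) # map into virtual pixel space
        if 0 <= vx < width * factor and 0 <= vy < height * factor:
            hit_virtual.add((vx, vy))
    counts = defaultdict(int)
    for vx, vy in hit_virtual:                    # map back to original pixels
        counts[(vx // factor, vy // factor)] += 1
    return {p: n / (factor * factor) for p, n in counts.items()}

# A horizontal line y = 1.5 crossing a 4x4-pixel canvas: each pixel in
# row 1 is crossed by one row of its 16 virtual pixels.
cov = coverage_from_curve(lambda t: (t * 4.0, 1.5), 4, 4)
```

The resulting fractional coverage values are what would feed a blend when rendering the antialiased curve.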
Abstract: A hardware-based traversal coprocessor provides acceleration of tree traversal operations searching for intersections between primitives represented in a tree data structure and a ray. The primitives may include opaque and alpha triangles used in generating a virtual scene. The hardware-based traversal coprocessor is configured to determine primitives intersected by the ray, and return intersection information to a streaming multiprocessor for further processing. The hardware-based traversal coprocessor is configured to omit reporting of one or more primitives the ray is determined to intersect. The omitted primitives include primitives which are provably capable of being omitted without a functional impact on visualizing the virtual scene.
November 5, 2021
Date of Patent: May 9, 2023
Greg Muthler, Tero Karras, Samuli Laine, William Parsons Newhall, Jr., Ronald Charles Babich, Jr., John Burgess, Ignacio Llamas
Abstract: An encoder is configured to compress point cloud geometry information using an octree/predictive tree combination geometric compression technique that embeds predictive trees in leaf nodes of an octree instead of encoding additional octree occupancy symbols for the leaf nodes. Alternatively, an encoder may be configured to embed octrees in leaf nodes of a predictive tree structure. Similarly, a decoder is configured to generate a reconstructed three-dimensional geometry from a bit stream including combined octree and predictive tree encoding information.
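A schematic sketch of the hybrid structure described above: a single-level octree-style partition stands in for the octree, and delta coding against the previous point stands in for the predictive tree inside each leaf. The data layout and names are illustrative, not the codec's actual bitstream format.

```python
from collections import defaultdict

def encode_hybrid(points, leaf_size=8):
    """Group points into octree-style leaf cells, then delta-encode each
    cell's points against the previously decoded point (a trivial
    predictive tree rooted at the cell origin)."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(c // leaf_size for c in p)        # occupied leaf cell
        cells[key].append(p)
    encoded = {}
    for key, pts in cells.items():
        prev = tuple(c * leaf_size for c in key)      # predict from cell origin
        deltas = []
        for p in sorted(pts):
            deltas.append(tuple(a - b for a, b in zip(p, prev)))
            prev = p                                  # then from the last point
        encoded[key] = deltas
    return encoded

def decode_hybrid(encoded, leaf_size=8):
    """Reconstruct the point list from cell keys and per-cell deltas."""
    points = []
    for key, deltas in encoded.items():
        prev = tuple(c * leaf_size for c in key)
        for d in deltas:
            prev = tuple(a + b for a, b in zip(prev, d))
            points.append(prev)
    return points
```

The round trip is lossless: decoding the encoded cells recovers the original point set.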
Abstract: Systems and methods are disclosed for automatically aligning drawings. One method comprises receiving a source drawing and a target drawing, determining main axes of the source and target drawings respectively, and aligning the main axis of the source drawing to the main axis of the target drawing. A plurality of source feature point vectors (FPVs) and target FPVs may be generated from the source and target drawings whose main axes have been aligned. A predetermined number of matching FPV pairs may then be determined across the source and target drawings, and the source drawing may be aligned with the target drawing based on the matching FPV pairs.
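The first stage described above, aligning main axes, can be sketched by estimating each drawing's dominant direction as the principal axis of its point set and rotating the source to match. This pure-stdlib 2D sketch is an assumption about how a "main axis" might be computed; the patent does not specify PCA.

```python
import math

def main_axis_angle(points):
    """Angle of the principal (dominant) direction of a 2D point set,
    from the eigenvector of the 2x2 covariance via the half-angle form."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    syy = sum((p[1] - my) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    return 0.5 * math.atan2(2 * sxy, sxx - syy)

def align_to_target(source, target):
    """Rotate source points so their main axis matches the target's."""
    theta = main_axis_angle(target) - main_axis_angle(source)
    c, s = math.cos(theta), math.sin(theta)
    return [(x * c - y * s, x * s + y * c) for x, y in source]
```

A vertical source stroke rotated against a horizontal target lands on the x-axis, after which feature-point matching would refine the alignment.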
Abstract: A second learning data acquisition section acquires an input image. A wide angle-of-view image generation section generates, in response to an input of the input image, a generated wide angle-of-view image that is an image having a wider angle of view than the input image. The second learning data acquisition section acquires a comparative wide angle-of-view image that is an image to be compared with the generated wide angle-of-view image. A second learning section performs learning for the wide angle-of-view image generation section by, on the basis of a comparison result between the generated wide angle-of-view image and the comparative wide angle-of-view image, updating parameter values of the wide angle-of-view image generation section such that, according to the luminance levels of pixels in the comparative wide angle-of-view image or the luminance levels of pixels in the generated wide angle-of-view image, update amounts of the parameter values concerning the pixels are increased.
Abstract: An imaging system for superimposing at least two images, and a related method for superimposing at least two images, where the images are obtained from separate or independent image sources. The system provides a single mapping-based view of a user-selected geographic area onto which multiple image data sets from separate computer systems are displayed, while simultaneously isolating the separate input data channels. At least one first image data set and/or a position code (generated by a position code generator) is transferred to a position code extractor via at least one secure transmission channel. The secure channel ensures isolation of input data from the separate input data channels of different computer systems. The first image is combined with a second image to provide a combined single image view of the selected geographic area. Transmission of an image data set with its associated position code negates the need to pass data sets between separate computer systems.
Abstract: An encoder is configured to compress point cloud geometry information using an octree geometric compression technique that utilizes node groups. Nodes within a node group are scanned according to a breadth first scan order. Sequential node groups to evaluate may be selected according to a breadth first scan order or a depth first scan order based on whether or not the breadth first scan order or the depth first scan order is indicated in a flag in a preceding node group evaluated. In some embodiments, evaluation orders for node groups may be implicit without being signaled via flags. A decoder is configured to reconstruct a point cloud based on a bit stream encoded by the encoder.
Abstract: A method of generating a path of an object through a virtual environment is provided, the method comprising: receiving image data, at a first instance of time, from a plurality of image capture devices arranged in a physical environment; receiving image data, at an at least one second instance of time after the first instance of time, from a plurality of image capture devices arranged in the physical environment; detecting a location of a plurality of points associated with an object within the image data from each image capture device at the first instance of time and the at least one second instance of time; projecting the location of the plurality of points associated with the object within the image data from each image capture device at the first instance of time and the at least one second instance of time into a virtual environment to generate a location of the plurality of points associated with the object in the virtual environment at each instance of time; and generating a path of the object through the virtual environment.
Abstract: An encoder is configured to compress point cloud geometry information using an octree geometric compression technique that utilizes slices corresponding in size to data transmission units. In some embodiments, a subsequent slice may be set to use a re-set entropy context or may be set to use an entropy context saved for a preceding slice. In some embodiments, an entropy context for the preceding slice may be for a slice other than the immediately preceding slice of the subsequent slice being evaluated, such that if the immediately preceding slice is lost in transmission (or if the immediately preceding slice and the subsequent slice are being evaluated in parallel) the subsequent slice's entropy context can still be determined without depending on the immediately preceding slice. A decoder is configured to reconstruct a point cloud based on a bit stream encoded by the encoder.
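The slice-context policy described above can be sketched schematically: each slice starts either with a re-set entropy context or with a context snapshot saved for an earlier slice, which need not be the immediately preceding one, so losing one slice in transmission does not block the next. The `EntropyContext` class and all names here are illustrative assumptions, not the codec's real state.

```python
import copy

class EntropyContext:
    """Toy adaptive context: symbol statistics that evolve as we code."""
    def __init__(self):
        self.counts = {}

    def code(self, symbol):
        self.counts[symbol] = self.counts.get(symbol, 0) + 1

saved = {}  # slice id -> context snapshot saved at the end of that slice

def start_slice(slice_id, restore_from=None):
    """Begin a slice with a fresh context, or restore the context saved
    for an earlier slice if one was requested and is available."""
    if restore_from is not None and restore_from in saved:
        return copy.deepcopy(saved[restore_from])
    return EntropyContext()                       # re-set context

def end_slice(slice_id, ctx):
    saved[slice_id] = copy.deepcopy(ctx)
```

Here slice 2 restores slice 0's context, so it decodes correctly even if slice 1 is lost or is being processed in parallel.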
Abstract: A method including rendering graphics for an application using graphics processing units (GPUs). Responsibility for rendering geometry is divided between the GPUs based on screen regions, with each GPU's division of responsibility known. First pieces of geometry are rendered at the GPUs during a rendering phase of a previous image frame. Statistics are generated for the rendering of the previous image frame. Second pieces of geometry of a current image frame are assigned to the GPUs for geometry testing based on the statistics. Geometry testing is performed on the second pieces of geometry at the current image frame to generate information regarding each piece of geometry and its relation to each screen region, with the testing performed at each of the GPUs based on the assignment. The information generated for the second pieces of geometry is used when rendering the geometry at the GPUs.
July 29, 2021
Date of Patent: March 14, 2023
Sony Interactive Entertainment Inc.
Mark E. Cerny, Florian Strauss, Tobias Berghoff
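The statistics-driven assignment in the abstract above can be sketched with a greedy rebalance: per-region costs measured while rendering the previous frame decide which GPU handles each screen region next frame. The greedy heuristic and all names are illustrative assumptions standing in for the actual scheme.

```python
def assign_regions(region_costs, num_gpus):
    """Greedily assign screen regions to GPUs so that the measured cost
    (statistics from the previous frame) is balanced across GPUs."""
    loads = [0.0] * num_gpus
    assignment = {}
    # Heaviest regions first, each going to the least-loaded GPU so far.
    for region, cost in sorted(region_costs.items(), key=lambda kv: -kv[1]):
        gpu = min(range(num_gpus), key=lambda g: loads[g])
        assignment[region] = gpu
        loads[gpu] += cost
    return assignment, loads
```

With costs 8, 6, 4, and 2 split across two GPUs, the greedy pass pairs the heaviest region with the two lightest, leaving both GPUs with equal load.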
Abstract: A view of a virtual environment that includes a body of fluid is rendered by a method that includes dividing the body of fluid into a plurality of tiles having consistent size and shape and generating a distribution of waves for the plurality of tiles. A reactive region overlaying at least some of the plurality of tiles is defined and an object within the reactive region is identified. The method further includes determining an influence of the object on fluid within the reactive region, simulating motion of the fluid in the reactive region using the determined influence of the object, and rendering a frame of a video sequence including the view of the virtual environment, the view including a visual representation of at least a portion of the body of fluid.
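The tiling and reactive-region steps described above can be sketched geometrically: the body of fluid is divided into uniform square tiles, and the tiles overlapped by a circular reactive region around an object are the ones whose fluid would be simulated with the object's influence. The square tile shape, circular region, and names are illustrative assumptions.

```python
import math

def fluid_tiles(width, height, tile):
    """(column, row) indices of uniform square tiles covering the body."""
    return [(i, j) for j in range(math.ceil(height / tile))
                   for i in range(math.ceil(width / tile))]

def reactive_tiles(tiles, tile, center, radius):
    """Tiles whose square intersects the circular reactive region."""
    cx, cy = center
    out = []
    for i, j in tiles:
        # Closest point on the tile's square to the circle center.
        nx = min(max(cx, i * tile), (i + 1) * tile)
        ny = min(max(cy, j * tile), (j + 1) * tile)
        if (nx - cx) ** 2 + (ny - cy) ** 2 <= radius ** 2:
            out.append((i, j))
    return out
```

Only the overlapped tiles need the more expensive object-influenced wave simulation; the rest keep the precomputed wave distribution.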