Abstract: The disclosed method encompasses pre-registering an anatomical body part with a coordinate system used by an augmented reality device (such as augmented reality glasses) for outputting (e.g. displaying or projecting) augmentation information. An example of the augmentation information is the position (in the real image captured by the augmented reality device) of a fine registration area on the anatomical body part which a user is supposed to identify for fine registration of the anatomical body part with a tracking coordinate system used by a medical position tracking system. The disclosed method is usable in a medical environment such as for surgery or radiotherapy.
Type:
Grant
Filed:
March 10, 2017
Date of Patent:
October 5, 2021
Assignee:
BRAINLAB AG
Inventors:
Nils Frielinghaus, Christoffer Hamilton
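As an illustration of the kind of pre-registration this abstract describes, the sketch below composes a rigid (rotation + translation) transform that maps a point on the anatomical body part, such as a fine registration area, from the anatomy's coordinate system into the coordinate system used by the augmented reality device. This is a minimal 2-D stand-in, not the patented method; the function names and the specific transform are illustrative assumptions.

```python
import math

def rigid_transform_2d(theta, tx, ty):
    """Return a function mapping anatomy coordinates to AR display
    coordinates via a rigid (rotation + translation) pre-registration."""
    c, s = math.cos(theta), math.sin(theta)
    def apply(p):
        x, y = p
        return (c * x - s * y + tx, s * x + c * y + ty)
    return apply

# Hypothetical pre-registration: anatomy frame rotated 90 degrees and
# shifted 10 units into the AR device's frame.
to_ar = rigid_transform_2d(math.pi / 2, 10.0, 0.0)

# Position of a fine registration area, expressed in anatomy coordinates.
fine_area_anatomy = (1.0, 0.0)

# Where the AR glasses would draw the marker for the user to identify.
fine_area_ar = to_ar(fine_area_anatomy)
```

Once such a pre-registration is known, the device can overlay the fine registration area in the real image, guiding the user toward the spot needed for fine registration against the tracking coordinate system.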
Abstract: An interactive illustration system, interactive animation system, and methods of use are presented. The present disclosure reveals an illustration, narrative, and/or geometric figure in successive layers, so that a user can be surprised by the coloring and engaged by the stimulus of that surprise. The present disclosure relates to an interactive coloring system and animation system having gamification elements and interactive graphical features. More specifically, and without limitation, the present disclosure relates to interactivity and gamification in the coloring game genre.
Abstract: A three-dimensional model creator creates individual models indicative of individual shapes of at least two objects from an integrated model created based on data obtained by imaging or measuring the objects together. The three-dimensional model creator creates a plurality of division models by dividing the integrated model with extension planes obtained by extending surfaces that define the integrated model, identifies two-dimensional regions in which the objects exist individually, based on the obtained data, tags the division models based on projections of the division models and the two-dimensional regions, and creates the individual models of the objects from the tagged division models.
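The tagging step this abstract describes, assigning division models to objects by comparing their projections against per-object 2-D regions, can be sketched with axis-aligned bounding boxes. This is a simplified stand-in for the patented pipeline; the box representation and the "largest overlap wins" rule are illustrative assumptions.

```python
def overlap_area(a, b):
    """Axis-aligned overlap area of boxes given as (xmin, ymin, xmax, ymax)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def tag_divisions(division_projections, object_regions):
    """Assign each division model to the object whose 2-D region overlaps
    the division's projection the most; no overlap leaves it untagged."""
    tags = {}
    for piece, proj in division_projections.items():
        best, best_area = None, 0
        for obj, region in object_regions.items():
            a = overlap_area(proj, region)
            if a > best_area:
                best, best_area = obj, a
        tags[piece] = best
    return tags

# Hypothetical 2-D regions for two imaged objects, and projections of
# three division models cut from the integrated model.
regions = {"boxA": (0, 0, 5, 5), "boxB": (6, 0, 10, 5)}
pieces = {"p1": (1, 1, 4, 4), "p2": (7, 1, 9, 4), "p3": (20, 20, 21, 21)}
tags = tag_divisions(pieces, regions)
```

Tagged division models belonging to the same object would then be merged to form that object's individual model.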
Abstract: An image processing method and device are disclosed. The method is applicable to an image processing device having an operating system, and includes: receiving, by an image processing module of the operating system, an instruction from a first application program calling the image processing module, where the instruction carries a to-be-displayed image and the resolution of that image; and, when the resolution of the to-be-displayed image is less than a first threshold, performing, by the image processing module, super-resolution processing on the image and displaying the resulting image, thereby addressing the low image definition of conventional display methods.
Abstract: Graphics processing systems can include lighting effects when rendering images. “Light probes” are directional representations of lighting at particular probe positions in the space of a scene which is being rendered. Light probes can be determined iteratively, which can allow them to be determined dynamically, in real-time over a sequence of frames. Once the light probes have been determined for a frame then the lighting at a pixel can be determined based on the lighting at the nearby light probe positions. Pixels can then be shaded based on the lighting determined for the pixel positions.
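Two pieces of this abstract lend themselves to a small sketch: the iterative, per-frame refinement of a probe's lighting, and the blending of nearby probe values to light a pixel. The one-dimensional positions, the exponential-moving-average update, and the inverse-distance weighting are simplifying assumptions, not the patented formulas.

```python
def update_probe(current, target, rate=0.5):
    """Iteratively refine a probe's lighting toward the newly sampled
    value, so probes converge dynamically over a sequence of frames."""
    return current + rate * (target - current)

def lighting_at(pixel, probes):
    """Inverse-distance-weighted blend of nearby probe lighting.
    probes: list of (position, lighting) pairs; positions are 1-D here."""
    num = den = 0.0
    for pos, light in probes:
        d = abs(pixel - pos)
        if d == 0:
            return light
        w = 1.0 / d
        num += w * light
        den += w
    return num / den

# A probe converging over 20 frames toward a sampled lighting value of 4.0.
probe = 0.0
for _ in range(20):
    probe = update_probe(probe, 4.0)

# Lighting at a pixel halfway between two probes with values 1.0 and 3.0.
probes = [(0.0, 1.0), (10.0, 3.0)]
mid = lighting_at(5.0, probes)
```

Shading then proceeds per pixel from the interpolated lighting, which is why the probes only need to be stored at sparse positions in the scene.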
Abstract: A user may create an avatar and/or animated sequence illustrating a particular object or living being performing a certain activity, using images of portions of the object or living being extracted from a still image or set of still images of the object or living being.
Abstract: Improved techniques for generating a depth map are disclosed herein. Initially, a stereo pair of images comprising a first and second image are obtained. Both an overlap region and a non-overlap region are identified as between these two images. A depth map is generated based on the stereo pair of images. Generating this depth map is performed by determining, for the overlap region, depths for a portion of an environment represented by the overlap region via stereo matching. The generation process is also performed by determining, for the non-overlap region, depths for a portion of the environment represented by the non-overlap region by acquiring depth information from a source different from the stereo pair of images.
Type:
Grant
Filed:
May 12, 2020
Date of Patent:
September 21, 2021
Assignee:
Microsoft Technology Licensing, LLC
Inventors:
Christopher Douglas Edmonds, Michael Bleyer, Raymond Kirk Price
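The region split this abstract claims, stereo matching inside the overlap, an alternate depth source outside it, can be sketched per image column. The callables standing in for stereo matching and for the alternate depth source are illustrative placeholders; the abstract names no particular algorithms.

```python
def build_depth_map(width, overlap_range, stereo_depth, fallback_depth):
    """Per-column depth map: stereo matching inside the overlap region
    of the stereo pair, an alternate source (e.g. a prior depth map or
    separate sensor) in the non-overlap region."""
    lo, hi = overlap_range
    return [stereo_depth(x) if lo <= x < hi else fallback_depth(x)
            for x in range(width)]

depths = build_depth_map(
    width=6,
    overlap_range=(2, 5),
    stereo_depth=lambda x: 1.0,    # stand-in for stereo matching
    fallback_depth=lambda x: 9.0,  # stand-in for the alternate depth source
)
```

The practical benefit is a depth map that covers the full field of view, rather than only the region both cameras see.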
Abstract: According to at least one aspect, a system is provided. The system comprises at least one hardware processor; and at least one non-transitory computer-readable storage medium storing processor executable instructions that, when executed by the at least one hardware processor, cause the at least one hardware processor to perform: generating a 3-dimensional (3D) model of an object at least in part by analyzing a first plurality of images of the object captured using a first scanning device; generating a texture model of a texture of a material at least in part by analyzing a second plurality of images of the material captured using a second scanning device different from the first scanning device, the material being separate and distinct from the object; and applying the texture model to the 3D model to generate a textured 3D model of the object.
Type:
Grant
Filed:
November 13, 2019
Date of Patent:
September 21, 2021
Assignee:
Wayfair LLC
Inventors:
Michael Silvio Festa, Rebecca W. Perry, Patrick G. Clark
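One way to picture applying a separately scanned material to a 3-D model, as this abstract describes, is tiling a small texture swatch across the model's surface via UV coordinates. The modulo-based tiling below is an illustrative assumption; the abstract does not state how the texture model is parameterized.

```python
def sample_tiled(texture, u, v):
    """Sample a small scanned material swatch as a repeating (tiled)
    texture at normalized UV coordinates (u, v >= 0)."""
    h, w = len(texture), len(texture[0])
    return texture[int(v * h) % h][int(u * w) % w]

# Hypothetical 2x2 swatch of texel values scanned from the material.
swatch = [[10, 20],
          [30, 40]]

texel = sample_tiled(swatch, 0.75, 0.25)        # inside the first tile
wrapped = sample_tiled(swatch, 1.25, 1.75)      # wraps into a repeat
```

Because the material is scanned separately from the object, the same swatch can be reapplied to any 3-D model without rescanning the object itself.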
Abstract: In certain embodiments, one or more images of an object may be received from a device associated with a first account on a communications network. Features of the object may be extracted based on the one or more images, and one or more content items related to the object may be determined based on the features. A hashtag associated with at least one of the features may be determined. A second account connected to the first account may be selected where the second account previously performed a search for the hashtag on the communications network, and at least one of the one or more content items may be provided to the second account.
Abstract: Examples described here enable a main routine to request that subroutines or other related code be executed together with other instantiations of the same subroutine or related code for parallel execution. A sorting unit can be used to accumulate requests to execute instantiations of the subroutine, and can request execution of a number of instantiations corresponding to the number of lanes in a SIMD unit. A call stack can be used to share information to be accessed by the main routine after execution of the subroutine completes.
Type:
Grant
Filed:
November 13, 2018
Date of Patent:
August 31, 2021
Assignee:
Intel Corporation
Inventors:
John G. Gierach, Karthik Vaidyanathan, Thomas F. Raoux
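The accumulate-then-dispatch behavior of the sorting unit can be sketched as a simple batcher: requests pile up until a full SIMD-width group exists, at which point a batch is issued. This models only the grouping logic, not the hardware, call stack, or scheduling details; the class and field names are illustrative.

```python
class SortingUnit:
    """Accumulate requests to run instantiations of the same subroutine,
    dispatching a batch whenever a full SIMD-width group is available."""
    def __init__(self, simd_width):
        self.simd_width = simd_width
        self.pending = []   # requests not yet forming a full group
        self.batches = []   # groups dispatched for parallel execution
    def request(self, args):
        self.pending.append(args)
        if len(self.pending) == self.simd_width:
            self.batches.append(self.pending)
            self.pending = []

unit = SortingUnit(simd_width=4)
for i in range(10):
    unit.request(i)
# Two full 4-wide batches dispatched; two requests still waiting.
```

Grouping instantiations this way keeps every SIMD lane busy with the same subroutine, which is the point of accumulating requests rather than executing each call as it arrives.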
Abstract: A display system includes a recognizer configured to recognize the situation surrounding a vehicle, a display configured to display images, and a display controller configured to cause the display to show an image representing the road shape around the vehicle as recognized by the recognizer. When a lane in which the vehicle can travel in the same direction as its current travel direction is added, the display controller first causes the display to show an image about the added lane outside the image representing the road shape, on the side on which the lane is added, before causing the display to show an image representing the shape of the added lane.
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for calculating graphics resources for a virtual machine. One of the methods includes determining resources available on a graphics card device included in a computer executing a plurality of virtual machines, each virtual machine configured to execute a virtual desktop; determining, based on data received from a hypervisor that manages execution of at least one of the plurality of virtual machines, a graphics profile for a virtual machine included in the plurality of virtual machines executing on the computer; determining a portion of the available resources on the graphics card device allocated to the virtual machine using the graphics profile; and computing an amount of resources on the graphics card device consumed by a virtual desktop of the virtual machine based on the portion of the available resources on the graphics card device allocated to the virtual machine.
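The final computation in this abstract, deriving a virtual desktop's graphics consumption from the card slice the profile allocates to its VM, can be sketched arithmetically. The utilization factor below is an illustrative assumption; the abstract says only that consumption is computed based on the allocated portion.

```python
def graphics_consumption(card_total, profile_share, desktop_utilization):
    """Resources consumed by a virtual desktop: the graphics-card slice
    the profile allocates to the VM, scaled by how much of that slice
    the desktop actually uses (an assumed model)."""
    allocated = card_total * profile_share  # portion from the graphics profile
    return allocated * desktop_utilization

# E.g. an 8 GiB card, a profile granting the VM a quarter of it, and a
# desktop using half of its allocation.
used = graphics_consumption(8192, 0.25, 0.5)
```

Accounting per desktop this way lets a hypervisor pack virtual machines onto a card without oversubscribing its graphics resources.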
Abstract: In some implementations, a computing device can simulate a virtual parallax to create three dimensional effects. For example, the computing device can obtain an image captured at a particular location. The captured two-dimensional image can be applied as texture to a three-dimensional model of the capture location. To give the two-dimensional image a three-dimensional look and feel, the computing device can simulate moving the camera used to capture the two-dimensional image to different locations around the image capture location to generate different perspectives of the textured three-dimensional model as if captured by multiple different cameras. Thus, a virtual parallax can be introduced into the generated imagery for the capture location. When presented to the user on a display of the computing device, the generated imagery may have a three-dimensional look and feel even though generated from a single two-dimensional image.
Type:
Grant
Filed:
May 28, 2020
Date of Patent:
August 24, 2021
Assignee:
Apple Inc.
Inventors:
Gunnar Martin Byrod, Jan H. Bockert, Johan V. Hedberg, Ross W. Anderson
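The camera-motion simulation this abstract describes, rendering the textured 3-D model from viewpoints shifted around the capture location, can be sketched by generating an orbit of virtual camera positions. The circular orbit and 2-D positions are illustrative assumptions; the abstract does not specify the camera path.

```python
import math

def orbit_positions(center, radius, n):
    """Virtual camera positions circling the capture location, each used
    to render the textured 3-D model from a slightly shifted viewpoint."""
    cx, cy = center
    return [(cx + radius * math.cos(2 * math.pi * k / n),
             cy + radius * math.sin(2 * math.pi * k / n))
            for k in range(n)]

# Four virtual cameras on a unit circle around the capture location.
views = orbit_positions((0.0, 0.0), 1.0, 4)
```

Rendering the same textured model from each of these positions yields slightly different perspectives, and cycling through them on screen produces the parallax that makes a single 2-D photograph read as three-dimensional.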
Abstract: A pose of an eye of a user is determined by providing a parameterized 3D model of the eye, said model including a set of parameters which have been calibrated, acquiring (step S11) at least one tracking image of the eye, identifying (step S12) a plurality of characteristic features in the acquired tracking image, fitting (step S13) said characteristic features with corresponding features of an optical projection of the calibrated 3D model, thereby forming a set of equations, and numerically solving (step S14) the set of equations to determine the pose of the eye.
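The fit-and-solve loop of steps S13 and S14 can be sketched for a single pose parameter: minimize the squared error between tracked feature positions and the calibrated model's projection of them. The linear projection model and the ternary-search solver are illustrative stand-ins for the patent's unspecified equations and numerical method.

```python
def fit_pose(observed, project, lo=-1.0, hi=1.0, steps=60):
    """Fit a 1-D pose parameter by minimizing the squared error between
    observed tracking-image features and the model's optical projection,
    using ternary search over the (convex) cost."""
    def cost(p):
        return sum((o - m) ** 2 for o, m in zip(observed, project(p)))
    for _ in range(steps):
        a = lo + (hi - lo) / 3
        b = hi - (hi - lo) / 3
        if cost(a) < cost(b):
            hi = b
        else:
            lo = a
    return (lo + hi) / 2

# Assumed calibrated model: feature positions shift linearly with pose.
base = [0.0, 1.0, 2.0]
project = lambda p: [f + 0.5 * p for f in base]

observed = [0.2, 1.2, 2.2]       # features identified in the tracking image
pose = fit_pose(observed, project)
```

With a previously calibrated model, each new tracking image only requires identifying the characteristic features and re-running this solve, which is what makes per-frame eye-pose tracking feasible.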
Abstract: In certain embodiments, vision defect information may be generated via a dynamic eye-characteristic-based fixation point. In some embodiments, a first stimulus may be displayed at a first location on a user interface based on a fixation point for a visual test presentation. The fixation point for the visual test presentation may be adjusted during the visual test presentation based on eye characteristic information related to a user. As an example, the eye characteristic information may indicate a characteristic of an eye of the user that occurred during the visual test presentation. A second stimulus may be displayed during the visual test presentation at a second interface location on the user interface based on the adjusted fixation point for the visual test presentation. Vision defect information associated with the user may be generated based on feedback information indicating feedback related to the first stimulus and feedback related to the second stimulus.
Abstract: According to embodiments, an image drawing apparatus includes: an SRAM; and a transaction conversion unit configured to convert a transaction based on a virtual address indicating a pixel position in a storage area of the SRAM into a transaction based on a physical address in the SRAM. When the storage area is divided into a plurality of windows in a row direction and a column direction so that each window includes one or more lines, and an assigned area which is assigned the physical address in the SRAM is set in each of the windows, the transaction conversion unit converts the transaction based on the virtual address into the transaction based on the physical address based on whether the pixel position indicated by the virtual address is in the assigned area.
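The window-based translation this abstract claims can be sketched directly: locate the pixel's window, reject the access if that window has no assigned area, and otherwise offset from the window's physical base. The window geometry, base-address table, and byte layout below are illustrative assumptions.

```python
def translate(x, y, window_w, window_h, assigned, line_bytes):
    """Convert a virtual pixel position (x, y) into a physical SRAM
    address when its window has an assigned area; return None when the
    position falls outside any assigned area."""
    win = (x // window_w, y // window_h)
    if win not in assigned:
        return None
    base = assigned[win]
    local_x = x % window_w
    local_y = y % window_h
    return base + local_y * line_bytes + local_x

# Two 4x4-pixel windows with assigned physical base addresses.
assigned = {(0, 0): 0x000, (1, 0): 0x100}
addr = translate(5, 0, 4, 4, assigned, line_bytes=16)  # in window (1, 0)
```

Because only assigned areas receive physical addresses, windows the drawing never touches consume no SRAM, which is the benefit of translating through this indirection.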
Abstract: A system and method include generating, by a map labeling control unit including a processor, label candidates for areas of an airport map, evaluating, by the map labeling control unit, the label candidates to determine positions of labels for the areas of the airport map, and determining, by the map labeling control unit, different sets of labels to display for different views of the airport map.
Abstract: Systems and techniques for processing and/or transmitting three-dimensional (3D) data are presented. A partitioning component receives captured 3D data associated with a 3D model of an interior environment and partitions the captured 3D data into at least one data chunk associated with at least a first level of detail and a second level of detail. A data component stores 3D data including at least the first level of detail and the second level of detail for the at least one data chunk. An output component transmits a portion of data from the at least one data chunk that is associated with the first level of detail or the second level of detail to a remote client device based on information associated with the first level of detail and the second level of detail.
Type:
Grant
Filed:
March 10, 2020
Date of Patent:
August 17, 2021
Assignee:
Matterport, Inc.
Inventors:
Matthew Tschudy Bell, David Alan Gausebeck, Gregory William Coombe, Daniel Ford
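The level-of-detail selection this abstract describes, choosing which detail level of each data chunk to transmit to the client, can be sketched with a distance rule: nearby chunks get the finer level, distant ones the coarser. The 1-D positions and distance threshold are illustrative assumptions; the patent leaves the selection criterion general.

```python
def select_lod(chunks, viewer, near=10.0):
    """Pick which level of detail to transmit for each chunk of the 3-D
    model: the finer level for chunks near the viewer, coarser otherwise."""
    plan = {}
    for name, pos in chunks.items():
        d = abs(pos - viewer)
        plan[name] = "fine" if d <= near else "coarse"
    return plan

# Two chunks of an interior-environment model, at 1-D positions.
plan = select_lod({"hall": 5.0, "attic": 40.0}, viewer=0.0)
```

Streaming per-chunk levels this way lets a remote client render the parts of the interior it is looking at in full detail without downloading the whole model at that fidelity.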
Abstract: Examples of the disclosure describe systems and methods for sharing perspective views of virtual content. In an example method, a virtual object is presented, via a display, to a first user. A first perspective view, based on a position of the virtual object and a first position of the first user, of the virtual object is determined. The virtual object is presented, via a display, to a second user, wherein the virtual object is presented to the second user according to the first perspective view. An input is received from the first user. A second perspective view, based on the input from the first user, of the virtual object is determined. The virtual object is presented, via a display, to the second user, wherein presenting the virtual object to the second user comprises presenting a transition from the first perspective view to the second perspective view.
Abstract: A support method, system, and computer program product include: identifying a repair or maintenance task; determining the expertise level of the local engineer; receiving a procedure for performing the task, the procedure including a series of steps, instructional information associated with each step, and technical information associated with the task; filtering, based on the engineer's expertise level, the instructional and technical information in the procedure to exclude information the engineer already knows; and presenting, via an augmented reality device, the first step of the series together with the associated filtered information.
Type:
Grant
Filed:
October 23, 2018
Date of Patent:
August 17, 2021
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors:
Jefferson Tan, Hidemasa Muta, Bruno de Assis Marques, Sengor Kusturica
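The expertise-based filtering in this abstract can be sketched by attaching to each instructional detail the expertise level below which it is still needed, then dropping details the engineer's level already covers. The level encoding and example details are illustrative assumptions.

```python
def filter_steps(steps, expertise):
    """Drop instructional details the engineer already knows. Each detail
    is (text, needed_below): it is shown only to engineers whose
    expertise level is below that threshold (an assumed encoding)."""
    out = []
    for step, details in steps:
        kept = [text for text, needed_below in details
                if expertise < needed_below]
        out.append((step, kept))
    return out

# A one-step procedure: the screwdriver note is for novices (below level 2),
# the location diagram helps anyone below level 3.
procedure = [("Open service panel",
              [("use a T10 screwdriver", 2),
               ("panel location diagram", 3)])]
filtered = filter_steps(procedure, expertise=2)
```

The augmented reality device would then present each step with only its surviving details, so experienced engineers are not slowed by instructions they do not need.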