Patents Issued on November 2, 2021
-
Patent number: 11164339
Abstract: A method, system and computer readable instructions for video encoding comprising determining one or more region of interest (ROI) parameters for pictures in a picture stream and a temporal down-sampling interval. One or more areas outside the ROI in a picture in the picture stream are temporally down-sampled according to the interval. The resulting temporally down-sampled picture is then encoded and the encoded temporally down-sampled picture is transmitted. Additionally, a picture encoded in this way in an encoded picture stream may be decoded and areas outside an ROI of the picture may be temporally up-sampled. The temporally up-sampled areas outside the ROI are inserted into the decoded encoded picture stream.
Type: Grant
Filed: December 19, 2019
Date of Patent: November 2, 2021
Assignee: Sony Interactive Entertainment Inc.
Inventors: Rathish Krishnan, Jason N. Wang
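The following is a minimal Python sketch (not the patented implementation) of the core idea above: pixels outside a region of interest are refreshed only every N frames, while ROI pixels are refreshed every frame before encoding. The frame size, the `roi` rectangle and the `interval` value are illustrative assumptions.

```python
import numpy as np

def temporally_downsample(frames, roi, interval):
    """Refresh pixels outside `roi` only every `interval` frames.

    frames:   iterable of H x W x C uint8 arrays
    roi:      (y0, y1, x0, x1) rectangle kept at full temporal rate
    interval: temporal down-sampling interval for non-ROI areas
    """
    y0, y1, x0, x1 = roi
    held = None  # last fully refreshed background
    out = []
    for i, frame in enumerate(frames):
        if held is None or i % interval == 0:
            held = frame.copy()                        # refresh background on the interval
        mixed = held.copy()
        mixed[y0:y1, x0:x1] = frame[y0:y1, x0:x1]      # ROI is always up to date
        out.append(mixed)
    return out

# Example: 30 synthetic frames, ROI in the centre, background held for 4 frames.
frames = [np.random.randint(0, 255, (240, 320, 3), np.uint8) for _ in range(30)]
mixed = temporally_downsample(frames, roi=(80, 160, 100, 220), interval=4)
```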
-
Patent number: 11164340
Abstract: An artificial intelligence (AI) decoding method including obtaining image data generated from performing first encoding on a first image and AI data related to AI down-scaling of at least one original image related to the first image; obtaining a second image corresponding to the first image by performing first decoding on the image data; obtaining, based on the AI data, deep neural network (DNN) setting information for performing AI up-scaling of the second image; and generating a third image by performing the AI up-scaling on the second image via an up-scaling DNN operating according to the obtained DNN setting information. The DNN setting information is DNN information updated for performing the AI up-scaling of at least one second image via joint training of the up-scaling DNN and a down-scaling DNN used for the AI down-scaling.
Type: Grant
Filed: February 19, 2021
Date of Patent: November 2, 2021
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Jongseok Lee, Jaehwan Kim, Youngo Park
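A hedged sketch of the decode-then-AI-upscale flow described above. The settings table, its keys, and the `decode`/`build_upscaler` callables are hypothetical stand-ins for the patent's image decoder and jointly trained up-scaling DNN.

```python
# Hypothetical lookup of jointly trained up-scaler settings, keyed by the AI data
# that accompanies the bitstream (the scale / bitrate-bucket keys are assumptions).
DNN_SETTINGS = {
    ("x2", "low_bitrate"):  {"layers": 8,  "channels": 32, "weights": "up_x2_low.pt"},
    ("x2", "high_bitrate"): {"layers": 8,  "channels": 48, "weights": "up_x2_high.pt"},
    ("x4", "low_bitrate"):  {"layers": 16, "channels": 64, "weights": "up_x4_low.pt"},
}

def ai_decode(image_data, ai_data, decode, build_upscaler):
    """First decoding, then AI up-scaling with settings chosen from `ai_data`."""
    second_image = decode(image_data)                 # "second image" from first decoding
    key = (ai_data["scale"], ai_data["bitrate_bucket"])
    settings = DNN_SETTINGS[key]                      # DNN setting information
    upscaler = build_upscaler(**settings)             # up-scaling DNN
    third_image = upscaler(second_image)              # "third image"
    return third_image
```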
-
Patent number: 11164341
Abstract: The exemplary embodiments disclose a method, a computer program product, and a computer system for identifying one or more objects of interest in augmented reality. The exemplary embodiments may include detecting one or more cues selected from a group comprising one or more audio cues and one or more visual cues, identifying one or more objects of interest based on the detected one or more cues and a model, and emphasizing the one or more objects of interest within an augmented reality.
Type: Grant
Filed: August 29, 2019
Date of Patent: November 2, 2021
Assignee: International Business Machines Corporation
Inventors: H. Ramsey Bissex, Ernest Bernard Williams, Jr., Zachary James Goodman, Jeremy R. Fox
-
Patent number: 11164342
Abstract: Methods and devices for generating hardware compatible compressed textures may include accessing, at runtime of an application program, graphics hardware incompatible compressed textures in a format incompatible with a graphics processing unit (GPU). The methods and devices may include converting the graphics hardware incompatible compressed textures directly into hardware compatible compressed textures usable by the GPU using a trained machine learning model.
Type: Grant
Filed: December 2, 2019
Date of Patent: November 2, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Martin Jon Irwin Fuller, Daniel Gilbert Kennett
-
Patent number: 11164343
Abstract: Techniques are disclosed for populating a region of an image with a plurality of brush strokes. For instance, the image is displayed, with the region of the image bounded by a boundary. A user input is received that is indicative of a user-defined brush stroke within the region. One or more synthesized brush strokes are generated within the region, based on the user-defined brush stroke. In some examples, the one or more synthesized brush strokes fill at least a part of the region of the image. The image is displayed, along with the user-defined brush stroke and the one or more synthesized brush strokes within the region of the image.
Type: Grant
Filed: October 10, 2020
Date of Patent: November 2, 2021
Assignee: Adobe Inc.
Inventors: Vineet Batra, Praveen Kumar Dhanuka, Nathan Carr, Ankit Phogat
-
Patent number: 11164344
Abstract: A system and method include execution of a first scan to acquire a first PET dataset, back-projection of the first PET dataset to generate a first histo-image, input of the first histo-image to a trained neural network, and reception of a first output image from the trained neural network.
Type: Grant
Filed: October 3, 2019
Date of Patent: November 2, 2021
Assignee: Siemens Medical Solutions USA, Inc.
Inventors: William Whiteley, Vladimir Y. Panin, Deepak Bharkhada
-
Patent number: 11164345
Abstract: A method for generating an attenuation map is disclosed. The method includes acquiring an anatomic image and PET data indicative of a subject, wherein the anatomic image comprises a plurality of voxels. The method also includes fetching a reference image to register the anatomic image, wherein the reference image includes voxel segmentation information. The method further includes segmenting the anatomic image into a plurality of regions based on the voxel segmentation information. The method further includes generating a first attenuation map corresponding to the anatomic image by assigning attenuation coefficients to the plurality of regions. The method further includes calculating a registration accuracy between the anatomic image and the reference image. The method further includes determining a probability distribution of attenuation coefficient.
Type: Grant
Filed: June 3, 2019
Date of Patent: November 2, 2021
Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
Inventors: Tao Feng, Wentao Zhu, Hongdi Li, Miaofei Han
-
Patent number: 11164346
Abstract: Image reconstruction can include using a statistical or machine learning, MAP estimator, or other reconstruction technique to produce a reconstructed image from acquired imaging data. A Conditional Generative Adversarial Network (CGAN) technique can be used to train a Generator, using a Discriminator, to generate posterior distribution sampled images that can be displayed or further processed such as to help provide uncertainty information about a mean reconstruction image. Such uncertainty information can be useful to help understand or even visually modify the mean reconstruction image. Similar techniques can be used in a segmentation use case, instead of a reconstruction use case. The uncertainty information can also be useful for other post-processing techniques.
Type: Grant
Filed: May 29, 2020
Date of Patent: November 2, 2021
Assignee: Elekta AB (publ)
Inventors: Jonas Anders Adler, Ozan Öktem
-
Patent number: 11164347
Abstract: An information processing apparatus according to one aspect of the present technology acquires a viewing log of a content item including at least one of a moving image or a sound, acquires display data representing details of the content item at each time, and displays a chart representing a transition of a viewing state of the content item specified on the basis of the viewing log, and the display data representing details of the content item at each time in a time zone for which the transition of the viewing state is represented by the chart, on a screen that is one and the same. The present technology can be applied to a system used for monitoring the viewing state of the content item.
Type: Grant
Filed: October 25, 2017
Date of Patent: November 2, 2021
Assignee: Saturn Licensing LLC
Inventors: Akihiko Ito, Tsuyoshi Takagi
-
Patent number: 11164348
Abstract: Systems and methods are provided for performing temporal graph computing. One method may comprise receiving an input temporal graph that has a plurality of edges with each edge connecting from one vertex instance to another vertex instance, generating in-vertices and out-vertices for each vertex instance, merging the in-vertices and out-vertices into hub vertices for each vertex instance and generating a directed acyclic graph (DAG), receiving a minimum path problem, and scanning the DAG once to provide a solution to the minimum path problem. The merging of vertices and generation of the DAG may be performed by sorting all out-vertices using a 2-dimensional radix sort, generating a respective set of hub vertices for each vertex instance, relabeling the in-vertices and the out-vertices to their respective hub vertices for each vertex instance by a parallel binary search and updating edges affected by the relabeling, and assembling relabeled edges and vertices.
Type: Grant
Filed: June 29, 2020
Date of Patent: November 2, 2021
Assignee: TSINGHUA UNIVERSITY
Inventors: Kang Chen, Yongwei Wu, Chengying Huan, Mengxing Liu, Jinlei Jiang
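Once a temporal graph has been flattened into a DAG whose vertices are in time (topological) order, a minimum-path query can be answered in a single non-recursive scan. The sketch below shows only that generic DAG scan, not the patent's hub-vertex construction or radix-sort relabeling.

```python
def dag_min_path(num_vertices, edges, source):
    """Single forward scan over a topologically ordered DAG.

    edges: dict mapping vertex -> list of (neighbor, weight); vertices are
    assumed to be numbered in topological order (0..num_vertices-1), which is
    what a time-ordered construction would provide.
    """
    INF = float("inf")
    dist = [INF] * num_vertices
    dist[source] = 0
    for u in range(num_vertices):          # one scan, no recursion
        if dist[u] == INF:
            continue
        for v, w in edges.get(u, []):
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# Toy example: 0 -> 1 -> 3 and 0 -> 2 -> 3
print(dag_min_path(4, {0: [(1, 1), (2, 4)], 1: [(3, 2)], 2: [(3, 1)]}, 0))
```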
-
Patent number: 11164349
Abstract: Visualization of time series data includes sending to the data source a request for visualizing sensor data within a canvas having a width of w pixels and covering a visualization time range, each pixel of the canvas being representative of a time duration I_opt, receiving consecutive sets of tuples that each covers a time interval having the time duration I_t, performing an M4 aggregation comprising generating, from the received tuples, a set of consecutive w groups, each group of the w groups comprising tuples covering a time interval having the time duration I_opt, and determining for each group of the w groups a set of aggregates, and displaying the w sets of aggregates on the canvas of the browser window as a chart, wherein each one of the sets of aggregates is displayed in one of pixel columns of the canvas.
Type: Grant
Filed: October 22, 2020
Date of Patent: November 2, 2021
Assignee: SAP SE
Inventors: Sven Peterson, Linda Jahn, Janick Frasch, Ralf Schoenfeld, Florian Weigold, Alexandru Radu, Jan Gottweiss, Ralf Vath, Axel Kuhle, Lukas Brinkmann
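A small illustration of M4 aggregation as described above: tuples are bucketed into w pixel-column groups and each group keeps its first, last, minimum and maximum tuple. The tuple format and the handling of empty columns are assumptions.

```python
def m4_aggregate(tuples, t_start, t_end, width_px):
    """Group (timestamp, value) tuples into `width_px` pixel columns and keep
    the first, last, min and max tuple of each column (the M4 aggregates)."""
    span = (t_end - t_start) / width_px            # time duration covered per pixel
    groups = [[] for _ in range(width_px)]
    for t, v in tuples:
        col = min(int((t - t_start) / span), width_px - 1)
        groups[col].append((t, v))
    aggregates = []
    for col in groups:
        if not col:
            aggregates.append(None)                # empty pixel column
            continue
        col.sort()                                 # order by timestamp
        aggregates.append({
            "first": col[0], "last": col[-1],
            "min": min(col, key=lambda p: p[1]),
            "max": max(col, key=lambda p: p[1]),
        })
    return aggregates
```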
-
Patent number: 11164350
Abstract: Systems and methods for creating filtered data using graphical methodology. Stored data relationally-linked by an ontology are representable in rows and columns format. The system receives a first input selecting a first data source, displays a portion of the first data source in a first chart, receives a second input identifying a portion of the first chart, generates a first filter based on the identified portion, receives a third input selecting a linked object set, displays an indicator of the linked object set in a second sidebar, displays a portion of the linked object set in a second chart depicting information of the linked object set filtered by the first filter, receives a fourth input identifying a portion of the second chart, generates a second filter based on the identified portion, and displays fields of the linked object set, filtered by the first and second filter, in a third chart.
Type: Grant
Filed: November 4, 2020
Date of Patent: November 2, 2021
Assignee: PALANTIR TECHNOLOGIES INC.
Inventors: Daniel Cervelli, Timothy Slatcher, Adam Storr
-
Patent number: 11164351
Abstract: System, method, and media for an augmented reality interface for sensor applications. Machines making up a particular production or processing facility are instrumented with one or more sensors for monitoring their operation and status and labeled with machine-readable tags. When viewed by a technician through an augmented reality display, the machine-readable tags can be recognized using a computer-vision system and the associated machines can then be annotated with the relevant sensor and diagnostic data. The sensors may further form a mesh network in communication with a head-mounted display for an augmented reality system, eliminating the need for centralized networking connections.
Type: Grant
Filed: March 2, 2017
Date of Patent: November 2, 2021
Assignee: LP-Research Inc.
Inventors: Zhuohua Lin, Klaus Petersen, Tobias Schlüter, Huei Ee Yap, Scean Monti Mitchell
-
Patent number: 11164352
Abstract: Methods and apparatus relating to techniques for provision of low power foveated rendering to save power on GPU (Graphics Processing Unit) and/or display are described. In various embodiments, brightness/contrast, color intensity, and/or compression ratio applied to pixels in a fovea region are different than those applied in regions surrounding the fovea region. Other embodiments are also disclosed and claimed.
Type: Grant
Filed: April 21, 2017
Date of Patent: November 2, 2021
Assignee: INTEL CORPORATION
Inventors: Prasoonkumar Surti, Wenyin Fu, Nikos Kaburlasos, Jacek Kwiatkowski, Travis T. Schluessler, John H. Feit, Joydeep Ray
-
Patent number: 11164353
Abstract: Systems and methods described herein provide for retrieving, from a storage device, first image data previously captured by a client device. The systems and methods further detect a selection of a first image processing operation and perform the first image processing operation on the first image data to generate second image data. The systems and methods further detect a selection of a second image processing operation and perform the second image processing operation on the second image data to generate third image data. The systems and methods generate a message comprising the third image data.
Type: Grant
Filed: December 31, 2019
Date of Patent: November 2, 2021
Assignee: Snap Inc.
Inventors: Jean Luo, Oleksandr Grytsiuk, Chenguang Liu, Oleksii Gordiienko
-
Patent number: 11164354
Abstract: An image signal is received from a camera and a signal is received from each of a number of sensors. For each sensor, a position of a signal source is estimated based on the signal from the sensor. For each sensor, information on a situation expression expressing a situation is extracted based on the signal from the sensor. The situation expression for each sensor is superimposed on a captured image by the camera and the captured image superimposed with the situation expression for each sensor is output. In a case in which the sensors are located at different positions and the situation expressions for the sensors overlap at least partially on the captured image, an overlapping order of the situation expressions for the sensors superimposed on the captured image is determined based on a distance between the camera and each sensor.
Type: Grant
Filed: February 3, 2020
Date of Patent: November 2, 2021
Assignee: NEC CORPORATION
Inventor: Masumi Ishikawa
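A toy sketch of one way to derive the overlapping order from camera-to-sensor distances, as the abstract describes: expressions are drawn far-to-near so the nearest sensor's annotation ends up on top. The data layout is hypothetical.

```python
import math

def overlap_order(camera_pos, expressions):
    """Return situation expressions sorted far-to-near relative to the camera,
    so drawing them in this order leaves the nearest sensor's text on top.

    expressions: list of dicts with 'sensor_pos' (x, y, z) and 'text'.
    """
    def dist(e):
        return math.dist(camera_pos, e["sensor_pos"])
    return sorted(expressions, key=dist, reverse=True)

layers = overlap_order((0, 0, 0), [
    {"sensor_pos": (2, 0, 5), "text": "door slammed"},
    {"sensor_pos": (1, 0, 2), "text": "glass breaking"},
])
# Draw layers[0] first; layers[1] (the nearer sensor) is drawn last, so it ends up on top.
```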
-
Patent number: 11164355
Abstract: Systems and methods for editing an image based on multiple constraints are described. Embodiments of the systems and methods may identify a change to a vector graphics data structure, generate an update for the vector graphics data structure based on strictly enforcing a handle constraint, a binding constraint, and a continuity constraint, adjust the vector graphics data structure sequentially for each of a plurality of sculpting constraints according to a priority ordering of the sculpting constraints, generate an additional update for the vector graphics data structure based on strictly enforcing the binding constraint and the continuity constraint and approximately enforcing the handle constraint and the sculpting constraints, adjust the vector graphics data structure sequentially for each of a plurality of sculpting constraints, and display the vector graphic based on the adjusted vector graphics data structure.
Type: Grant
Filed: April 23, 2020
Date of Patent: November 2, 2021
Assignee: ADOBE INC.
Inventors: Ankit Phogat, Kevin Wampler, Wilmot Li, Matthew David Fisher, Vineet Batra, Daniel Kaufman
-
Patent number: 11164356
Abstract: Systems, devices, and methods provide an augmented reality visualization of a real world accident scene. The system may comprise an augmented reality visualization device in communication with a display interface. The display interface may be configured to present a graphical user interface including an accident scene that corresponds to a real world location. The augmented reality visualization device (and/or system) may comprise one or more data stores configured to store accident scene information corresponding to the real world location, such as vehicles, motorcycles, trees, road objects, etc. Additionally, the one or more data stores may also store participant information corresponding to details about other multiple participants, such as witnesses, other drivers, and/or police officers.
Type: Grant
Filed: May 27, 2020
Date of Patent: November 2, 2021
Assignee: Allstate Insurance Company
Inventors: Chase Davis, Jeraldine Dahlman
-
Patent number: 11164357
Abstract: A method, a computer-readable medium, and an apparatus are provided. The apparatus may be configured to receive information indicative of a fovea region. The apparatus may be configured to identify, based on the information indicative of the fovea region, high priority bins and low priority bins. The apparatus may be configured to determine a rendering time allotment for the frame. The apparatus may be configured to determine that the rendering time allotment for the frame will be exceeded, based on an amount of time used to render the high priority bins and the low priority bins. The apparatus may be configured to render, based on the determination that the rendering time allotment for the frame will be exceeded, at least one of the low priority bins at a first quality instead of a second quality.
Type: Grant
Filed: March 30, 2020
Date of Patent: November 2, 2021
Assignee: QUALCOMM Incorporated
Inventors: Samuel Benjamin Holmes, Tate Hornbeck, Robert Vanreenen
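The budget-driven quality decision described above can be illustrated with a small sketch: high-priority (fovea) bins always get full quality, and low-priority bins fall back to a reduced quality when the full-quality estimate would exceed the frame's rendering time allotment. The cost model and bin identifiers are made up for the example.

```python
def choose_bin_quality(high_bins, low_bins, time_allotment, estimate_time):
    """Decide per-bin quality under a frame time budget: high-priority (fovea)
    bins always render at full quality; low-priority bins drop to a reduced
    quality if the full-quality estimate would exceed the allotment."""
    full = sum(estimate_time(b, "full") for b in high_bins + low_bins)
    if full <= time_allotment:
        return {b: "full" for b in high_bins + low_bins}
    plan = {b: "full" for b in high_bins}
    plan.update({b: "reduced" for b in low_bins})   # render at the lower quality
    return plan

# Toy cost model: every bin is estimated to take 1.0 ms at full quality.
plan = choose_bin_quality(["h0", "h1"], ["l0", "l1", "l2"], time_allotment=4.0,
                          estimate_time=lambda b, q: 1.0)
```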
-
Patent number: 11164358
Abstract: The present invention discloses a method for real-time rendering of giga-pixel images. Image data are subject to offline pre-processing, and then are subject to data decoding and redirection through a decoding module. A corresponding scheduling strategy is determined according to different image inputs, and rendering is executed by a renderer. The present invention may realize the real-time rendering of a giga-pixel panoramic view on a conventional display device, greatly reducing the resources allocated for rendering of giga-pixel images. The present invention may render an image originally requiring a 40G or more video memory capacity on a common video card with a 1G-4G video memory capacity.
Type: Grant
Filed: September 23, 2019
Date of Patent: November 2, 2021
Assignee: PLEX-VR DIGITAL TECHNOLOGY (SHANGHAI) CO., LTD.
Inventors: Wentao Lyu, Yingliang Zhang, Anpei Chen, Minye Wu
-
Patent number: 11164359
Abstract: Apparatus and method for encoding sub-primitives to improve ray tracing efficiency. For example, one embodiment of an apparatus comprises: a ray generator to generate a plurality of rays in a ray tracing graphics pipeline; a sub-primitive generator to subdivide each primitive of a plurality of primitives into a plurality of sub-primitives; a sub-primitive encoder to identify a first subset of the plurality of sub-primitives as being fully transparent and to identify a second subset of the plurality of sub-primitives as being fully opaque; and wherein the first subset of the plurality of primitives identified as being fully transparent are culled prior to further processing of each respective primitive.
Type: Grant
Filed: December 27, 2019
Date of Patent: November 2, 2021
Assignee: INTEL CORPORATION
Inventor: Holger Gruen
-
Patent number: 11164360
Abstract: A hardware-based traversal coprocessor provides acceleration of tree traversal operations searching for intersections between primitives represented in a tree data structure and a ray. The primitives may include opaque and alpha triangles used in generating a virtual scene. The hardware-based traversal coprocessor is configured to determine primitives intersected by the ray, and return intersection information to a streaming multiprocessor for further processing. The hardware-based traversal coprocessor is configured to provide a deterministic result of intersected triangles regardless of the order that the memory subsystem returns triangle range blocks for processing, while opportunistically eliminating alpha intersections that lie further along the length of the ray than closer opaque intersections.
Type: Grant
Filed: July 2, 2020
Date of Patent: November 2, 2021
Assignee: NVIDIA Corporation
Inventors: Samuli Laine, Tero Karras, Greg Muthler, William Parsons Newhall, Ronald Charles Babich, Ignacio Llamas, John Burgess
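A simplified sketch of the culling rule mentioned at the end of the abstract: alpha (possibly transparent) intersections that lie farther along the ray than the closest opaque intersection can be discarded, independent of the order in which hits arrive from memory. This is CPU pseudo-behaviour only, not the coprocessor design.

```python
def closest_hit(ray_hits):
    """Keep the deterministic closest result: alpha hits farther along the ray
    than the closest opaque hit are culled regardless of arrival order.

    ray_hits: list of (t, kind) with kind in {"opaque", "alpha"}.
    Returns (closest opaque t, surviving alpha hits needing shader evaluation).
    """
    t_opaque = min((t for t, k in ray_hits if k == "opaque"), default=float("inf"))
    alpha = sorted(t for t, k in ray_hits if k == "alpha" and t < t_opaque)
    return t_opaque, alpha

print(closest_hit([(5.0, "alpha"), (3.0, "opaque"), (2.5, "alpha"), (7.0, "opaque")]))
# -> (3.0, [2.5]): the alpha hit at t=5.0 is culled; only t=2.5 survives.
```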
-
Patent number: 11164361
Abstract: Techniques are described for using computing devices to perform automated operations for analyzing video (or other image sequences) acquired in a defined area, as part of generating mapping information of the defined area for subsequent use (e.g., for controlling navigation of devices, for display on client devices in corresponding GUIs, etc.). The defined area may include an interior of a multi-room building, and the generated information may include a floor map of the building, such as from an analysis of some or all image frames of the video (e.g., 360° image frames from 360° video) using structure-from-motion techniques to identify objects with associated plane and normal orthogonal information, and then clustering detected planes and/or normals from multiple analyzed images to determine likely wall locations. The generating may be further performed without using acquired depth information about distances from the video capture locations to objects in the surrounding building.
Type: Grant
Filed: October 26, 2020
Date of Patent: November 2, 2021
Assignee: Zillow, Inc.
Inventors: Pierre Moulon, Ivaylo Boyadzhiev
-
Patent number: 11164362
Abstract: Systems and methods for generating virtual reality user interfaces are described. The virtual reality user interface may include a three-dimensional model that simulates an actual environment. In addition, the virtual reality user interface may include a plurality of cells arranged at a simulated depth and with a simulated curvature. Further, the plurality of cells may be divided into a plurality of subcells. The subcells may be sized based at least in part on aspect ratios of images to be included in each of the subcells. Moreover, supplemental cells may be provided around or within the plurality of cells and subcells, each of the supplemental cells representing a collection of items. The variable sizing of the subcells as well as the incorporation of supplemental cells around or within the plurality of cells and subcells may result in a virtual reality user interface with higher user interest and engagement.
Type: Grant
Filed: February 24, 2020
Date of Patent: November 2, 2021
Assignee: Amazon Technologies, Inc.
Inventors: Michael Tedesco, David Robert Cole, Lane Daughtry
-
Patent number: 11164363
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing point cloud data using dynamic voxelization. When deployed within an on-board system of a vehicle, processing the point cloud data using dynamic voxelization can be used to make autonomous driving decisions for the vehicle with enhanced accuracy, for example by combining representations of point cloud data characterizing a scene from multiple views of the scene.
Type: Grant
Filed: July 8, 2020
Date of Patent: November 2, 2021
Assignee: Waymo LLC
Inventors: Yin Zhou, Pei Sun, Yu Zhang, Dragomir Anguelov, Jiyang Gao, Yu Ouyang, Zijian Guo, Jiquan Ngiam, Vijay Vasudevan
-
Patent number: 11164364
Abstract: Methods and coarse depth test logic perform coarse depth testing in a graphics processing system in which a rendering space is divided into a plurality of tiles. A depth range for a tile is obtained, which identifies a depth range based on primitives previously processed for the tile. A determination is made based on the depth range for the tile as to whether all or a portion of a primitive is hidden in the tile. If at least a portion of the primitive is not hidden in the tile, a determination is made as to whether the primitive, or one or more primitive fragments thereof, has better depth than the primitives previously processed for the tile according to a depth compare mode. If so, the primitive or the primitive fragment is identified as not requiring a read of a depth buffer to perform full resolution depth testing, such that a determination that at least a portion of the primitive is hidden in the tile causes full resolution depth testing not to be performed on at least that portion of the primitive.
Type: Grant
Filed: June 19, 2020
Date of Patent: November 2, 2021
Assignee: Imagination Technologies Limited
Inventors: Lorenzo Belli, Robert Brigg
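A minimal sketch of a coarse (per-tile) depth test for the "less" compare mode, in the spirit of the abstract: a primitive whose nearest depth is behind the tile's stored farthest depth is hidden, and one whose farthest depth beats the tile's stored nearest depth can skip the depth-buffer read. The conservative cull case assumes the stored range fully covers the tile, which is an assumption of this sketch.

```python
def coarse_depth_test(prim_min, prim_max, tile_min, tile_max, compare="less"):
    """Classify a primitive's depth range against a tile's stored depth range.

    Returns one of:
      "hidden"       - the whole primitive fails the coarse test for this tile
      "trivial_pass" - the primitive beats everything previously drawn, so full
                       resolution testing can skip the depth-buffer read
      "needs_full"   - ranges overlap; fall back to per-sample testing
    Only the 'less' compare mode is sketched here.
    """
    if compare != "less":
        raise NotImplementedError
    if prim_min >= tile_max:        # assumes stored range covers the whole tile
        return "hidden"
    if prim_max < tile_min:
        return "trivial_pass"
    return "needs_full"

print(coarse_depth_test(0.8, 0.9, 0.2, 0.5))   # hidden
print(coarse_depth_test(0.05, 0.1, 0.2, 0.5))  # trivial_pass
```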
-
Patent number: 11164365
Abstract: A graphics processing system has a rendering space which is divided into tiles. Primitives within the tiles are processed to perform hidden surface removal and to apply texturing to the primitives. The graphics processing system includes a plurality of depth buffers, thereby allowing a processing module to process primitives of one tile by accessing one of the depth buffers while primitive identifiers of another, partially processed tile are stored in another one of the depth buffers. This allows the graphics processing system to have "multiple tiles in flight", which can increase the efficiency of the graphics processing system.
Type: Grant
Filed: November 12, 2020
Date of Patent: November 2, 2021
Assignee: Imagination Technologies Limited
Inventor: Jonathan Redshaw
-
Patent number: 11164366
Abstract: Implementations of the subject matter described herein relate to mixed reality rendering of objects. According to the embodiments of the subject matter described herein, while rendering an object, a wearable computing device takes lighting conditions in the real world into account, thereby increasing the reality of the rendered object. In particular, the wearable computing device acquires environment lighting information of an object to be rendered and renders the object to a user based on the environment lighting information. In this way, the object rendered by the wearable computing device can be more real and accurate. The user will thus have a better interaction experience.
Type: Grant
Filed: January 16, 2018
Date of Patent: November 2, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Guojun Chen, Yue Dong, Xin Tong
-
Patent number: 11164367
Abstract: Systems and methods for generating illumination effects for inserted luminous content, which may include augmented reality content that appears to emit light and is inserted into an image of a physical space. The content may include a polygonal mesh, which may be defined in part by a skeleton that has multiple joints. Examples may include generating a bounding box on a surface plane for the inserted content, determining an illumination center point location on the surface plane based on the content, generating an illumination entity based on the bounding box and the illumination center point location, and rendering the illumination entity using illumination values determined based on the illumination center point location. Examples may also include determining illumination contributions values for some of the joints, combining the illumination contribution values to generate illumination values for pixels, and rendering another illumination entity using the illumination values.
Type: Grant
Filed: July 17, 2019
Date of Patent: November 2, 2021
Assignee: Google LLC
Inventors: Ivan Neulander, Mark Dochtermann
-
Patent number: 11164368
Abstract: Using computing devices to perform automated operations related to, with respect to a computer model of a house or other building's interior, generating and displaying simulated lighting information in the model based on sunlight or other external light that is estimated to enter the building and be visible in particular rooms of the interior under specified conditions, such as using ambient occlusion and light transport matrix calculations. The computer model may be a 3D (three-dimensional) or 2.5D representation that is generated after the house is built and that shows physical components of the actual house's interior (e.g., walls), and may be displayed to a user of a client computing device in a displayed GUI (graphical user interface) via which the user specifies conditions for which the simulated lighting display is generated.
Type: Grant
Filed: April 6, 2020
Date of Patent: November 2, 2021
Assignee: Zillow, Inc.
Inventors: Joshuah Vincent, Pierre Moulon, Ivaylo Boyadzhiev, Joshua David Maruska
-
Patent number: 11164369
Abstract: A method and a system for generating a mesh representation of a surface. The method includes receiving a three-dimensional (3D) point cloud representing the surface, identifying and discarding one or more outliers in the 3D point cloud to generate a filtered point cloud using a Gaussian process, adding one or more additional points to the filtered point cloud to generate a reconstruction dataset, and using Poisson surface reconstruction to generate an implicit surface corresponding to the surface from the reconstruction dataset.
Type: Grant
Filed: December 20, 2019
Date of Patent: November 2, 2021
Assignee: Argo AI, LLC
Inventors: Xiaoyan Hu, Michael Happold, Joshua Max Manela, Guy Hotson
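A numpy-only sketch of the outlier-removal step: points whose mean distance to their k nearest neighbours is unusually large are dropped. This simple statistical filter is a stand-in for the Gaussian-process filter named in the abstract; the filtered cloud would then be passed to Poisson surface reconstruction.

```python
import numpy as np

def filter_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is more
    than `std_ratio` standard deviations above the cloud-wide mean.
    (O(n^2) brute force; fine for a sketch, not for large clouds.)"""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)          # skip the zero self-distance
    keep = mean_knn <= mean_knn.mean() + std_ratio * mean_knn.std()
    return points[keep]

cloud = np.random.rand(200, 3)
cloud = np.vstack([cloud, [[5.0, 5.0, 5.0]]])      # one obvious outlier
filtered = filter_outliers(cloud)                  # the outlier is removed
# The filtered cloud would then feed Poisson surface reconstruction.
```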
-
Patent number: 11164370
Abstract: An information processing apparatus includes: a memory; and a processor configured to: store data of images captured of a structure at positions changed relative to the structure and data of a three-dimensional model indicating a three-dimensional shape of the structure; select an image containing the largest image of a damaged portion in the structure as a first accumulated image in each of which the damaged portion is imaged among the images; and perform a selection process of selecting second accumulated images except the first accumulated image such that regarding an imaging range that is not covered by the first accumulated image in the three-dimensional model, an imaging range which overlaps between the second accumulated images is reduced, and a coverage ratio of the imaging range in the three-dimensional model with the selected second accumulated images and the first accumulated image is equal to or more than a predetermined value.
Type: Grant
Filed: March 9, 2020
Date of Patent: November 2, 2021
Assignee: FUJITSU LIMITED
Inventor: Eiji Hasegawa
-
Patent number: 11164371
Abstract: Described embodiments include a system that includes an electrical interface and a processor. The processor is configured to receive, via the electrical interface, an electrocardiographic signal from an electrode within a heart of a subject, to ascertain a location of the electrode in a coordinate system of a computerized model of a surface of the heart, to select portions of the model responsively to the ascertained location, such that the selected portions are interspersed with other, unselected portions of the model, and to display the model such that the selected portions, but not the unselected portions, are marked to indicate a property of the signal. Other embodiments are also described.
Type: Grant
Filed: December 20, 2017
Date of Patent: November 2, 2021
Assignee: Biosense Webster (Israel) Ltd.
Inventors: Tamir Avraham Yellin, Roy Urman
-
Patent number: 11164372
Abstract: The disclosure introduces polar stroking for representing paths. A system, method, and apparatus are disclosed for representing and rendering stroked paths employing polar stroking. In one example, a method of approximating a link of a path is provided that includes: (1) determining tangent angle changes of a link of a path, (2) evaluating the link in steps based on the tangent angle changes, and (3) providing a polar stroked representation of the link employing the steps, wherein the evaluating is performed non-recursively. A polar stroking system is also disclosed. In one example, the polar stroking system includes: (1) a path processor configured to decompose a path into links, and (2) a polar stroking processor configured to determine polar stroking intermediates of the links from a characterization of the links and generate, employing the polar stroking intermediates, a polar stroked representation for each of the links.
Type: Grant
Filed: December 13, 2019
Date of Patent: November 2, 2021
Assignee: Nvidia Corporation
Inventor: Mark Kilgard
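A sketch of the polar-stroking idea for the simplest kind of link, a circular arc, where the tangent-angle change equals the arc angle: the angle is stepped in fixed increments (non-recursively) and each step emits a pair of stroke-boundary vertices. The step size and the arc parameterization are assumptions of this sketch, not the patented formulation.

```python
import math

def polar_stroke_arc(center, radius, theta0, theta1, half_width, step_deg=10.0):
    """Non-recursive polar stroking of a circular-arc link: the tangent angle is
    stepped in fixed increments, and each step emits a pair of rib vertices
    offset by +/- half_width along the normal (a triangle-strip boundary)."""
    cx, cy = center
    total = theta1 - theta0
    steps = max(1, math.ceil(abs(math.degrees(total)) / step_deg))
    ribs = []
    for i in range(steps + 1):                     # fixed step count, no recursion
        theta = theta0 + total * i / steps
        nx, ny = math.cos(theta), math.sin(theta)  # unit normal of a circle
        px, py = cx + radius * nx, cy + radius * ny
        ribs.append(((px + half_width * nx, py + half_width * ny),
                     (px - half_width * nx, py - half_width * ny)))
    return ribs

ribs = polar_stroke_arc((0, 0), radius=10, theta0=0, theta1=math.pi / 2,
                        half_width=1.5)
```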
-
Patent number: 11164373
Abstract: A graphics processing apparatus includes a tessellation circuit and a post-processing circuit. The tessellation circuit performs tessellation processing to subdivide a patch in an image frame into a plurality of triangles. The tessellation circuit further performs triangle striping processing to convert data of the plurality of triangles into data of a triangle strip. The post-processing circuit performs subsequent processing on the data of the triangle strip.
Type: Grant
Filed: May 5, 2020
Date of Patent: November 2, 2021
Assignee: GlenFly Technology Co., Ltd.
Inventors: Huaisheng Zhang, Maoxin Sun, Juding Zheng
-
Patent number: 11164374
Abstract: Disclosed herein is a mesh model generation method including generating time-series temporary mesh models representing an object with a changing shape from a plurality of pieces of point cloud data of the object acquired in a time-series manner, performing discretization analysis on the temporary mesh models, and generating a final mesh model by optimizing the temporary mesh models based on a time-series shape change of the object indicated by a result of the discretization analysis.
Type: Grant
Filed: May 19, 2020
Date of Patent: November 2, 2021
Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
Inventor: Akira Nishiyama
-
Patent number: 11164375
Abstract: A computer program product may cause one or more processors to generate stereoscopic images of one or more 3D models within a 3D model space. As part of the generation of the stereoscopic images, special case surfaces that are non-flat and specularly reflective or refractive are rendered in a special manner. The special manner involves rendering a texture for the special case surface based on a third projection corresponding to a third viewpoint that is spaced from both a first viewpoint (i.e., a left eye viewpoint) and a second viewpoint (i.e., a right eye viewpoint). Accordingly, when rendering first and second images (i.e., images corresponding respectively to the first and second viewpoints), the texture corresponding to the third viewpoint may be applied to the special case surface in both the first and second images.
Type: Grant
Filed: October 21, 2020
Date of Patent: November 2, 2021
Assignee: Tanzle, Inc.
Inventors: Joseph L. Grover, Oliver T. Davies
-
Patent number: 11164376
Abstract: A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.
Type: Grant
Filed: August 29, 2018
Date of Patent: November 2, 2021
Assignee: Snap Inc.
Inventors: Soumyadip Sengupta, Linjie Luo, Chen Cao, Menglei Chai
-
Patent number: 11164377
Abstract: Methods and systems of navigating within a virtual environment are described. In an example, a processor may generate a portal that includes a set of portal boundaries. The processor may display the portal within a first scene of the virtual environment being displayed on a device. The processor may display a second scene of the virtual environment within the portal boundaries. The processor may receive sensor data indicating a movement of a motion controller. The processor may reposition the portal and the second scene in the first scene based on the sensor data, wherein the first scene remains stationary on the device during the reposition of the portal and the second scene. The processor may translate a location of the portal within the first scene to move the portal towards a user of the device until the second scene replaces the first scene being displayed on the device.
Type: Grant
Filed: May 17, 2018
Date of Patent: November 2, 2021
Assignee: International Business Machines Corporation
Inventors: Aldis Sipolins, Lawrence A. Clevenger, Benjamin D. Briggs, Michael Rizzolo, Christopher J. Penny, Patrick Watson
-
Patent number: 11164378
Abstract: A virtual reality system comprises a head mounted display and positional tracking to determine the position and orientation of the head mounted display. A player wearing the head mounted display would view a virtual world. External physical objects such as a cup can be identified and displayed inside the virtual world displayed inside the head mounted display so that a player can drink out of the cup without having to remove the head mounted display.
Type: Grant
Filed: December 26, 2018
Date of Patent: November 2, 2021
Assignee: Out of Sight Vision Systems LLC
Inventors: Benjamin Cowen, Jon Muskin
-
Patent number: 11164379
Abstract: An augmented reality positioning method and apparatus for a location-based service (LBS), in which a first terminal obtains image information captured by a camera and receives AR information transmitted by a server; the AR information is generated according to location information of a second terminal; and the first terminal displays the image information with the AR information drawn on it. The apparatus avoids drawbacks of the prior art: inaccurate location, the inability to position terminals on their interfaces when they are close to each other, and the difficulty, in complicated environments, of quickly making an accurate and direct judgment or obtaining more accurate mutual suggestive location information.
Type: Grant
Filed: May 29, 2018
Date of Patent: November 2, 2021
Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
Inventors: Zhongqin Wu, Miao Yao, Yongjie Zhang
-
Patent number: 11164380
Abstract: A head mounted display (HMD) device is provided. The HMD has a display panel, a depth or distance sensor to measure distances between the HMD and a real object. The HMD device sets a close transition boundary distance (CTBD) between the HMD and a close transition boundary (CTB). A far transition boundary distance (FTBD) is set between the HMD and a far transition boundary (FTB). The CTBD is less than the FTBD. As a real object that has associated near and far virtual content moves nearer to the HMD device and crosses the CTB, the virtual content transitions to near virtual content for viewing on the HMD. As the real object moves away from the HMD and crosses the FTB, the virtual content transitions to the far virtual content for viewing on the HMD.
Type: Grant
Filed: September 10, 2018
Date of Patent: November 2, 2021
Assignee: Samsung Electronics Co., Ltd.
Inventors: Andrew R. McHugh, Duncan Knarr
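The near/far content switch can be pictured with a few lines of Python: content flips to "near" when the object crosses the close transition boundary and back to "far" when it crosses the far boundary, with no switching inside the band between them (simple hysteresis). The distances and boundary values below are illustrative.

```python
def pick_virtual_content(distance, ctbd, ftbd, current):
    """Choose near or far virtual content for an object based on its distance
    from the HMD and the two transition boundaries (CTBD < FTBD). Between the
    boundaries the previous choice is kept, which avoids flicker."""
    assert ctbd < ftbd
    if distance <= ctbd:
        return "near"
    if distance >= ftbd:
        return "far"
    return current                     # inside the band: no switch

state = "far"
for d in [3.0, 1.4, 0.8, 1.4, 3.0]:   # object approaches then recedes (metres)
    state = pick_virtual_content(d, ctbd=1.0, ftbd=2.0, current=state)
    print(d, state)                    # far, far, near, near, far
```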
-
Patent number: 11164381
Abstract: An improved system to provide a clothing model in augmented reality and/or virtual reality is disclosed. In one embodiment, the system provides a fully simulated garment, on the user's device, in an augmented reality or virtual reality display. In one embodiment, the process uses a parameterized smart garment model which enables real-time interaction with the garment on a device such as a mobile phone or tablet, with limited memory and processing power. In one embodiment, the system permits nearly instantaneous alteration of garment sizes. Because the garment is modeled to accurately reflect physical details, the lay of the garment changes as the size of the garment or body changes.
Type: Grant
Filed: November 2, 2018
Date of Patent: November 2, 2021
Assignee: GERBER TECHNOLOGY LLC
Inventors: James F O'Brien, David T Jackson, Carlo Camporesi, Daniel Ram, David Macy, Edilson de Aguiar, James L. Andrews, Justin Lee, Karen M Stritzinger, Kevin Armin Samii, Nathan Mitchell, Tobias Pfaff, Scott M Frankel
-
Patent number: 11164382
Abstract: Provided is a method, computer program product, and virtual reality (VR) system for altering a VR simulation based on the physical capabilities of a user. A processor may receive one or more personal health factors corresponding to a user. The processor may prepare a VR simulation for a rendering. The processor may identify one or more physical characteristics of the user from the personal health factors. The processor may determine if the one or more physical characteristics affect the interaction of the user within the VR simulation. The processor may modify, in response to determining that the one or more physical characteristics affect the interaction of the user within the VR simulation, user input options for interacting with the VR simulation. The processor may modify the rendering of the VR simulation in a display of a VR device based on the modified user input options.
Type: Grant
Filed: May 21, 2019
Date of Patent: November 2, 2021
Assignee: International Business Machines Corporation
Inventors: Trudy L. Hewitt, Robert Huntington Grant, Jeremy R. Fox, Zachary A. Silverstein
-
Patent number: 11164383
Abstract: Disclosed is an AR device and method for controlling the same. According to an embodiment of the present disclosure, the method for controlling the AR device computes a distance between the AR device and a capturing device connected with the AR device via wired/wireless communication and receives an angle of the capturing device and computes an angle of the AR device. The method determines information related to a distance to a real object captured by the capturing device, computes an augmented position of a virtual object corresponding to the real object, and displays the virtual object in the augmented position on a display. The AR device of the present disclosure may be associated with an artificial intelligence module, a robot, a virtual reality (VR) device, a device related to a 5G service, and the like.
Type: Grant
Filed: August 30, 2019
Date of Patent: November 2, 2021
Assignee: LG ELECTRONICS INC.
Inventors: Dongog Min, Daemyeong Park
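A 2D, top-down sketch of the positioning arithmetic the abstract alludes to: the real object is placed along the capturing device's heading at the measured distance, and the result is rotated into the AR device's own frame so the virtual object can be drawn at the augmented position. The frames, units and angle conventions here are assumptions, not the patented method.

```python
import math

def augmented_position(d_ar_to_cam, cam_bearing_deg, cam_yaw_deg,
                       d_cam_to_object, ar_yaw_deg):
    """2D sketch of where to draw the virtual object in the AR device's frame.

    d_ar_to_cam:      distance between AR device and capturing device (m)
    cam_bearing_deg:  direction from the AR device to the capturing device
    cam_yaw_deg:      heading of the capturing device
    d_cam_to_object:  distance from the capturing device to the real object (m)
    ar_yaw_deg:       heading of the AR device (used to rotate into its view)
    Angles are world-frame degrees; the AR device sits at the origin.
    """
    cam_x = d_ar_to_cam * math.cos(math.radians(cam_bearing_deg))
    cam_y = d_ar_to_cam * math.sin(math.radians(cam_bearing_deg))
    obj_x = cam_x + d_cam_to_object * math.cos(math.radians(cam_yaw_deg))
    obj_y = cam_y + d_cam_to_object * math.sin(math.radians(cam_yaw_deg))
    # Rotate the world-frame position into the AR device's own view frame.
    a = math.radians(-ar_yaw_deg)
    view_x = obj_x * math.cos(a) - obj_y * math.sin(a)
    view_y = obj_x * math.sin(a) + obj_y * math.cos(a)
    return view_x, view_y

print(augmented_position(2.0, 0.0, 90.0, 3.0, 0.0))   # -> approximately (2.0, 3.0)
```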
-
Patent number: 11164384
Abstract: A system for replacing physical items in images is discussed. A depicted item can be selected and removed from an image via image mask data and pixel merging techniques. Virtual light source positions can be generated based on real-world light source data from the image. A rendered simulation of a virtual item can then be integrated into the image to create a modified image for display.
Type: Grant
Filed: July 24, 2019
Date of Patent: November 2, 2021
Assignee: Houzz, Inc.
Inventors: Xiaoyi Huang, Jingwen Wang, Yi Wu, Xin Ai
-
Patent number: 11164385
Abstract: A method for establishing a virtual reality (VR) call between a caller VR device and a callee VR device, the method includes determining which of the caller VR device or the callee VR device should perform a stitching operation associated with the VR call based on a first plurality of parameters associated with the callee VR device and a second plurality of parameters associated with the caller VR device, and causing transmission of one of a plurality of media contents or a stitched media content from the caller VR device to the callee VR device after establishment of the VR call based on the determining.
Type: Grant
Filed: November 4, 2019
Date of Patent: November 2, 2021
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Praveen Chebolu, Varun Bharadwaj Santhebenur Vasudevamurthy, Srinivas Chinthalapudi, Tushar Vrind, Abhishek Bhan, Nila Rajan
-
Patent number: 11164386
Abstract: A computing device 2, such as a general-purpose smartphone or general-purpose tablet computing device, comprises one or more inertial sensors and an image sensor. The device 2 produces stereoscopic images of a virtual environment on the display during a virtual reality (VR) session controlled by a user of the computing device. The device conducts visual odometry using at least image data received from the image sensor, and selectively activates and deactivates the visual odometry according to activity of the user during the virtual reality session. When the visual odometry is activated, the device controls the virtual reality session by at least position information from the visual odometry. When the visual odometry is deactivated, the device controls the virtual reality session by at least orientation information from the one or more inertial sensors.
Type: Grant
Filed: March 25, 2020
Date of Patent: November 2, 2021
Assignee: Arm Limited
Inventors: Roberto Lopez Mendez, Daren Croxford
-
Patent number: 11164387
Abstract: Described herein are devices, systems, media, and methods using an augmented reality smartphone application to capture measurements of an interior or exterior space in real-time and generate a floorplan of the space and/or a 3D model of the space from the captured measurements in less than 5 minutes.
Type: Grant
Filed: April 29, 2020
Date of Patent: November 2, 2021
Assignee: SMART PICTURE TECHNOLOGIES, INC.
Inventors: Dejan Jovanovic, Andrew Kevin Greff
-
Patent number: 11164388
Abstract: Various embodiments of the present disclosure relate to an electronic device and a method of providing an augmented reality object thereof.
Type: Grant
Filed: February 21, 2019
Date of Patent: November 2, 2021
Assignee: Samsung Electronics Co., Ltd.
Inventors: Jaehan Lee, Wooyong Lee, Seungpyo Hong, Hoik Hwang