Patents Examined by Kyle Zhai
-
Patent number: 11900543
Abstract: A tessellation method uses both vertex tessellation factors and displacement factors defined for each vertex of a patch, which may be a quad, a triangle or an isoline. The method is implemented in a computer graphics system and involves calculating a vertex tessellation factor for each corner vertex in one or more input patches. Tessellation is then performed on the plurality of input patches using the vertex tessellation factors. The tessellation operation involves adding one or more new vertices and calculating a displacement factor for each newly added vertex. A world space parameter for each vertex is subsequently determined by calculating a target world space parameter for each vertex and then modifying the target world space parameter for a vertex using the displacement factor for that vertex.
Type: Grant
Filed: October 19, 2022
Date of Patent: February 13, 2024
Assignee: Imagination Technologies Limited
Inventors: Peter Malcolm Lacey, Simon Fenney
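The abstract above leaves the exact modification function unspecified; a minimal sketch, assuming the per-vertex displacement factor simply blends linearly between a base (undisplaced) world-space position and the target world-space position:

```python
def apply_displacement(base_ws, target_ws, displacement_factor):
    # Blend between the undisplaced base position and the target
    # world-space position using the per-vertex displacement factor
    # (0.0 keeps the base position, 1.0 applies the full target).
    return tuple(b + displacement_factor * (t - b)
                 for b, t in zip(base_ws, target_ws))
```

A factor of 0.5 would place the vertex halfway between the two positions; newly added vertices can thus be faded in smoothly as tessellation density changes.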
-
Patent number: 11900533
Abstract: An image signal output method including outputting a first image signal to display an eyeball model selection screen for selecting one eyeball model from among a plurality of eyeball models of different types, converting a two-dimensional fundus image of the subject eye so as to generate a three-dimensional fundus image based on the selected eyeball model, and outputting a second image signal to display a fundus image display screen including the three-dimensional fundus image.
Type: Grant
Filed: October 11, 2018
Date of Patent: February 13, 2024
Assignee: NIKON CORPORATION
Inventor: Mariko Hirokawa
-
Patent number: 11896315
Abstract: A virtual operating room (OR) is generated that includes virtual surgical equipment based on existing OR models and existing equipment models. Sensor feedback is received that defines a physical OR having physical equipment, the physical equipment including a surgical robotic system within the physical OR. The virtual OR and the virtual surgical equipment are updated based on the sensor feedback. A layout of the virtual surgical equipment is optimized in the virtual OR. The virtual OR and the virtual surgical equipment are rendered on a display.
Type: Grant
Filed: June 13, 2022
Date of Patent: February 13, 2024
Assignee: VERB SURGICAL INC.
Inventors: Bernhard Fuerst, Eric Johnson, Pablo Garcia Kilroy
-
Patent number: 11893657
Abstract: A system, a method, and a computer program for style transfer over a recognition area are provided. The range of an application object including a specific style is expanded from an image to the style of a real object or the style of a specific area included in a photo. In addition, the recognition area, previously limited to a confined photo space, is expanded to a real object and a background by using a projector beam. In addition, a wider variety of styles can be mixed and applied to an output painting-style image or to an original image.
Type: Grant
Filed: November 17, 2021
Date of Patent: February 6, 2024
Assignee: CoreDotToday Inc.
Inventors: Kyung Hoon Kim, Bongsoo Jang
-
Patent number: 11875448
Abstract: Disclosed techniques relate to forming single-instruction multiple-data (SIMD) groups during ray intersection traversal. In particular, ray intersection circuitry may include dedicated circuitry configured to traverse an acceleration data structure, but may dynamically form a SIMD group to transform ray coordinates when traversing from one level of the data structure to another. This may allow shader processors to execute the SIMD group to perform the transformation. Disclosed techniques may facilitate instancing of graphics models within the acceleration data structure.
Type: Grant
Filed: November 24, 2020
Date of Patent: January 16, 2024
Assignee: Apple Inc.
Inventors: Ali Rabbani Rankouhi, Christopher A. Burns, Justin A. Hensley, Jonathan M. Redshaw
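The coordinate transform that such a dynamically formed SIMD group would execute can be sketched as a batched ray transform into an instance's object space; the 4x4 matrix layout and the `transform_rays` helper below are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def transform_rays(origins, directions, world_to_instance):
    """Transform a batch of rays (one SIMD group's worth) from world
    space into an instanced model's object space with a 4x4 matrix."""
    o = np.asarray(origins, dtype=float)
    d = np.asarray(directions, dtype=float)
    m = np.asarray(world_to_instance, dtype=float)
    # Ray origins are points (they pick up the translation);
    # ray directions are vectors (rotation/scale only).
    new_o = o @ m[:3, :3].T + m[:3, 3]
    new_d = d @ m[:3, :3].T
    return new_o, new_d
```

Batching the transform over all rays in the group mirrors why a SIMD group is attractive here: one program, many rays.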
-
Patent number: 11872486
Abstract: A computer-implemented method for utilizing augmented reality (AR) and gamification to help a user traverse an area that includes hazards. The method includes one or more computer processors receiving, at an AR device utilized by a user, visual information corresponding to an area. The method further includes identifying one or more hazards within the area. The method further includes determining a path through the area that the user may traverse to avoid the one or more identified hazards. The method further includes generating a plurality of elements of AR content, where at least a first element of AR content indicates the path for the user to traverse. The method further includes displaying, via the AR device, the received visual information corresponding to the area to include the plurality of elements of AR content.
Type: Grant
Filed: May 27, 2021
Date of Patent: January 16, 2024
Assignee: International Business Machines Corporation
Inventors: Shailendra Moyal, Sarbajit K. Rakshit
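One way to sketch the path-determination step above is a breadth-first search over a grid of hazard cells; the grid representation and the `hazard_free_path` helper are hypothetical simplifications, since the abstract does not specify a search algorithm:

```python
from collections import deque

def hazard_free_path(grid, start, goal):
    """BFS over a 2D grid; cells marked 1 are identified hazards
    the returned path must avoid. Returns None if no path exists."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path  # shortest hazard-free path, as cell list
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None
```

The resulting cell sequence would then drive the AR element that "indicates the path for the user to traverse".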
-
Patent number: 11869163
Abstract: Systems and methods are provided for machine learning-based rendering of a clothed human with a realistic 3D appearance by virtually draping one or more garments or items of clothing on a 3D human body model. The machine learning model may be trained to drape a garment on a 3D body mesh using training data that includes a variety of 3D body meshes reflecting a variety of different body types. The machine learning model may include an encoder trained to extract body features from an input 3D mesh, and a decoder network trained to drape the garment on the input 3D mesh based at least in part on spectral decomposition of a mesh associated with the garment. The trained machine learning model may then be used to drape the garment or a variation of the garment on a new input body mesh.
Type: Grant
Filed: September 17, 2021
Date of Patent: January 9, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Junbang Liang, Ming Lin, Javier Romero Gonzalez-Nicolas, Adam Douglas Peck, Chetan Shivarudrappa
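The spectral decomposition of the garment mesh mentioned above can be illustrated with a graph-Laplacian eigenbasis, a common way to extract low-frequency deformation modes from a mesh; the adjacency-matrix input and the `spectral_basis` helper are assumptions, since the abstract does not detail the decomposition used:

```python
import numpy as np

def spectral_basis(adjacency, k):
    """Low-frequency spectral basis of a mesh's vertex graph: the k
    eigenvectors of the graph Laplacian with smallest eigenvalues."""
    a = np.asarray(adjacency, dtype=float)
    lap = np.diag(a.sum(axis=1)) - a      # combinatorial Laplacian
    vals, vecs = np.linalg.eigh(lap)      # eigh: ascending eigenvalues
    return vals[:k], vecs[:, :k]
```

Representing garment geometry in such a basis lets a decoder network predict a handful of smooth coefficients instead of per-vertex offsets.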
-
Patent number: 11854130
Abstract: Methods, apparatus, systems, devices, and computer program products directed to augmenting reality with respect to real-world places, and/or real-world scenes that may include real-world places may be provided. Among the methods, apparatus, systems, devices, and computer program products is a method directed to augmenting reality via a device. The method may include capturing a real-world view that includes a real-world place, identifying the real-world place, determining an image associated with the real-world place familiar to a user of the device viewing the real-world view, and/or augmenting the real-world view that includes the real-world place with the image of the real-world place familiar to the user viewing the real-world view.
Type: Grant
Filed: January 24, 2015
Date of Patent: December 26, 2023
Assignee: InterDigital VC Holdings, Inc.
Inventor: Mona Singh
-
Patent number: 11842425
Abstract: The present disclosure provides an interaction method, an interaction apparatus, an electronic device, and a computer-readable storage medium, which relate to the technical field of image processing. The method includes: displaying a background image; displaying an initial picture of a target visual effect at a preset position of the background image; controlling the target visual effect to gradually change from the initial picture to a target picture in response to a visual effect change instruction triggered by a user; and adjusting a filter effect of the background image to allow the filter effect of the background image to gradually change from a first filter effect to a second filter effect during a change of the target visual effect.
Type: Grant
Filed: August 19, 2022
Date of Patent: December 12, 2023
Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
Inventors: Xiaojia Qi, Jie Zheng
-
Patent number: 11830143
Abstract: A tessellation method uses tessellation factors defined for each vertex of a patch, which may be a quad, a triangle or an isoline. The method is implemented in a computer graphics system and involves comparing the vertex tessellation factors to a threshold. If the vertex tessellation factors for either a left vertex or a right vertex, which define an edge of an initial patch, exceed the threshold, the edge is sub-divided by the addition of a new vertex which divides the edge into two parts and two new patches are formed. New vertex tessellation factors are calculated for each vertex in each of the newly formed patches, both of which include the newly added vertex. The method is then repeated for each of the newly formed patches until none of the vertex tessellation factors exceed the threshold.
Type: Grant
Filed: June 20, 2022
Date of Patent: November 28, 2023
Assignee: Imagination Technologies Limited
Inventors: Peter Malcolm Lacey, Simon Fenney
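The recursive edge sub-division described above can be sketched in one dimension; the `new_tf` rule for deriving the factors of newly formed patches is a placeholder assumption (the patent defines its own rule), here simply halving the factors at each level:

```python
def subdivide_edge(left, right, tf_left, tf_right, threshold, new_tf):
    """Recursively sub-divide the edge (left, right) while either end
    vertex's tessellation factor exceeds the threshold. `new_tf`
    derives (mid, new-left, new-right) factors for the sub-patches."""
    if tf_left <= threshold and tf_right <= threshold:
        return [left, right]
    mid = (left + right) / 2.0            # newly added vertex
    tf_mid, tf_l, tf_r = new_tf(tf_left, tf_right)
    first = subdivide_edge(left, mid, tf_l, tf_mid, threshold, new_tf)
    second = subdivide_edge(mid, right, tf_mid, tf_r, threshold, new_tf)
    return first + second[1:]             # drop duplicated midpoint
```

With factors of 4 against a threshold of 1, two levels of recursion produce the five evenly spaced vertices one would expect.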
-
Patent number: 11830106
Abstract: Methods, systems and storage media for applying a pattern application effect to one or more frames of video are disclosed. Some examples may include: obtaining video data including one or more video frames, determining one or more segments in each of the one or more video frames, determining one or more object masks based on the one or more segments in each of the one or more video frames, combining the one or more object masks into a single mask, obtaining pattern information, the pattern information representing one or more graphical effects to be applied to at least one layer of the one or more video frames, applying the pattern information to the single mask to generate masked pattern information, and generating, by the computing device, a rendered video by adding the masked pattern information to the one or more video frames.
Type: Grant
Filed: November 19, 2021
Date of Patent: November 28, 2023
Assignee: Lemon Inc.
Inventors: Nathan Schager, Yixin Zhao
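The mask-combination and pattern-application steps above can be sketched with boolean compositing; the `apply_pattern` helper and its per-pixel replacement rule are illustrative assumptions, not the claimed method:

```python
import numpy as np

def apply_pattern(frame, masks, pattern):
    """Combine per-segment object masks into a single mask and use it
    to composite pattern information onto a copy of a video frame."""
    single_mask = np.zeros_like(masks[0], dtype=bool)
    for m in masks:
        single_mask |= m.astype(bool)     # union of all object masks
    out = frame.copy()
    out[single_mask] = pattern[single_mask]  # masked pattern info
    return out
```

Running this per frame yields the "rendered video" of the abstract; real effects would blend rather than replace pixels, but the masking structure is the same.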
-
Patent number: 11823336
Abstract: In some embodiments, a method comprises obtaining a video stream of a portion of a geographic area, the video stream comprising a plurality of video frames, each of the plurality of video frames captured at a respective first time. Contextual metadata is obtained, the contextual metadata associated with one or more objects located in the portion of the geographic area at a second time, the second time being before each of the respective first times. The contextual metadata is inserted into one or more of the plurality of video frames, thereby causing the contextual metadata associated with the one or more objects to be overlaid on one or more corresponding portions of the one or more of the plurality of video frames.
Type: Grant
Filed: January 11, 2022
Date of Patent: November 21, 2023
Assignee: Palantir Technologies Inc.
Inventors: Peter Wilczynski, Daniel Cervelli, Andrew Elder, Anand Gupta, Praveen Kumar Ramalingam, Robert Imig
-
Patent number: 11806077
Abstract: An ophthalmologic apparatus includes an optical scanner, an interference optical system, an intraocular distance calculator, an image correcting unit, and a controller. The optical scanner is disposed at an optically substantially conjugate position with a first site of a subject's eye. The interference optical system is configured to split light from a light source into reference light and measurement light, to project the measurement light onto the subject's eye via the optical scanner, and to detect interference light between returning light of the light from the subject's eye and the reference light via the optical scanner. The image forming unit is configured to form a tomographic image of the subject's eye corresponding to a first traveling direction of the measurement light deflected by the optical scanner, based on a detection result of the interference light.
Type: Grant
Filed: September 2, 2022
Date of Patent: November 7, 2023
Assignee: TOPCON CORPORATION
Inventors: Tatsuo Yamaguchi, Ryoichi Hirose, Michiko Nakanishi
-
Patent number: 11804022
Abstract: Disclosed is a system and associated methods for the generative drawing and customization of three-dimensional ("3D") objects in 3D space using hand gestures. The system adapts the hand gestures as intuitive controls for rapidly creating and customizing the 3D objects to have a desired artistic effect or a desired look. The system selects a 3D model of a particular object in response to a first user input, sets a position in a virtual space at which to generate the particular object in response to a mapped position of a first hand gesture tracked in a physical space, and generates a first state representation of the particular object at the position in the virtual space in response to a second hand gesture. The first state representation presents the particular object at one of different modeled stages of the particular object lifecycle.
Type: Grant
Filed: March 14, 2023
Date of Patent: October 31, 2023
Assignee: Illuscio, Inc.
Inventor: Kyle Kinkade
-
Patent number: 11756278
Abstract: A method for establishing a denture dentition model includes: obtaining face information and teeth alignment information of a user to establish a three-dimensional mouth-opening face model and an original teeth model, and superimposing the original teeth model onto the three-dimensional mouth-opening face model; establishing a reference line corresponding to each of the teeth of the original teeth model; generating a plurality of grids on the three-dimensional mouth-opening face model according to the reference lines, and adjusting each of the grids to correspond to the edge of each tooth, to obtain an actual size of each of the teeth; providing an upper edge curve and a lower edge curve on the three-dimensional mouth-opening face model as a smile curve; aligning each grid with the smile curve, and placing a denture model in each grid; and adjusting a denture contour of each denture model, to generate the denture dentition model.
Type: Grant
Filed: August 9, 2021
Date of Patent: September 12, 2023
Assignee: ASUSTEK COMPUTER INC.
Inventor: Wei-Po Lin
-
Patent number: 11747627
Abstract: Configurations are disclosed for a health system to be used in various healthcare applications, e.g., for patient diagnostics, monitoring, and/or therapy. The health system may comprise a light generation module to transmit light or an image to a user, one or more sensors to detect a physiological parameter of the user's body, including their eyes, and processing circuitry to analyze an input received in response to the presented images to determine one or more health conditions or defects.
Type: Grant
Filed: September 12, 2022
Date of Patent: September 5, 2023
Assignee: Magic Leap, Inc.
Inventors: Nicole Elizabeth Samee, John Graham Macnamara, Christopher M. Harrises, Brian T. Schowengerdt, Rony Abovitz, Mark Baerenrodt
-
Patent number: 11734876
Abstract: A method assigns weights to physical imager pixels in order to generate photorealistic images for virtual perspectives in real-time. The imagers are arranged in three-dimensional space such that they sparsely sample the light field within a scene of interest. This scene is defined by the overlapping fields of view of all the imagers or for subsets of imagers. The weights assigned to imager pixels are calculated based on the relative poses of the virtual perspective and physical imagers, properties of the scene geometry, and error associated with the measurement of geometry. This method is particularly useful for accurately rendering numerous synthesized perspectives within a digitized scene in real-time in order to create immersive, three-dimensional experiences for applications such as performing surgery, infrastructure inspection, or remote collaboration.
Type: Grant
Filed: February 4, 2022
Date of Patent: August 22, 2023
Assignee: Proprio, Inc.
Inventors: James Andrew Youngquist, David Julio Colmenares, Adam Gabriel Jones
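One plausible (assumed) form of the pose-based weighting described above is inverse angular distance between the virtual perspective's viewing direction and each physical imager's direction; the `imager_weights` helper below ignores the scene-geometry and measurement-error terms the abstract also mentions:

```python
import numpy as np

def imager_weights(virtual_dir, imager_dirs, eps=1e-6):
    """Weight each physical imager by how closely its viewing
    direction agrees with the virtual perspective (inverse angular
    distance), normalised so the weights sum to 1."""
    v = np.asarray(virtual_dir, dtype=float)
    v = v / np.linalg.norm(v)
    w = []
    for d in imager_dirs:
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        angle = np.arccos(np.clip(v @ d, -1.0, 1.0))
        w.append(1.0 / (angle + eps))     # eps avoids divide-by-zero
    w = np.asarray(w)
    return w / w.sum()
```

An imager aligned with the virtual view dominates the blend, which is the qualitative behaviour image-based rendering weights are designed to have.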
-
Patent number: 11717150
Abstract: An ophthalmologic apparatus includes an optical scanner, an interference optical system, an intraocular distance calculator, an image correcting unit, and a controller. The optical scanner is disposed at an optically substantially conjugate position with a first site of a subject's eye. The interference optical system is configured to split light from a light source into reference light and measurement light, to project the measurement light onto the subject's eye via the optical scanner, and to detect interference light between returning light of the light from the subject's eye and the reference light via the optical scanner. The image forming unit is configured to form a tomographic image of the subject's eye corresponding to a first traveling direction of the measurement light deflected by the optical scanner, based on a detection result of the interference light.
Type: Grant
Filed: May 28, 2020
Date of Patent: August 8, 2023
Assignee: TOPCON CORPORATION
Inventors: Tatsuo Yamaguchi, Ryoichi Hirose, Michiko Nakanishi
-
Patent number: 11712162
Abstract: A system for testing and/or training the vision of a user is disclosed herein. The system includes at least one camera, a visual display device having an output screen, and a data processing device operatively coupled to the at least one camera and the visual display device. In one embodiment, the data processing device is programmed to determine a head position, head velocity, and/or head speed of a user during a vision test or vision training routine from a plurality of images of the head of the user captured by the at least one camera. In another embodiment, the data processing device is programmed to determine, based upon an input signal received from a user input device, a contrast display setting for a screen background relative to at least one visual target, the contrast display setting enabling the user to gradually adapt to increasing levels of visual stimulation.
Type: Grant
Filed: May 23, 2022
Date of Patent: August 1, 2023
Assignee: Bertec Corporation
Inventors: Necip Berme, Mohan Chandra Baro, Cameron Scott Hobson
-
Patent number: 11703596
Abstract: A method and system for automatically processing point cloud based on reinforcement learning are provided. The method for automatically processing point cloud based on reinforcement learning according to an embodiment of the present disclosure includes scanning to collect a point cloud (PCL) and an image through a lidar and a camera; calibrating, by a controller, to match locations of the image and the point cloud through reinforcement learning that maximizes a reward including geometric and luminous intensity consistency of the image and the point cloud; and meshing, by the controller, the point cloud into a 3D image through reinforcement learning that minimizes a reward including a difference between a shape of the image and a shape of the point cloud.
Type: Grant
Filed: March 30, 2021
Date of Patent: July 18, 2023
Assignee: HL KLEMOVE CORP.
Inventor: Seongjoo Moon
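The calibration reward described above combines geometric and luminous-intensity consistency; the specific weighting and the edge-based geometric term in `calibration_reward` below are assumptions for illustration, since the abstract does not define either term precisely:

```python
import numpy as np

def calibration_reward(image_edges, projected_points,
                       image_intensity, point_intensity, alpha=0.5):
    """Score a candidate lidar-camera calibration: geometric
    consistency (projected lidar points landing on image edges) plus
    luminous-intensity consistency, blended by weight alpha."""
    geometric = float(np.mean([image_edges[r, c]
                               for r, c in projected_points]))
    intensity = 1.0 - float(np.mean(np.abs(image_intensity -
                                           point_intensity)))
    return alpha * geometric + (1 - alpha) * intensity
```

An RL agent adjusting the extrinsic parameters would then be trained to maximise this reward, pulling the projection toward alignment with image structure.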