Patents Examined by Diane M Wills
-
Patent number: 11941776
Abstract: Disclosed is a system and associated methods for improving interactions with three-dimensional (“3D”) objects in a 3D space by dynamically defining the positioning of the handles used to control the interactions with the camera and/or the 3D objects. The system analyzes the positioning of different constructs that form the 3D objects. From the analysis, the system defines handles at different dynamically determined positions about the 3D objects. The system applies an edit from a first dynamically determined position about a particular 3D object in response to a user interaction with a first handle defined at the first dynamically determined position, and applies the edit from a second dynamically determined position about the particular 3D object in response to a user interaction with a second handle defined at the second dynamically determined position.
Type: Grant
Filed: March 30, 2023
Date of Patent: March 26, 2024
Assignee: Illuscio, Inc.
Inventor: Max Good
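A hypothetical sketch of the dynamically positioned handles this abstract describes. The placement rule used here (the object's centroid plus the centers of its bounding-box faces) and the scale edit are illustrative assumptions, not the patented method:

```python
# Handles placed at dynamically determined positions derived from an
# analysis of the object's point constructs; an edit is then applied
# from whichever handle the user interacts with.
from dataclasses import dataclass

@dataclass
class Handle:
    name: str
    position: tuple  # (x, y, z)

def define_handles(points):
    """Place handles at the centroid and the centers of the
    axis-aligned bounding-box faces of the point constructs."""
    xs, ys, zs = zip(*points)
    lo = (min(xs), min(ys), min(zs))
    hi = (max(xs), max(ys), max(zs))
    c = tuple((a + b) / 2 for a, b in zip(lo, hi))
    handles = [Handle("centroid", c)]
    for axis, label in enumerate("xyz"):
        for bound, side in ((lo, "min"), (hi, "max")):
            pos = list(c)
            pos[axis] = bound[axis]
            handles.append(Handle(f"{label}_{side}", tuple(pos)))
    return handles

def apply_edit(points, handle, scale):
    """Apply a scale edit anchored at the chosen handle's position."""
    hx, hy, hz = handle.position
    return [(hx + (x - hx) * scale,
             hy + (y - hy) * scale,
             hz + (z - hz) * scale) for x, y, z in points]
```

Anchoring the same edit at different handles produces different results, which is the point of making the handle positions data-driven rather than fixed.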
-
Patent number: 11935198
Abstract: A virtual mailbox is established for a first user based on a virtual mailbox definition specified by the first user. A message directed to the first user is received from a first device associated with a second user. A location within a real-world environment of the first user corresponding to the virtual mailbox is identified. A marker associated with the virtual mailbox is detected within the real-world environment of the first user. Based on detecting the marker, the second device presents the message overlaid on the real-world environment at the location corresponding to the virtual mailbox.
Type: Grant
Filed: June 29, 2021
Date of Patent: March 19, 2024
Assignee: Snap Inc.
Inventors: Rajan Vaish, Yu Jiang Tham, Brian Anthony Smith, Sven Kratz, Karen Stolzenberg, David Meisenholder
-
Patent number: 11904182
Abstract: A method of producing an optical filter is provided. The method comprises depositing a first mirror layer onto a substrate; depositing an insulating layer on the first mirror layer; exposing at least some of a plurality of portions of a surface of the insulating layer to a dose of energy; and developing the insulating layer in order to remove a volume from the at least some of the plurality of portions of the insulating layer, wherein the volume of the insulating layer removed from each portion is related to the dose of energy to which that portion was exposed, and wherein the remaining thickness after the removal of the volume from each portion is likewise related to that dose. The method further comprises depositing a second mirror layer on the remaining thickness of each of the plurality of portions of the insulating layer.
Type: Grant
Filed: December 10, 2020
Date of Patent: February 20, 2024
Assignee: CHANGZHOU NO. 2 PEOPLE'S HOSPITAL
Inventors: Xinye Ni, Chunying Li, Zhengda Lu, Kai Xie, Lei Li, Zhiyin Zhu, Tao Wang
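An illustrative numeric model of the dose-dependent development step: the volume removed from each exposed portion scales with its energy dose, leaving a dose-dependent remaining thickness between the two mirror layers. The linear dose response and its coefficient are assumptions for illustration only, not process parameters from the patent:

```python
# Dose-controlled development: each portion of the insulating layer
# loses material in proportion to the dose it received, clamped so a
# portion can never lose more material than the layer holds.
def develop_layer(initial_thickness_nm, doses, removal_per_dose_nm=1.5):
    """Return (removed, remaining) thickness lists, one entry per portion."""
    removed = [min(initial_thickness_nm, d * removal_per_dose_nm) for d in doses]
    remaining = [initial_thickness_nm - r for r in removed]
    return removed, remaining
```

Each remaining thickness sets the spacing between the two mirror layers at that portion, which is what lets a single development step define spatially varying filter characteristics.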
-
Patent number: 11900528
Abstract: A method of rendering a view is disclosed. Three occlusion planes associated with an interior cavity of a three-dimensional object included in the view are created. The three occlusion planes are positioned based on a camera position and orientation. Any objects or parts of objects that are in a line of sight between the camera and any one of the three occlusion planes are culled. The view is rendered from the perspective of the camera.
Type: Grant
Filed: May 27, 2021
Date of Patent: February 13, 2024
Assignee: Unity IPR ApS
Inventors: Andrew Peter Maneri, Donnavon Troy Webb, Jonathan Randall Newberry
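A simplified sketch of the culling rule: any object on the line of sight between the camera and one of the occlusion planes is removed before rendering. Planes are reduced to their center points and objects to bounding spheres, both simplifications of the patented geometry:

```python
import math

def _point_segment_dist(p, a, b):
    """Distance from point p to segment a-b (all 3-tuples)."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab)
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / denom))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.dist(p, closest)

def cull(camera, plane_centers, objects):
    """objects: list of (center, radius) bounding spheres. Returns the
    objects that survive, i.e. do not block the camera's view of any
    occlusion plane."""
    kept = []
    for center, radius in objects:
        blocking = any(_point_segment_dist(center, camera, pc) <= radius
                       for pc in plane_centers)
        if not blocking:
            kept.append((center, radius))
    return kept
```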
-
Patent number: 11902694
Abstract: A method and an apparatus for creating a ghosting special effect in a movie, an electronic device, and a medium, relating to the field of computer applications. The method comprises converting frames at specified moments in a to-be-processed movie into to-be-processed images; performing background removing operations on the to-be-processed images to obtain target object images corresponding to the specified moments in the to-be-processed movie; integrating the target object images into the to-be-processed movie, wherein each of the target object images is placed at its original position in the to-be-processed movie; and setting a continuous display period for each of the target object images in the to-be-processed movie.
Type: Grant
Filed: May 5, 2019
Date of Patent: February 13, 2024
Assignee: JUPITER PALACE PTE. LTD.
Inventors: Jiahong Gao, Sha Cao
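A pure-Python sketch of that pipeline: grab frames at specified moments, remove the background to isolate the target object, then re-insert each extracted object at its original position in the following frames for a continuous display period. Frames here are tiny grayscale grids and "background removal" is a zero-background mask, both stand-ins for real video processing:

```python
def extract_object(frame, background_value=0):
    """Keep only pixels that differ from the background (None = removed)."""
    return [[px if px != background_value else None for px in row] for row in frame]

def composite_ghosts(frames, ghost_moments, display_period):
    """Overlay the object extracted at each specified moment onto the
    next `display_period` frames, at its original pixel position."""
    out = [[row[:] for row in f] for f in frames]
    for t in ghost_moments:
        ghost = extract_object(frames[t])
        for dt in range(1, display_period + 1):
            if t + dt >= len(out):
                break
            for y, row in enumerate(ghost):
                for x, px in enumerate(row):
                    if px is not None and out[t + dt][y][x] == 0:
                        out[t + dt][y][x] = px
    return out
```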
-
Patent number: 11893681
Abstract: Provided is a two-dimensional (2D) image processing method including obtaining a 2D image, processing the obtained 2D image by using a trained convolutional neural network (CNN) to obtain at least one camera parameter and at least one face model parameter from the 2D image, and generating a three-dimensional (3D) face model based on the obtained at least one camera parameter and at least one face model parameter.
Type: Grant
Filed: December 6, 2019
Date of Patent: February 6, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Ivan Viktorovich Glazistov, Ivan Olegovich Karacharov, Andrey Yurievich Shcherbinin, Ilya Vasilievich Kurilin
-
Patent number: 11887257
Abstract: A method and an apparatus for virtual training based on tangible interaction are provided. The apparatus acquires data for virtual training, and acquires a three-dimensional position of a real object based on a depth image and color image of the real object and infrared (IR) data included in the obtained data. Then, virtualization of the overall appearance of a user is performed by extracting a depth from depth information on a user image included in the obtained data and matching the extracted depth with the corresponding color information; the depth data and color data for the user obtained through this virtualization are visualized in the virtual training content. In addition, the apparatus performs correction on joint information using the joint information and the depth information included in the obtained data, estimates a posture of the user using the corrected joint information, and estimates a posture of a training tool using the depth information and IR data included in the obtained data.
Type: Grant
Filed: November 17, 2021
Date of Patent: January 30, 2024
Assignee: Electronics and Telecommunications Research Institute
Inventors: Seong Min Baek, Youn-Hee Gil, Cho-Rong Yu, Hee Sook Shin, Sungjin Hong
-
Patent number: 11854231
Abstract: Determining the position and orientation (or “pose”) of an augmented reality device includes capturing an image of a scene having a number of features and extracting descriptors of features of the scene represented in the image. The descriptors are matched to landmarks in a 3D model of the scene to generate sets of matches between the descriptors and the landmarks. Estimated poses are determined from at least some of the sets of matches between the descriptors and the landmarks. Estimated poses having deviations from an observed location measurement that are greater than a threshold value may be eliminated. Features used in the determination of estimated poses may also be weighted by the inverse of the distance between the feature and the device, so that closer features are accorded more weight.
Type: Grant
Filed: January 31, 2022
Date of Patent: December 26, 2023
Assignee: Snap Inc.
Inventors: Maria Jose Garcia Sopo, Qi Pan, Edward James Rosten
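The last two sentences name two concrete refinements that are easy to sketch: discarding estimated poses that deviate too far from an observed location measurement, and weighting each matched feature by the inverse of its distance to the device. How the weighted features feed the pose solver is left out here; these helpers only illustrate the two rules:

```python
import math

def filter_poses(estimated_positions, observed, threshold):
    """Keep only pose estimates whose position lies within `threshold`
    of the observed location measurement."""
    return [p for p in estimated_positions
            if math.dist(p, observed) <= threshold]

def feature_weights(feature_positions, device_position, eps=1e-6):
    """Weight each feature by the inverse of its distance to the
    device, so closer features are accorded more weight."""
    return [1.0 / (math.dist(f, device_position) + eps)
            for f in feature_positions]
```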
-
Patent number: 11847869
Abstract: A concurrent simulation of multiple sensor modalities includes identifying the multiple sensor modalities in association with a simulation scenario, determining a timeline interleaving a publishing or operating frequency of each of the multiple sensor modalities relative to each other, determining a current time interval of a sliding window in the timeline, determining a simulation segment of the simulation scenario using the current time interval of the sliding window, rendering a scene based on the simulation segment, executing a simulation to concurrently simulate the multiple sensor modalities using ray tracing in the rendered scene, and generating simulated sensor data of the multiple sensor modalities based on executing the simulation.
Type: Grant
Filed: February 22, 2023
Date of Patent: December 19, 2023
Assignee: Aurora Operations, Inc.
Inventors: Ivona Andrzejewski, Steven Capell, Ryusuke Villemin, Carl Magnus Wrenninge
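A sketch of the timeline-interleaving and sliding-window steps: each sensor modality ticks at its own frequency, the ticks are merged into one ordered timeline, and the simulation segment is the slice of ticks inside the current window. The example frequencies and window bounds are made up:

```python
def interleave_timeline(frequencies_hz, duration_s):
    """Merge per-sensor tick times into one time-ordered list of
    (time_s, sensor_name) events."""
    ticks = []
    for name, hz in frequencies_hz.items():
        period = 1.0 / hz
        n = int(duration_s * hz)
        ticks.extend((round(i * period, 9), name) for i in range(n + 1))
    return sorted(ticks)

def window_segment(timeline, start_s, end_s):
    """Ticks inside the half-open sliding window [start_s, end_s)."""
    return [t for t in timeline if start_s <= t[0] < end_s]
```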
-
Patent number: 11842438
Abstract: A method and a terminal device for determining an occluded area of a virtual object, and a non-transitory computer-readable storage medium, are provided. The method comprises: constructing a scene three-dimensional map of a current frame according to feature points of the current frame and corresponding depth information thereof, and constructing a three-dimensional scene model according to the feature points of the current frame; displaying a designated virtual object at a location corresponding to a click operation, in response to detecting the click operation of a user on the scene three-dimensional map; comparing depth values of the three-dimensional scene model with depth values of a model of the virtual object; determining the occluded area of the virtual object in the current frame according to the comparison result; and determining another occluded area of the virtual object in a next frame according to the occluded area of the virtual object in the current frame.
Type: Grant
Filed: October 12, 2021
Date of Patent: December 12, 2023
Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Inventor: Yulu Wang
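The depth-comparison step reduces to a per-pixel test: where the reconstructed scene model is closer to the camera than the virtual object's model, the object is occluded. A minimal sketch, with depth maps as small 2D lists and None marking pixels the object does not cover:

```python
def occlusion_mask(scene_depth, object_depth):
    """True where the real scene occludes the virtual object, i.e. the
    scene depth is smaller (closer to the camera) than the object depth."""
    mask = []
    for srow, orow in zip(scene_depth, object_depth):
        mask.append([o is not None and s < o for s, o in zip(srow, orow)])
    return mask
```

The abstract's final step, carrying the current frame's occluded area into the next frame, would reuse this mask as a prior instead of recomputing it from scratch.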
-
Patent number: 11836868
Abstract: An augmented reality system includes a first scene and a second scene. Further, the augmented reality system includes a partially reflective surface positioned relative to the first scene, the second scene, and an audience to facilitate viewing of one of the first scene or the second scene through the partially reflective surface and to facilitate reflection of the other of the first scene or the second scene toward the audience as augmented reality imagery. A sensor of the augmented reality system is designed to detect and generate data indicative of a characteristic of the first scene, and a correlative effect system is operable to receive the data and adjust the second scene based thereon.
Type: Grant
Filed: September 29, 2021
Date of Patent: December 5, 2023
Assignee: UNIVERSAL CITY STUDIOS LLC
Inventor: Akiva Meir Krauthamer
-
Patent number: 11830019
Abstract: This disclosure provides methods for generating a vehicle wrap design. The method includes: obtaining customer information corresponding to an entity; generating, using the computing device, a vehicle wrap design for covering a vehicle based on the obtained customer information; generating, using the computing device, a three-dimensional rendering of the vehicle, wherein the vehicle wrap design is applied to the three-dimensional rendering of the vehicle; and causing a client device to display the three-dimensional rendering with the applied vehicle wrap.
Type: Grant
Filed: May 2, 2022
Date of Patent: November 28, 2023
Assignee: WRAPMATE INC.
Inventors: Christopher Loar, Jacob A. Lozow
-
Patent number: 11830150
Abstract: The following relates generally to light detection and ranging (LIDAR) and artificial intelligence (AI). In some embodiments, a system: receives light detection and ranging (LIDAR) data generated from a LIDAR camera; receives preexisting utility line data; and determines a location of the utility line based upon: (i) the received LIDAR data, and (ii) the received preexisting utility line data.
Type: Grant
Filed: April 26, 2021
Date of Patent: November 28, 2023
Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
Inventors: Nicholas Carmelo Marotta, Laura Kennedy, JD Johnson Willingham
-
Patent number: 11816850
Abstract: This disclosure relates to reconstructing three-dimensional models of objects from two-dimensional images. According to a first aspect, this specification describes a computer implemented method for creating a three-dimensional reconstruction from a two-dimensional image, the method comprising: receiving a two-dimensional image; identifying an object in the image to be reconstructed and identifying a type of said object; spatially anchoring a pre-determined set of object landmarks within the image; extracting a two-dimensional image representation from each object landmark; estimating a respective three-dimensional representation for the respective two-dimensional image representations; and combining the respective three-dimensional representations resulting in a fused three-dimensional representation of the object.
Type: Grant
Filed: April 19, 2021
Date of Patent: November 14, 2023
Assignee: Snap Inc.
Inventors: Riza Alp Guler, Iason Kokkinos
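The final combining step can be illustrated in isolation: each anchored landmark contributes its own 3D estimate, and the estimates are merged into one fused representation. Confidence-weighted averaging is an assumed fusion rule here, not the method the patent claims:

```python
def fuse_landmark_estimates(estimates):
    """estimates: list of (point_xyz, confidence) pairs, one per
    landmark. Returns the confidence-weighted mean point as the fused
    representation."""
    total = sum(c for _, c in estimates)
    return tuple(sum(p[i] * c for p, c in estimates) / total
                 for i in range(3))
```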
-
Patent number: 11804014
Abstract: In some implementations, representations of applications are identified, positioned, and configured in a computer generated reality (CGR) environment based on context. The location at which the representation of the application is positioned may be based on the context of the CGR environment. The context may be determined based on non-image data that is separate from image data of the physical environment being captured for the CGR environment. As examples, the non-image data may relate to the user; a user preference, attribute, gesture, motion, activity, or interaction; semantics related to user input or an external source of information; the current time, date, or time period; information from another device involved in the CGR; etc.
Type: Grant
Filed: April 10, 2020
Date of Patent: October 31, 2023
Assignee: Apple Inc.
Inventors: Daniele Casaburo, Anselm Grundhoefer, Eshan Verma, Omar Elafifi, Pedro Da Silva Quelhas
-
Patent number: 11783495
Abstract: An apparatus for calculating torque and force about body joints to predict muscle fatigue includes a processor configured to receive image frames depicting a subject. The processor is configured to execute at least one machine learning model using the image frames as an input, to generate a 2D representation of the subject, a subject mass value for the subject based on the 2D representation, and a 3D representation of the subject based on the 2D representation, where the 3D representation includes a temporal joints profile. The processor is further configured to compute each torque value for each joint of the subject from the 3D representation, based on the subject mass value. The processor is further configured to generate a muscle fatigue prediction for each joint of the subject, based on a set of torque values and a torque threshold.
Type: Grant
Filed: October 25, 2022
Date of Patent: October 10, 2023
Assignee: INSEER Inc.
Inventors: Mitchell Messmore, Alec Diaz-Arias, John Rachid, Dmitriy Shin, Jean E. Robillard
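A simplified statics sketch of the torque-and-threshold idea: the gravitational torque about a joint is the cross product r x F between the lever arm to a segment's center of mass and the weight derived from the estimated mass. The 2D case and the single fixed threshold are illustrative assumptions; the patent's model works from a full 3D temporal joints profile:

```python
G = 9.81  # gravitational acceleration, m/s^2

def joint_torque_2d(joint_xy, com_xy, mass_kg):
    """Torque (N*m) about a joint from gravity acting at the center of
    mass com_xy; positive = counterclockwise."""
    rx = com_xy[0] - joint_xy[0]
    ry = com_xy[1] - joint_xy[1]
    fx, fy = 0.0, -mass_kg * G
    return rx * fy - ry * fx

def fatigue_flag(torques, torque_threshold):
    """Flag a joint as fatigue-prone if any torque magnitude in its
    temporal profile exceeds the threshold."""
    return any(abs(t) > torque_threshold for t in torques)
```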
-
Patent number: 11776086
Abstract: Systems and methods provide a provisioning framework for a distributed graphics processing unit (GPU) service. A network device in a network receives, from an application, a service request for multi-access edge compute (MEC)-based virtual graphics processing unit (vGPU) services. The network device receives real-time utilization data from multiple MEC clusters in different MEC network locations and generates a real-time utilization view of the multiple MEC clusters in the different MEC network locations. The network device selects, based on the real-time utilization view, one of the different MEC network locations to provide the vGPU services and instructs one of the multiple MEC clusters in the selected MEC network location to perform container provisioning and service provisioning for the vGPU services.
Type: Grant
Filed: September 20, 2021
Date of Patent: October 3, 2023
Assignee: Verizon Patent and Licensing Inc.
Inventors: Mohammad Raheel Khalid, William Patrick Dildine, Richard Christopher Lamb, Vinaya Kumar Polavarapu, Paul Duree
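A sketch of the placement decision: given a real-time utilization view of MEC clusters across network locations, pick a location that can satisfy the vGPU request. The field names and the "lowest utilization wins" policy are assumptions; the patent does not disclose a specific selection rule here:

```python
def select_mec_location(utilization_view, required_vgpus):
    """utilization_view: {location: {"free_vgpus": int, "utilization": float}}.
    Returns the feasible location with the lowest utilization, or None
    if no location has enough free vGPU capacity."""
    feasible = {loc: stats for loc, stats in utilization_view.items()
                if stats["free_vgpus"] >= required_vgpus}
    if not feasible:
        return None
    return min(feasible, key=lambda loc: feasible[loc]["utilization"])
```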
-
Patent number: 11778162
Abstract: This disclosure relates generally to a method and system for draping a 3D garment on a 3D human body. Dressing digital humans in 3D has gained much attention due to its use in online shopping; draping 3D garments over the 3D human body has immense applications in virtual try-on and animation, where accurate fitment of the 3D garment is of the utmost importance. The proposed disclosure is a single unified garment deformation model that learns the shared space of variations for a body shape, a body pose, and a styling garment. The method receives a plurality of human body inputs to construct 3D skinned garments for the subject. The deep draper network, trained using a plurality of losses, provides an efficient deep neural network-based method that predicts fast and accurate 3D garment images. The method couples geometric and multi-view perceptual constraints to efficiently learn the garment deformation's high-frequency geometry.
Type: Grant
Filed: December 29, 2021
Date of Patent: October 3, 2023
Assignee: TATA CONSULTANCY SERVICES LIMITED
Inventors: Lokender Tiwari, Brojeshwar Bhowmick
-
Patent number: 11769263
Abstract: Systems and techniques are provided for registering three-dimensional (3D) images to deformable models.
Type: Grant
Filed: January 7, 2021
Date of Patent: September 26, 2023
Assignee: QUALCOMM Incorporated
Inventors: Samuel Sunarjo, Gokce Dane, Ashar Ali, Upal Mahbub
-
Patent number: 11741670
Abstract: A depth image is used to obtain a three dimensional (3D) geometry of an object as an object mesh. The object mesh is obtained using an object shell representation. The object shell representation is based on a series of depth images denoting the entry and exit points on the object surface that camera rays would pass through. Given a set of entry points in the form of a masked depth image of an object, an object shell (an entry image and an exit image) is generated. Since entry and exit images contain neighborhood information given by pixel adjacency, the entry and exit images provide partial meshes of the object which are stitched together in linear time using the contours of the entry and exit images. A complete object mesh is provided in the camera coordinate frame.
Type: Grant
Filed: September 13, 2021
Date of Patent: August 29, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Nikhil Narsingh Chavan Dafle, Sergiy Popovych, Ibrahim Volkan Isler, Daniel Dongyuel Lee
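A compact sketch of the shell representation: the entry depth image records where camera rays first hit the object and the exit image records where they leave, so each masked pixel contributes one vertex to the front partial mesh and one to the back. The pinhole back-projection below, and treating stitching as implied by pixel adjacency, are simplifications of the published method:

```python
def shell_vertices(entry_depth, exit_depth, fx=1.0, fy=1.0, cx=0.0, cy=0.0):
    """Back-project masked pixels (None = background) of the entry and
    exit depth images into camera-frame 3D points, producing the front
    and back partial-mesh vertex lists."""
    front, back = [], []
    for v, (erow, xrow) in enumerate(zip(entry_depth, exit_depth)):
        for u, (de, dx) in enumerate(zip(erow, xrow)):
            if de is None or dx is None:
                continue
            front.append(((u - cx) * de / fx, (v - cy) * de / fy, de))
            back.append(((u - cx) * dx / fx, (v - cy) * dx / fy, dx))
    return front, back
```

Because the two partial meshes share the same pixel grid, their boundary contours coincide, which is what lets the abstract's stitching step close the mesh in linear time.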