Patents Examined by Frank S Chen
-
Patent number: 11790601
Abstract: A minimal volumetric 3D transmission implementation enables efficient transmission of a 3D model to a client device. A volumetric 3D model is generated using a camera rig to capture frames of a subject. A viewer is able to select a view of the subject. A system determines an optimal subset of cameras of the camera rig to utilize to capture frames to generate the volumetric 3D model based on the viewer's selected view. The volumetric 3D model is transmitted to the user device. If the user changes the view, the process repeats, and a new subset of cameras is selected to generate the volumetric 3D model at a different angle.
Type: Grant
Filed: September 28, 2021
Date of Patent: October 17, 2023
Assignee: SONY GROUP CORPORATION
Inventors: Nikolaos Georgis, Kiyoharu Sassa
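One way to read the camera-selection step is as a nearest-view heuristic: score each rig camera by how well its optical axis aligns with the viewer's chosen view direction and keep the top k. The scoring rule and all names below are assumptions for illustration, not the patent's actual method.

```python
import math

def select_camera_subset(camera_dirs, view_dir, k):
    """Pick the k rig cameras whose optical axes best align with the
    viewer's selected view direction (cosine similarity), and return
    their indices in ascending order."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def norm(v):
        m = math.sqrt(dot(v, v))
        return tuple(x / m for x in v)
    v = norm(view_dir)
    scored = sorted(
        ((dot(norm(d), v), i) for i, d in enumerate(camera_dirs)),
        reverse=True,
    )
    return sorted(i for _, i in scored[:k])

# Eight cameras evenly spaced on a ring; viewer looks along +x, so the
# camera at 0 degrees and its two 45-degree neighbors are chosen:
cams = [(math.cos(a), math.sin(a), 0.0)
        for a in (i * math.pi / 4 for i in range(8))]
print(select_camera_subset(cams, (1.0, 0.0, 0.0), 3))  # [0, 1, 7]
```

When the viewer changes the view, re-running the selection with the new view direction reproduces the repeat-and-reselect loop described above.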
-
Patent number: 11783533
Abstract: In one embodiment, a method includes receiving a rendered image, motion vector data, and a depth map corresponding to a current frame of a video stream generated by an application, calculating a current three-dimensional position corresponding to the current frame of an object presented in the rendered image using the depth map, calculating a past three-dimensional position of the object corresponding to a past frame using the motion vector data and the depth map, estimating a future three-dimensional position of the object corresponding to a future frame based on the past three-dimensional position and the current three-dimensional position of the object, and generating an extrapolated image corresponding to the future frame by reprojecting the object presented in the rendered image to a future viewpoint associated with the future frame using the future three-dimensional position of the object.
Type: Grant
Filed: February 1, 2022
Date of Patent: October 10, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Jian Zhang, Xiang Wei, David James Borel, Matthew Robert Fulghum, Neel Bedekar
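Under a constant-velocity assumption with equally spaced frames, the estimation step above reduces to linear extrapolation per object: future = current + (current - past). A minimal sketch (the function name and the constant-velocity model are assumptions, not the patent's stated method):

```python
def extrapolate_position(past, current):
    """Linearly extrapolate an object's future 3D position from its past
    and current positions, assuming constant velocity and equal frame
    spacing: future = current + (current - past)."""
    return tuple(c + (c - p) for p, c in zip(past, current))

# An object moving right and toward the camera continues on its path:
future = extrapolate_position((0.0, 0.0, 2.0), (0.1, 0.0, 1.9))
print(future)  # roughly (0.2, 0.0, 1.8)
```

The extrapolated position can then feed the reprojection step, warping the rendered object to the future viewpoint.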
-
Patent number: 11783552
Abstract: In one implementation, a method of including a person in a CGR experience or excluding the person from the CGR experience is performed by a device including one or more processors, non-transitory memory, and a scene camera. The method includes, while presenting a CGR experience, capturing an image of a scene; detecting, in the image of the scene, a person; and determining an identity of the person. The method includes determining, based on the identity of the person, whether to include the person in the CGR experience or exclude the person from the CGR experience. The method includes presenting the CGR experience based on the determination.
Type: Grant
Filed: December 21, 2021
Date of Patent: October 10, 2023
Assignee: APPLE INC.
Inventors: Daniel Ulbricht, Amit Kumar K C, Angela Blechschmidt, Chen-Yu Lee, Eshan Verma, Mohammad Haris Baig, Tanmay Batra
-
Patent number: 11783409
Abstract: Under an embodiment of the invention, an image capturing and processing system creates 3D image-based rendering (IBR) for real estate. The system provides image-based rendering of real property, the computer system including a user interface for visually presenting an image-based rendering of a real property to a user; and a processor to obtain two or more photorealistic viewpoints from ground truth image data capture locations; combine and process two or more instances of ground truth image data to create a plurality of synthesized viewpoints; and visually present a viewpoint in a virtual model of the real property on the user interface, the virtual model including photorealistic viewpoints and synthesized viewpoints.
Type: Grant
Filed: August 11, 2022
Date of Patent: October 10, 2023
Assignee: Appliance Computing III, Inc.
Inventors: David Eraker, Aravind Kalaiah, Robert McGarty
-
Patent number: 11776222
Abstract: A method includes: recording a series of frames; recording a set of motion data representing motion of the mobile device; detecting relative positions of a 3D constellation of objects based on the series of frames and the set of motion data; generating classifications of the 3D constellation of objects by calculating a classification of each object in a set of object classes; calculating a transform aligning the 3D constellation of objects with a 3D localization map; accessing a set of augmented reality assets defined by the 3D localization map; calculating a position of the mobile device relative to the 3D localization map based on the transform and the set of motion data; and rendering the set of augmented reality assets based on positions of the set of augmented reality assets in the 3D localization map and based on the position of the mobile device in the 3D localization map.
Type: Grant
Filed: December 20, 2021
Date of Patent: October 3, 2023
Assignee: Roblox Corporation
Inventors: Mark Stauber, Jaeyong Sung, Devin Haslam, Amichai Levy
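The transform that aligns the detected constellation with the localization map is a rigid transform; once known, the same transform maps the device's position into map coordinates. A 2D sketch (rotation about +z plus translation; names and the 2D simplification are assumptions for illustration):

```python
import math

def apply_transform(rotation_deg, translation, point):
    """Map a device-frame 2D point into the localization map's frame by
    a rigid transform: rotate about +z, then translate."""
    a = math.radians(rotation_deg)
    x, y = point
    rx = x * math.cos(a) - y * math.sin(a)
    ry = x * math.sin(a) + y * math.cos(a)
    return (rx + translation[0], ry + translation[1])

# Device one unit along its local x-axis; map frame rotated 90 degrees
# and offset by (5, 0) relative to the device frame:
print(apply_transform(90.0, (5.0, 0.0), (1.0, 0.0)))  # approx (5.0, 1.0)
```

Rendering the AR assets then only requires expressing both the assets and the device pose in this shared map frame.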
-
Patent number: 11774254
Abstract: A method is provided for determining angular relationships from a point of interest to a plurality of peripheral points of interest on a map. One or more cost functions from the point of interest to the plurality of the peripheral points of interest on the map are analyzed. A plurality of vectors emanating from the point of interest to the plurality of the peripheral points of interest on a different representation of the map are displayed. Another method is provided for identifying points of interest on a map. Regions of high interest are identified on the map. Regions of low interest are identified on the map. The regions of high interest are expanded on a different representation of the map. The regions of low interest are contracted by an amount proportional to an amount the regions of high interest are expanded on the different representation of the map.
Type: Grant
Filed: March 22, 2022
Date of Patent: October 3, 2023
Assignee: Palantir Technologies Inc.
Inventor: Peter Wilczynski
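The proportional expand/contract can be sketched in one dimension: grow high-interest segment lengths by a factor k and shrink low-interest segments uniformly so the total map length is unchanged. The names, the 1D simplification, and the uniform-shrink rule are assumptions, not the patent's specification.

```python
def remap_lengths(lengths, is_high_interest, k):
    """Rescale map segment lengths: high-interest segments grow by
    factor k; low-interest segments shrink by a common factor chosen so
    the total length is preserved (assumes k is small enough that the
    shrink factor stays positive)."""
    total = sum(lengths)
    hi = sum(l for l, h in zip(lengths, is_high_interest) if h)
    lo = total - hi
    shrink = (total - hi * k) / lo  # uniform factor for low interest
    return [l * k if h else l * shrink
            for l, h in zip(lengths, is_high_interest)]

# Two high-interest end segments grow 1.5x; the middle one absorbs it:
out = remap_lengths([2.0, 6.0, 2.0], [True, False, True], 1.5)
print(out)  # approx [3.0, 4.0, 3.0] -- total length still 10
```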
-
Patent number: 11776201
Abstract: Implementations described herein relate to methods, systems, and computer-readable media to relight a video. In some implementations, a computer-implemented method includes receiving a plurality of frames of a video. Each video frame includes depth data and color data for a plurality of pixels. The method further includes segmenting each frame based on the depth data to classify each pixel as a foreground pixel or a background pixel. The method further includes setting the depth value of each background pixel to a fixed depth value and applying a Gaussian filter to smooth the depth values of the plurality of pixels. The method further includes calculating surface normals based on the depth values of the plurality of pixels. The method further includes rendering a relighted frame by adding a virtual light based on the surface normals and the color data.
Type: Grant
Filed: April 18, 2022
Date of Patent: October 3, 2023
Assignee: Google LLC
Inventors: Guangyu Zhou, Dillon Cower
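A minimal sketch of two of the steps above: estimating a surface normal from depth-map gradients, and shading with a Lambertian virtual light. The function names and the simple lighting model are illustrative assumptions, not the patent's implementation.

```python
import math

def depth_normal(depth, x, y):
    """Estimate the surface normal at interior pixel (x, y) from
    central-difference depth gradients; returns a unit vector."""
    dzdx = (depth[y][x + 1] - depth[y][x - 1]) / 2.0
    dzdy = (depth[y + 1][x] - depth[y - 1][x]) / 2.0
    n = (0.0 - dzdx, 0.0 - dzdy, 1.0)
    m = math.sqrt(sum(c * c for c in n))
    return tuple(c / m for c in n)

def lambert(normal, light_dir):
    """Virtual-light intensity for one pixel: max(0, n . l),
    with the light direction l normalized."""
    m = math.sqrt(sum(c * c for c in light_dir))
    l = tuple(c / m for c in light_dir)
    return max(0.0, sum(a * b for a, b in zip(normal, l)))

# A flat depth plane faces the camera, so its normal is (0, 0, 1)
# and a head-on virtual light gives full intensity:
flat = [[3.0] * 3 for _ in range(3)]
n = depth_normal(flat, 1, 1)
print(n, lambert(n, (0.0, 0.0, 1.0)))  # (0.0, 0.0, 1.0) 1.0
```

Flattening background depth to a constant (as the abstract describes) makes background normals face the camera, which keeps the virtual light from producing spurious background shading.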
-
Patent number: 11776232
Abstract: Certain aspects and features of this disclosure relate to virtual 3D pointing and manipulation. For example, video communication is established between a presenter client device and a viewer client device. A presenter video image is captured. A 3D image of a 3D object is rendered on the client devices and a presenter avatar is rendered on at least the viewer client device. The presenter avatar includes at least a portion of the presenter video image. When a positional input is detected at the presenter client device, the system renders, on the viewer client device, an articulated virtual appurtenance associated with the positional input, the 3D image, and the presenter avatar. A virtual interaction between the articulated virtual appurtenance and the 3D image appears to a viewer as naturally positioned for the interaction with respect to the viewer.
Type: Grant
Filed: February 8, 2022
Date of Patent: October 3, 2023
Assignee: Adobe Inc.
Inventors: Kazi Rubaiat Habib, Tianyi Wang, Stephen DiVerdi, Li-Yi Wei
-
Patent number: 11769293
Abstract: A camera motion estimation method for an augmented reality tracking algorithm, according to an embodiment, is performed by a sequence production application executed by a processor of a computing device. The method includes a step of displaying a target object on a sequence production interface, a step of setting a virtual camera trajectory on the displayed target object, a step of generating an image sequence by rendering images obtained when the target object is viewed along the set virtual camera trajectory, and a step of reproducing the generated image sequence.
Type: Grant
Filed: November 22, 2022
Date of Patent: September 26, 2023
Assignee: VIRNECT CO., LTD.
Inventors: Ki Young Kim, Benjamin Holler
-
Patent number: 11763481
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program, and a method for performing operations comprising: receiving a video that depicts a person. The operations further include identifying a set of skeletal joints of the person. The operations further include identifying a pose of the person depicted in the video based on positioning of the set of skeletal joints (or detecting a hand pose, detecting a mirror frame, or detecting a mobile device). The operations further include determining, based on the pose of the person (or detecting a hand pose, detecting a mirror frame, or detecting a mobile device), that the video comprises a mirror reflection of the person. The operations further include, in response to determining that the video comprises the mirror reflection of the person, causing display of a 3D virtual object in the video.
Type: Grant
Filed: October 20, 2021
Date of Patent: September 19, 2023
Assignee: Snap Inc.
Inventors: Matan Zohar, Yanli Zhao, Brian Fulkerson, Itamar Berger
-
Patent number: 11756260
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for imaging scenes of three-dimensional (3-D) environments using a virtual reality system. A request to display a target three-dimensional environment in a virtual reality system is received. An image of a scene of the target three-dimensional environment is generated. The image of the scene includes features distributed within the scene of the target three-dimensional environment according to a distance from a viewing point of the scene of the target three-dimensional environment. A request to display a modified image of the scene of the target three-dimensional environment is received. The modified image of the scene of the target three-dimensional environment is generated by adding auxiliary data associated with the plurality of features relative to the distance from the viewing point of the scene of the target three-dimensional environment to each of the plurality of features.
Type: Grant
Filed: January 27, 2022
Date of Patent: September 12, 2023
Inventors: Amanda Ponce, Enrique Moya
-
Patent number: 11748934
Abstract: This application provides a three-dimensional (3D) expression base generation method performed by a computer device. The method includes: obtaining image pairs of a target object in n types of head postures, each image pair including a color feature image and a depth image in a head posture; constructing a 3D human face model of the target object according to the n image pairs; and generating a set of expression bases of the target object according to the 3D human face model of the target object. According to this application, based on a reconstructed 3D human face model, a set of expression bases of a target object is further generated, so that more diversified product functions may be expanded based on the set of expression bases.
Type: Grant
Filed: October 15, 2021
Date of Patent: September 5, 2023
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Xiangkai Lin, Linchao Bao
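If the expression bases are applied blendshape-style (an assumption; the abstract does not specify the blending rule), a new expression is synthesized as the neutral mesh plus weighted offsets toward each base. All names below are illustrative:

```python
def apply_expression(neutral, bases, weights):
    """Blendshape-style synthesis: each output vertex coordinate is
    neutral plus the weighted sum of (base - neutral) offsets."""
    out = list(neutral)
    for base, w in zip(bases, weights):
        for j, (b, n) in enumerate(zip(base, neutral)):
            out[j] += w * (b - n)
    return out

# One vertex, one hypothetical "smile" base at half strength:
neutral = [0.0, 0.0, 0.0]
smile = [1.0, 0.0, 0.0]
print(apply_expression(neutral, [smile], [0.5]))  # [0.5, 0.0, 0.0]
```

This is one way a set of expression bases enables "more diversified product functions": any expression in the spanned space is reachable by choosing the weight vector.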
-
Patent number: 11748957
Abstract: The subject technology receives image data and depth data. The subject technology selects an augmented reality content generator corresponding to a three-dimensional (3D) effect. The subject technology applies the 3D effect to the image data and the depth data based at least in part on the selected augmented reality content generator. The subject technology generates, using a processor, a message including information related to the applied 3D effect, the image data, and the depth data.
Type: Grant
Filed: November 12, 2021
Date of Patent: September 5, 2023
Assignee: Snap Inc.
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
-
Patent number: 11750786
Abstract: A providing apparatus configured to provide three-dimensional geometric data to be used to generate a virtual viewpoint image receives a data request from a communication apparatus, decides which of a plurality of pieces of three-dimensional geometric data including first three-dimensional geometric data and second three-dimensional geometric data with a different quality than the first three-dimensional geometric data is to be provided to the communication apparatus from which the received data request was transmitted, and provides the three-dimensional geometric data decided on from among the plurality of pieces of three-dimensional geometric data, to the communication apparatus as a response to the received data request.
Type: Grant
Filed: November 2, 2021
Date of Patent: September 5, 2023
Assignee: CANON KABUSHIKI KAISHA
Inventor: Takashi Hanamoto
-
Patent number: 11741717
Abstract: A data generator which achieves further improvement includes circuitry and memory connected to the circuitry. The circuitry, in operation: obtains sensing data from each of a plurality of moving bodies, each of which includes a plurality of sensors, the sensing data being configured based on results of sensing by the plurality of sensors; and generates synthesized data by mapping the sensing data of the moving body into a virtual space, and when generating the synthesized data, determines a position of the sensing data to be mapped into the virtual space, based at least on a position of the moving body in a real space corresponding to the sensing data.
Type: Grant
Filed: December 18, 2020
Date of Patent: August 29, 2023
Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventors: Takahiro Nishi, Tadamasa Toma, Toshiyasu Sugio
-
Patent number: 11741650
Abstract: Dynamically customized animatable 3D models of virtual characters ("avatars") in electronic messaging are provided. Users of instant messaging are represented dynamically by customized animatable 3D models of a corresponding virtual character. An example method comprises receiving input from a mobile device user, the input being an audio stream and/or an image/video stream; and based on an animatable 3D model and the streams, automatically generating a dynamically customized animatable 3D model corresponding to the user, including performing dynamic conversion of the input into an expression stream and corresponding time information. The example method includes generating a link to the expression stream and corresponding time information, for transmission in an instant message, and causing display of the customized animatable 3D model. Link generation and causing display is performed automatically or in response to user action.
Type: Grant
Filed: June 28, 2021
Date of Patent: August 29, 2023
Assignee: Didimo, Inc.
Inventors: Verónica Costa Teixeira Pinto Orvalho, Hugo Miguel dos Reis Pereira, José Carlos Guedes dos Prazeres Miranda, Eva Margarida Ferreira de Abreu Almeida
-
Patent number: 11734468
Abstract: Described in detail herein are systems and methods for generating computerized models of structures using geometry extraction and reconstruction techniques. The system includes a computing device coupled to an input device. The input device obtains raw data scanned by a sensor. The computing device is programmed to execute a data fusion process that fuses the raw data, and a geometry extraction process is performed on the fused data to extract features such as walls, floors, ceilings, roof planes, etc. Large- and small-scale features of the structure are reconstructed using the extracted features. The large- and small-scale features are reconstructed by the system into a floor plan (contour) and/or a polyhedron corresponding to the structure. The system can also process exterior features of the structure to automatically identify the condition of the roof and areas of roof damage.
Type: Grant
Filed: August 20, 2019
Date of Patent: August 22, 2023
Assignee: Xactware Solutions, Inc.
Inventors: Jeffery D. Lewis, Jeffrey C. Taylor
-
Patent number: 11734875
Abstract: An apparatus comprises a receiver (301) for receiving an image representation of a scene. A determiner (305) determines viewer poses for a viewer with respect to a viewer coordinate system. An aligner (307) aligns a scene coordinate system with the viewer coordinate system by aligning a scene reference position with a viewer reference position in the viewer coordinate system. A renderer (303) renders view images for different viewer poses in response to the image representation and the alignment of the scene coordinate system with the viewer coordinate system. An offset processor (309) determines the viewer reference position in response to an alignment viewer pose where the viewer reference position is dependent on an orientation of the alignment viewer pose and has an offset with respect to a viewer eye position for the alignment viewer pose. The offset includes an offset component in a direction opposite to a view direction of the viewer eye position.
Type: Grant
Filed: January 19, 2020
Date of Patent: August 22, 2023
Assignee: Koninklijke Philips N.V.
Inventors: Fons Bruls, Christiaan Varekamp, Bart Kroon
-
Patent number: 11734881
Abstract: A method and system for providing access to and control of parameters within a scenegraph includes redefining the semantics of components or nodes within a scenegraph. The set of components or nodes (depending on the scenegraph structure) is required to enable access from the Application User Interface to selected scenegraph information. In one embodiment, a user interface is generated for controlling the scenegraph parameters. In addition, constraints can be implemented that allow or disallow access to certain scenegraph parameters and restrict their range of values.
Type: Grant
Filed: August 2, 2021
Date of Patent: August 22, 2023
Assignee: GRASS VALLEY CANADA
Inventors: Ralph Andrew Silberstein, David Sahuc, Donald Johnson Childers
-
Patent number: 11734927
Abstract: In general, embodiments of the present invention provide methods, apparatuses, systems, computing devices, computing entities, and/or the like for performing mixed reality processing using at least one of depth-based partitioning of a point cloud capture data object, object-based partitioning of a point cloud capture data object, mapping a partitioned point cloud capture data object to detected objects of a three-dimensional scan data object, performing noise filtering on point cloud capture data objects based at least in part on geometric inferences from three-dimensional scan data objects, and performing geometrically-aware object detection using point cloud capture data objects based at least in part on geometric inferences from three-dimensional scan data objects.
Type: Grant
Filed: November 24, 2021
Date of Patent: August 22, 2023
Assignee: Optum Technology, Inc.
Inventors: Yash Sharma, Vivek R. Dwivedi, Anshul Verma
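Depth-based partitioning of a point cloud capture can be sketched as bucketing points into fixed-size depth slices; the binning rule and names are assumptions for illustration, not the patent's method:

```python
def partition_by_depth(points, bin_size):
    """Partition a point cloud into depth slices: each point (x, y, z)
    is assigned to bin floor(z / bin_size), keyed by bin index."""
    bins = {}
    for p in points:
        bins.setdefault(int(p[2] // bin_size), []).append(p)
    return bins

# Three near points land in slice 0; the far point lands in slice 2:
cloud = [(0, 0, 0.4), (1, 2, 1.1), (3, 1, 1.9), (2, 2, 4.5)]
parts = partition_by_depth(cloud, 2.0)
print(sorted(parts))  # [0, 2]
```

Each slice can then be processed independently, e.g. matched against detected objects in the 3D scan data or noise-filtered per depth range.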