Patents Examined by Sing-Wai Wu
-
Patent number: 11620786
Abstract: Disclosed is a hybrid approach to rendering transparent or translucent objects, which combines object-space ray tracing with texture-space parametrization and integration. Transparent or translucent objects are first parameterized using two textures: (1) a texture that stores the surface normal at each location on the transparent or translucent object, and (2) a texture that stores the world-space coordinates at each location on the transparent or translucent object. Ray tracing can then be used to streamline and unify the computation of light transport inside thick media, such as transparent or translucent objects, with the rest of the scene. For each valid (e.g., visible) location on the surface of a transparent or translucent object, the disclosed embodiments trace one or more rays through such objects and compute the resulting lighting in an order-independent fashion. The results are stored in a texture, which is then applied during the final lighting stage.
Type: Grant
Filed: March 5, 2021
Date of Patent: April 4, 2023
Assignee: Electronic Arts Inc.
Inventor: Colin Barré-Brisebois
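As a rough illustration of the texture-space scheme this abstract describes (the function names, data layout, and the `trace_ray` callback are assumptions for the sketch, not details from the patent, whose pipeline runs on a GPU):

```python
# Each texel of the object's UV layout stores a surface normal and a
# world-space position; lighting is computed per valid texel, independently
# of every other texel, and written to a lighting texture.
import numpy as np

def shade_texture_space(normal_tex, position_tex, valid_mask, trace_ray):
    """normal_tex, position_tex: (H, W, 3) arrays; valid_mask: (H, W) bool.
    trace_ray(origin, direction) -> RGB radiance, a stand-in for the ray
    tracer that walks through the thick medium."""
    h, w, _ = normal_tex.shape
    lighting_tex = np.zeros((h, w, 3))
    for y in range(h):
        for x in range(w):
            if not valid_mask[y, x]:      # skip texels that are not visible
                continue
            n = normal_tex[y, x]
            p = position_tex[y, x]
            # Trace into the object along the inward normal; because each
            # texel is shaded in isolation, the result is order-independent.
            lighting_tex[y, x] = trace_ray(p, -n)
    return lighting_tex
```

The resulting `lighting_tex` would then be sampled like any ordinary texture during the final lighting stage.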
-
Patent number: 11610357
Abstract: A system of generating targeted user lists using customizable avatar characteristics includes a messaging server system. The messaging server system includes an application server that generates a database of mutable avatar characteristics, a database of style categories, and a targeted user list. The application server then causes a media content item to be displayed on display screens of electronic devices associated with the set of user identifiers included in the targeted user list. Other embodiments are disclosed.
Type: Grant
Filed: November 5, 2021
Date of Patent: March 21, 2023
Assignee: Snap Inc.
Inventors: Mehrdad Jahangiri, Andrew Maxwell, Athena Rich
-
Patent number: 11606531
Abstract: A method, an apparatus, and a storage medium for image capturing can be applied to a terminal device. When it is determined that the terminal device is in an augmented reality image capturing scene, augmented reality environment information of the augmented reality image capturing scene is determined. When an image capturing instruction is received, an image is captured and the augmented reality environment information corresponding to the image is recorded, to obtain a target image carrying the augmented reality environment information.
Type: Grant
Filed: July 8, 2020
Date of Patent: March 14, 2023
Assignee: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
Inventors: Xiaojun Wu, Daming Xing
-
Patent number: 11594022
Abstract: Aspects of the invention include generating a combined raster image from point cloud data and reference data describing an original location of a power line; selecting a set of candidate pixels from the combined raster image describing an updated location of the power line, wherein the selection is based at least in part on a location of pixels in the combined raster image that describe the original location; detecting pixels from the set of candidate pixels that describe the updated location of the power line; and modifying the combined raster image to reflect the updated location of the power line.
Type: Grant
Filed: November 16, 2020
Date of Patent: February 28, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Linsong Chu, Mudhakar Srivatsa, Raghu Kiran Ganti
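One way to picture the candidate-selection step is as a spatial filter: point-cloud pixels are kept only if they fall near the rasterized original line. This is a minimal sketch under assumed details (binary rasters, a Chebyshev distance threshold); the patent does not fix either choice:

```python
import numpy as np

def select_candidates(cloud_mask, original_mask, max_dist=1):
    """cloud_mask, original_mask: (H, W) bool rasters.
    Returns a bool mask of point-cloud pixels lying within max_dist
    (Chebyshev distance) of any original power-line pixel."""
    h, w = original_mask.shape
    near = np.zeros((h, w), dtype=bool)
    for y, x in zip(*np.nonzero(original_mask)):
        # Mark a (2*max_dist+1)-square neighborhood around each line pixel.
        y0, y1 = max(0, y - max_dist), min(h, y + max_dist + 1)
        x0, x1 = max(0, x - max_dist), min(w, x + max_dist + 1)
        near[y0:y1, x0:x1] = True
    return cloud_mask & near
```

Detection of the updated line would then run only on the pixels this mask keeps, which is what ties the selection to the original location.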
-
Patent number: 11592691
Abstract: Systems and methods are disclosed for generating a 3D computer model of an eyewear product, using a computer system, the method including obtaining an inventory comprising a plurality of product frames; scanning a user's anatomy; extracting measurements of the user's anatomy; obtaining a first model of a contour and/or surface of the user's anatomy, based on the extracted measurements of the user's anatomy; identifying, based on the contour and/or the surface of the user's anatomy, a first product frame among the plurality of product frames; determining adjustments to the first product frame based on the contour and/or the surface of the user's anatomy; and generating a second model rendering comprising the adjusted first product frame matching the contour and/or the surface of the user's anatomy.
Type: Grant
Filed: May 16, 2022
Date of Patent: February 28, 2023
Assignee: Bespoke, Inc.
Inventors: Eric J. Varady, Robert Varady, Wyatt Eberspacher
-
Patent number: 11576725
Abstract: A system for creating at least one model of a bone and implanted implant comprises a processing unit; and a non-transitory computer-readable memory communicatively coupled to the processing unit and comprising computer-readable program instructions executable by the processing unit for: obtaining at least one image of at least part of a bone and of an implanted implant on the bone, the at least one image being patient specific; obtaining a virtual model of the implanted implant using an identity of the implanted implant; overlaying the virtual model of the implanted implant on the at least one image to determine an orientation of the implanted implant relative to the bone in the at least one image; and generating and outputting a current bone and implant model using the at least one image, the virtual model of the implanted implant, and the overlaying.
Type: Grant
Filed: December 12, 2018
Date of Patent: February 14, 2023
Assignee: ORTHOSOFT ULC
Inventors: Ramnada Chav, Jean-Sebastien Merette, Tin Nguyen, Karine Duval, Pierre Couture
-
Patent number: 11568604
Abstract: Methods and systems for presenting an object on a screen of a head mounted display (HMD) include receiving an image of a real-world environment in proximity of a user wearing the HMD. The image is received from one or more forward facing cameras of the HMD and processed for rendering on a screen of the HMD by a processor within the HMD. A gaze direction of the user wearing the HMD is detected using one or more gaze detecting cameras of the HMD that are directed toward one or each eye of the user. Images captured by the forward facing cameras are analyzed to identify an object captured in the real-world environment that is in line with the gaze direction of the user, wherein the image of the object is rendered at a first virtual distance that causes the object to appear out-of-focus when presented to the user. A signal is generated to adjust a zoom factor for a lens of the one or more forward facing cameras so as to cause the object to be brought into focus.
Type: Grant
Filed: August 14, 2019
Date of Patent: January 31, 2023
Assignee: Sony Interactive Entertainment Inc.
Inventors: Jeffrey Roger Stafford, Michael Taylor
-
Patent number: 11562524
Abstract: An example control system includes a memory and at least one processor to obtain image data from a given region and perform image analysis on the image data to detect a set of objects in the given region. For each object of the set, the example control system may classify each object as being one of multiple predefined classifications of object permanency, including (i) a fixed classification, (ii) a static and unfixed classification, and/or (iii) a dynamic classification. The control system may generate at least a first layer of an occupancy map for the given region that depicts each detected object that is of the fixed classification and excludes each detected object that is either of the static and unfixed classification or of the dynamic classification.
Type: Grant
Filed: October 31, 2017
Date of Patent: January 24, 2023
Assignee: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
Inventors: Jonathan Salfity, David Murphy, Will Allen
-
Patent number: 11562532
Abstract: A site specifying device includes a memory; and a processor coupled to the memory, the processor configured to: store three-dimensional model data indicating a three-dimensional model of an object; display the three-dimensional model based on the three-dimensional model data; and select from the three-dimensional model a site in a range of a depth specified toward an inner side of the three-dimensional model from a region surrounded by a closed curve on a surface of the three-dimensional model, according to an input of the closed curve to the surface of the displayed three-dimensional model and an input to specify the depth from the surface of the three-dimensional model.
Type: Grant
Filed: April 16, 2020
Date of Patent: January 24, 2023
Assignee: FUJITSU LIMITED
Inventors: Daisuke Kushibe, Masahiro Watanabe
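The selection rule reduces to two tests per model point: does it project inside the closed-curve region, and does it lie within the specified depth of the surface? A minimal sketch, where `inside_region` stands in for the point-in-region test the UI would supply (an assumption; the patent leaves its implementation open):

```python
import numpy as np

def select_site(points, surface_depths, inside_region, max_depth):
    """points: (N, 3) model points; surface_depths: (N,) distance of each
    point inward from the model surface; inside_region(p) -> bool tests
    whether p falls within the user-drawn closed curve."""
    keep = []
    for p, d in zip(points, surface_depths):
        # Keep points that are both shallow enough and inside the curve.
        if d <= max_depth and inside_region(p):
            keep.append(p)
    return np.array(keep)
```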
-
Patent number: 11557083
Abstract: The present disclosure discloses a photography-based 3D modeling system and method, and an automatic 3D modeling apparatus and method, including: (S1) attaching a mobile device and a camera to the same camera stand; (S2) obtaining multiple images used for positioning from the camera or the mobile device during movement of the stand, and obtaining a position and a direction of each photo capture point, to build a tracking map that uses a global coordinate system; (S3) generating 3D models on the mobile device or a remote server based on an image used for 3D modeling at each photo capture point; and (S4) placing the individual 3D models of all photo capture points in the global three-dimensional coordinate system based on the position and the direction obtained in S2, and connecting the individual 3D models of multiple photo capture points to generate an overall 3D model that includes multiple photo capture points.
Type: Grant
Filed: August 20, 2020
Date of Patent: January 17, 2023
Assignee: Shanghai Yiwo Information Technology Co., LTD.
Inventors: Ming Zhao, Zhongzheng Xiang, Pei Cai
-
Patent number: 11541903
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using a surfel map to generate a prediction for a state of an environment. One of the methods includes obtaining surfel data comprising a plurality of surfels, wherein each surfel corresponds to a respective different location in an environment, and each surfel has associated data that comprises an uncertainty measure; obtaining sensor data for one or more locations in the environment, the sensor data having been captured by one or more sensors of a first vehicle; determining one or more particular surfels corresponding to respective locations of the obtained sensor data; and combining the surfel data and the sensor data to generate a respective object prediction for each of the one or more locations of the obtained sensor data.
Type: Grant
Filed: June 3, 2020
Date of Patent: January 3, 2023
Assignee: Waymo LLC
Inventors: Carlos Hernandez Esteban, Michael Montemerlo, Peter Pawlowski, David Yonchar Margines
-
Patent number: 11544845
Abstract: A digital imaging method of analyzing pixel data of an image of a user's body for determining a user-specific trapped hair value before removing hair is disclosed. The digital imaging method includes: aggregating training images of respective individuals' bodies before removing hair; training, using the pixel data of the training images, a trapped hair model to determine trapped hair values associated with a trapped hair scale ranging from least trapped hair to most trapped hair; receiving an image of a user's body before removing hair; analyzing the user image using the trapped hair model to determine a user-specific trapped hair value; generating a user-specific electronic recommendation designed to address a feature identifiable within the pixel data of the user's body based on the user-specific trapped hair value; and rendering the recommendation on a display screen of a user computing device.
Type: Grant
Filed: July 2, 2020
Date of Patent: January 3, 2023
Assignee: The Gillette Company LLC
Inventors: Susan Clare Robinson, Leigh Knight, Tina Susannah Clarelli
-
Patent number: 11538221
Abstract: A method to process tiles of a screen space includes determining a tile-processing order for tiles of a first batch of primitives based on a tile-processing order for a second batch of primitives, in which the second batch of primitives is processed prior to the first batch of primitives. The tiles of the first batch of primitives are processed based on the tile-processing order determined for the tiles of the first batch of primitives. The tile-processing order is updated as tiles of the first batch of primitives are pushed to a backend processing portion of a graphics processing unit. In one embodiment, determining the tile-processing order for the tiles of the first batch of primitives includes arranging the tiles of the first batch of primitives that have a same screen space as tiles of the second batch of primitives based on a most-recently-processed-tile-to-least-recently-processed-tile order of the second batch of primitives.
Type: Grant
Filed: November 16, 2020
Date of Patent: December 27, 2022
Inventors: Sushant Kondguli, Nilanjan Goswami
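The ordering heuristic in the last sentence can be sketched in a few lines: tiles shared with the previous batch are visited most-recently-processed first (their working set is likeliest still resident), and the remaining tiles follow in submission order. The names are illustrative; the patent describes scheduling internal to a GPU, not a host-side routine:

```python
def order_tiles(new_tiles, prev_processed_order):
    """new_tiles: tile IDs of the current batch; prev_processed_order: tile
    IDs of the previous batch in the order they were processed."""
    # Higher index in prev_processed_order == processed more recently.
    last_use = {t: i for i, t in enumerate(prev_processed_order)}
    shared = [t for t in new_tiles if t in last_use]
    fresh = [t for t in new_tiles if t not in last_use]
    # Most-recently-processed shared tiles first, then the new tiles.
    shared.sort(key=lambda t: last_use[t], reverse=True)
    return shared + fresh
```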
-
Patent number: 11538213
Abstract: Systems and methods create and distribute addressable virtual content with interactivity. The virtual content may depict a live event and may be customized for each individual user based on dynamic characteristics (e.g., habits, preferences, etc.) of the user that are captured during user interaction with the virtual content. The virtual content is generated with low latency between the actual event and the live content, which allows the user to interactively participate in actions related to the live event. The virtual content may represent a studio with multiple display screens that each show different live content (of the same or different live events), and may also include graphic displays that include related data such as statistics corresponding to the live event, athletes at the event, and so on. The content of the display screens and graphics may be automatically selected based on the dynamic characteristics of the user.
Type: Grant
Filed: November 5, 2020
Date of Patent: December 27, 2022
Assignee: LIVE CGI, INC.
Inventor: Marc Rowley
-
Patent number: 11532119
Abstract: Embodiments of the present disclosure are directed to methods and computer systems for converting datasets into three-dimensional ("3D") mesh surface visualizations, displaying the mesh surface on a computer display, comparing two three-dimensional mesh surface structures by blending two different primary colors to create a secondary color, and computing the distance between two three-dimensional mesh surface structures converted from two closely-matched datasets. For qualitative analysis, the system includes a three-dimensional structure comparison control engine that is configured to convert a dataset with three-dimensional structure into three-dimensional surfaces with mesh surface visualization. The control engine is also configured to assign a color and a translucency value to the three-dimensional surface for the user to perform qualitative comparison analysis. For quantitative analysis, the control engine is configured to compute the distance field between two closely-matched datasets.
Type: Grant
Filed: January 20, 2021
Date of Patent: December 20, 2022
Assignee: VARIAN MEDICAL SYSTEMS INTERNATIONAL AG
Inventors: Janos Zatonyi, Marcin Novotni, Patrik Kunz
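The quantitative step, a distance field between two closely-matched meshes, can be approximated per vertex by nearest-vertex distance. This brute-force sketch is an assumption for illustration; the patent does not mandate a particular distance algorithm:

```python
import numpy as np

def distance_field(verts_a, verts_b):
    """verts_a: (N, 3), verts_b: (M, 3) mesh vertex arrays.
    Returns (N,) distance from each vertex of mesh A to its nearest
    vertex of mesh B (a vertex-to-vertex approximation of the
    surface-to-surface distance field)."""
    diffs = verts_a[:, None, :] - verts_b[None, :, :]   # (N, M, 3)
    return np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)
```

For the qualitative overlay, one mesh rendered in red and the other in green would blend to yellow wherever the translucent surfaces coincide, which is the color-blending comparison the abstract describes.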
-
Patent number: 11521306
Abstract: An image processing apparatus comprises a changing unit configured to change a display area of an image from a first display area to a second display area including at least a portion of the first display area; an acquiring unit configured to acquire a first value indicating luminance, in which brightness contrast is considered, in an image displayed in the first display area, and a second value indicating luminance, in which brightness contrast is considered, in an image displayed in the second display area; and a correcting unit configured to correct luminance of the image displayed in the second display area based on the first value and the second value that are acquired by the acquiring unit.
Type: Grant
Filed: January 24, 2020
Date of Patent: December 6, 2022
Assignee: CANON KABUSHIKI KAISHA
Inventor: Hiroaki Nashizawa
-
Patent number: 11514654
Abstract: Methods and systems are presented for determining a virtual focus model for a camera apparatus, the camera apparatus comprising one or more image capture elements and one or more optics devices through which light in an optical path passes from a stage environment to at least one of the one or more image capture elements, the stage environment including a virtual scene display for displaying a virtual scene.
Type: Grant
Filed: December 9, 2021
Date of Patent: November 29, 2022
Assignee: Unity Technologies SF
Inventors: Kimball D. Thurston, III, Joseph W. Marks, Luca Fascione, Millicent Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller, Peter M. Hillman
-
Patent number: 11508131
Abstract: A system, method, or computer program product for generating composite images. One of the systems includes a capture device to capture an image of a physical environment; and one or more storage devices storing instructions that are operable, when executed by one or more processors of the system, to cause the one or more processors to: obtain an image of the physical environment as captured by the capture device; identify a visually-demarked region on a surface in the physical environment as depicted in the image; process the image to generate a composite image of the physical environment that includes a depiction of a virtual object, wherein a location of the depiction of the virtual object in the composite image is based on a location of the depiction of the visually-demarked region in the image; and cause the composite image to be displayed for a user.
Type: Grant
Filed: November 6, 2020
Date of Patent: November 22, 2022
Assignee: Tanzle, Inc.
Inventors: Nancy L. Clemens, Michael A. Vesely
-
Patent number: 11501507
Abstract: A method of motion compensation for geometry representation of 3D data is described herein. The method performs motion compensation by first identifying corresponding 3D surfaces in the time domain, followed by a 3D-to-2D projection of the motion-compensated 3D surface patches, and finally performing 2D motion compensation on the projected 3D surface patches.
Type: Grant
Filed: December 27, 2018
Date of Patent: November 15, 2022
Assignee: SONY GROUP CORPORATION
Inventor: Danillo Graziosi
-
Patent number: 11490964
Abstract: The disclosure relates to a method, and also to a correspondingly configured imaging device, for planning support for an interventional procedure. In the method, a model of a hollow organ is created from a 3D image dataset. A deformation of the hollow organ is then simulated, based on a course of a guide facility in the hollow organ, through a deformation of the model. In accordance with the deformed model, a spatially resolved compression and/or stretching of the hollow organ, which is brought about by an introduction of the guide facility, is determined and specified.
Type: Grant
Filed: December 10, 2019
Date of Patent: November 8, 2022
Assignee: Siemens Healthcare GmbH
Inventors: Katharina Breininger, Marcus Pfister