Patent Applications Published on March 31, 2022
-
Publication number: 20220101572
Abstract: Described herein is a system for 3D proton imaging encompassing both proton radiography (pRad) and proton CT (pCT). The disclosed system can reduce range uncertainties, while providing a fast and efficient check of patient set up and integrated range along a beam's eye view just before treatment. The disclosed system provides a complete solution to the range inaccuracy problem in proton therapy, substantially reducing the uncertainties of treatment planning by directly measuring relative stopping power without being affected by image artifacts and with much lower dose to the patient than comparable x-ray images. Also described herein is a proton imaging algorithm for prompt iterative 3D pCT image reconstruction, where each iteration is fast and efficient, and the number of iterations is minimized. The method offers a unique solution that optimally fits different protons and does not depend on the starting point for the first iteration.
Type: Application
Filed: September 27, 2021
Publication date: March 31, 2022
Inventors: Don Frederic DEJONGH, Ethan Alan DEJONGH
-
Publication number: 20220101573
Abstract: A method for generating result slice images with at least partially different slice thickness based on a tomosynthesis image data set of a breast includes generating average value slices and maximum value slices (MIP) based on the tomosynthesis image data set, frequency dividing the average value slices into low-pass filtered and high-pass filtered average value slices, high-pass filtering of maximum value slices to form high-pass filtered maximum value slices, mixing high-pass filtered maximum value slices and high-pass filtered average value slices to form mixed high-pass filtered maximum value slices, combining the low-pass filtered average value slices with the mixed high-pass filtered maximum value slices to form the result slice images, and applying a moving maximum value across a selected thickness of maximum value slices or across a selected thickness of mixed high-pass filtered maximum value slices.
Type: Application
Filed: September 28, 2021
Publication date: March 31, 2022
Applicant: Siemens Healthcare GmbH
Inventors: Marcel BEISTER, Ludwig RITSCHL, Steffen KAPPLER, Mathias HOERNIG
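As a rough illustration of the frequency-division-and-mix pipeline in the abstract above (not the patented implementation), the sketch below low-pass filters an average-value slice and a MIP slice with a simple box blur, mixes their high-pass residuals, and adds the mix back onto the low-pass average slice. The `mix` weight, filter kernel, and all names are assumptions.

```python
import numpy as np

def box_lowpass(img, k=5):
    # Simple separable box blur used as a stand-in low-pass filter.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def combine_slices(avg_slice, mip_slice, mix=0.5, k=5):
    """Combine an average-value slice with a maximum-value (MIP) slice.

    Frequency-divides both slices, mixes their high-pass parts, and adds
    the mixed high-pass back onto the low-pass of the average slice.
    """
    avg_lp = box_lowpass(avg_slice, k)
    avg_hp = avg_slice - avg_lp
    mip_hp = mip_slice - box_lowpass(mip_slice, k)
    mixed_hp = mix * mip_hp + (1.0 - mix) * avg_hp
    return avg_lp + mixed_hp
```

With `mix=0` the result reduces exactly to the average-value slice, since its own low- and high-pass parts are recombined unchanged.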
-
Publication number: 20220101574
Abstract: A method is disclosed for generating an image. An embodiment of the method includes detecting a first projection data set via a first group of detector units, the first group including a first plurality of first detector units, each having more than a given number of detector elements; detecting a second projection data set via a second group of detector units, the second group including a second plurality of second detector units, each including, at most, the given number of detector elements; reconstructing first image data based on the first projection data set; reconstructing second image data based on the second projection data set; and combining the first image data and the second image data. A non-transitory computer readable medium, a data processing unit, and an imaging device including the data processing unit are also disclosed.
Type: Application
Filed: December 8, 2021
Publication date: March 31, 2022
Applicant: Siemens Healthcare GmbH
Inventors: Thomas FLOHR, Steffen KAPPLER
-
Publication number: 20220101575
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for removing an anchor point from a Bezier spline while preserving the shape of the Bezier spline. For example, the disclosed systems can replace adjacent input segments of an initial Bezier spline that are connected at an anchor point with a new contiguous segment that does not include an anchor point and that spans the portion of the spline covered by the adjacent segments. The disclosed systems can utilize an objective function to determine tangent vectors that indicate locations of control points for generating the new segment to replace the adjacent segments. In addition, the disclosed systems can generate a modified Bezier spline that includes the new segment in place of the adjacent segments of the initial Bezier spline.
Type: Application
Filed: December 8, 2021
Publication date: March 31, 2022
Inventors: Ankit Phogat, Vineet Batra, Daniel Kaufman
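The core fitting step described above — replacing two adjacent segments with one cubic whose inner control points minimize an objective over sampled spline positions — can be sketched as a linear least-squares problem. This is an illustration under assumed simplifications (fixed endpoints, plain least-squares objective over position samples), not the objective function actually disclosed.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    # Evaluate a cubic Bezier curve at parameter values t (shape (n,)).
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def fit_single_cubic(samples, p0, p3, ts):
    """Least-squares fit of the two inner control points of one cubic
    segment through `samples`, with the endpoints p0 and p3 held fixed.
    The samples would come from the two adjacent segments being replaced.
    """
    ts = np.asarray(ts)
    b1 = 3 * (1 - ts) ** 2 * ts          # Bernstein basis for p1
    b2 = 3 * (1 - ts) * ts ** 2          # Bernstein basis for p2
    A = np.stack([b1, b2], axis=1)       # (n, 2) design matrix
    rhs = samples - (1 - ts)[:, None] ** 3 * p0 - ts[:, None] ** 3 * p3
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return sol[0], sol[1]                # fitted p1, p2
```

When the samples genuinely lie on a single cubic with those endpoints, the fit recovers its control points exactly; for samples drawn from two distinct segments it returns the best single-segment approximation.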
-
Publication number: 20220101576
Abstract: Various methods and systems are provided for translating magnetic resonance (MR) images to pseudo computed tomography (CT) images. In one embodiment, a method comprises acquiring an MR image, generating, with a multi-task neural network, a pseudo CT image corresponding to the MR image, and outputting the MR image and the pseudo CT image. In this way, the benefits of CT imaging with respect to accurate density information, especially in sparse regions of bone which exhibit high dynamic range, may be obtained in an MR-only workflow, thereby achieving the benefits of enhanced soft-tissue contrast in MR images while eliminating CT dose exposure for a patient.
Type: Application
Filed: September 25, 2020
Publication date: March 31, 2022
Inventors: Sandeep Kaushik, Dattesh Shanbhag, Cristina Cozzini, Florian Wiesinger
-
Publication number: 20220101577
Abstract: The disclosure describes one or more embodiments of systems, methods, and non-transitory computer-readable media that generate a transferred hairstyle image that depicts a person from a source image having a hairstyle from a target image. For example, the disclosed systems utilize a face-generative neural network to project the source and target images into latent vectors. In addition, in some embodiments, the disclosed systems quantify (or identify) activation values that control hair features for the projected latent vectors of the target and source image. Furthermore, in some instances, the disclosed systems selectively combine (e.g., via splicing) the projected latent vectors of the target and source image to generate a hairstyle-transfer latent vector by using the quantified activation values.
Type: Application
Filed: September 28, 2020
Publication date: March 31, 2022
Inventors: Saikat Chakrabarty, Sunil Kumar
-
Publication number: 20220101578
Abstract: Methods, systems, and non-transitory computer readable media are disclosed for generating a composite image comprising objects in positions from two or more different digital images. In one or more embodiments, the disclosed system receives a sequence of images and identifies objects within the sequence of images. In one example, the disclosed system determines a target position for a first object based on detecting user selection of the first object in the target position from a first image. The disclosed system can generate a fixed object image comprising the first object in the target position. The disclosed system can generate preview images comprising the fixed object image with the second object sequencing through a plurality of positions as seen in the sequence of images. Based on a second user selection of a desired preview image, the disclosed system can generate the composite image.
Type: Application
Filed: September 30, 2020
Publication date: March 31, 2022
Inventors: Ajay Bedi, Ajay Jain, Jingwan Lu, Anugrah Prakash, Prasenjit Mondal, Sachin Soni, Sanjeev Tagra
-
Publication number: 20220101579
Abstract: A method for displaying a passive cavitation image that shows characteristic information of a passive cavitation includes: receiving an ultrasound signal caused by the passive cavitation; generating a plurality of first passive cavitation images for the passive cavitation at respective predetermined time frames using the received ultrasound signal by DAS beam forming; generating a plurality of second passive cavitation images in which a maximum magnitude signal region is displayed by selecting a main lobe region having a magnitude greater than or equal to a predetermined value in the respective first passive cavitation image; generating a main lobe passive cavitation image in which a main region is displayed in the respective time frame by superimposing the plurality of the second passive cavitation images obtained for the respective time frames; and generating a passive cavitation image by displaying the main lobe passive cavitation image on a background image.
Type: Application
Filed: January 3, 2020
Publication date: March 31, 2022
Inventor: Min Joo CHOI
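The main-lobe selection and superimposition steps in the abstract above can be illustrated as follows; the relative threshold `ratio` and the pixelwise-maximum combination are assumptions made for this sketch, not the disclosed predetermined value or superimposition rule.

```python
import numpy as np

def main_lobe_mask(frame, ratio=0.5):
    """Keep only the main-lobe region of one beamformed frame: pixels
    whose magnitude is within `ratio` of the frame's peak magnitude
    (the 'second passive cavitation image' stage)."""
    mag = np.abs(frame)
    return np.where(mag >= ratio * mag.max(), mag, 0.0)

def superimpose(frames, ratio=0.5):
    """Superimpose the per-time-frame main-lobe images by taking the
    pixelwise maximum, yielding one main-lobe passive cavitation image."""
    masked = [main_lobe_mask(f, ratio) for f in frames]
    return np.maximum.reduce(masked)
```

The resulting image could then be alpha-blended onto a B-mode background image for display.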
-
Publication number: 20220101580
Abstract: A server capable of adjusting a display format of an electronic comic according to a user's preference and a usage pattern includes an identification unit configured to identify each of a plurality of frames based on a comic including the plurality of frames, a determination unit configured to determine an order of the plurality of frames identified by the identification unit, and an organization unit configured to organize an arrangement of the frames based on the order determined by the determination unit. The organization unit may organize an arrangement of the frames based on an instruction signal configured to instruct a display format.
Type: Application
Filed: December 10, 2021
Publication date: March 31, 2022
Inventor: Hiroyuki TAKAHASHI
-
Publication number: 20220101581
Abstract: Provided is an information processing system including: a comparison information acquisition unit that acquires comparison information regarding iris comparison generated based on an iris image including an iris of a recognition subject; and a display image generation unit that generates a display image including an image indicating a content of the comparison information in association with positions in the iris.
Type: Application
Filed: December 13, 2021
Publication date: March 31, 2022
Applicant: NEC Corporation
Inventor: Mamoru INOUE
-
Publication number: 20220101582
Abstract: A content replacement system and method for simultaneously updating a plurality of images of visual designs on an electronic display of an electronic device using synchronized client- and server-side visual design object models by representing visual objects in visual designs using a keyed attribute and associated attribute value comprising a visual object specification.
Type: Application
Filed: December 13, 2021
Publication date: March 31, 2022
Inventors: Alex Uzgin, Donald J. Naylor, Jarongorn Manny Lertpatthanakul, Jeremy Pallai, Jonathan Gaudette, Rebecca Safran, Ramon Harrington
-
Publication number: 20220101583
Abstract: Provided is an information processing system including: a comparison information acquisition unit that acquires comparison information regarding iris comparison generated based on an iris image including an iris of a recognition subject; and a display image generation unit that generates a display image including an image indicating a content of the comparison information in association with positions in the iris.
Type: Application
Filed: December 14, 2021
Publication date: March 31, 2022
Applicant: NEC Corporation
Inventor: Mamoru INOUE
-
Publication number: 20220101584
Abstract: Provided is an information processing system including: a comparison information acquisition unit that acquires comparison information regarding iris comparison generated based on an iris image including an iris of a recognition subject; and a display image generation unit that generates a display image including an image indicating a content of the comparison information in association with positions in the iris.
Type: Application
Filed: December 14, 2021
Publication date: March 31, 2022
Applicant: NEC Corporation
Inventor: Mamoru INOUE
-
Publication number: 20220101585
Abstract: Provided is an information processing system including: a comparison information acquisition unit that acquires comparison information regarding iris comparison generated based on an iris image including an iris of a recognition subject; and a display image generation unit that generates a display image including an image indicating a content of the comparison information in association with positions in the iris.
Type: Application
Filed: December 14, 2021
Publication date: March 31, 2022
Applicant: NEC Corporation
Inventor: Mamoru INOUE
-
Publication number: 20220101586
Abstract: Example methods for generating an animated character in dance poses to music may include generating, by at least one processor, a music input signal based on an acoustic signal associated with the music, and receiving, by the at least one processor, a model output signal from an encoding neural network. A current generated pose data is generated using a decoding neural network, the current generated pose data being based on previous generated pose data of a previous generated pose, the music input signal, and the model output signal. An animated character is generated based on the current generated pose data, and the animated character is caused to be displayed by a display device.
Type: Application
Filed: September 28, 2021
Publication date: March 31, 2022
Inventors: Gurunandan Krishnan Gorumkonda, Hsin-Ying Lee, Jie Xu
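The autoregressive decoding loop implied by the abstract above (each pose conditioned on the previous pose, the current music frame, and the encoder output) can be sketched generically; `decoder` here stands for any callable, not the disclosed network architecture.

```python
def generate_poses(decoder, init_pose, music_frames, model_output):
    """Autoregressive pose rollout: each pose is decoded from the
    previously generated pose, the current music input frame, and the
    fixed encoder output signal."""
    poses = [init_pose]
    for music in music_frames:
        poses.append(decoder(poses[-1], music, model_output))
    return poses[1:]          # drop the seed pose
```

A trained network would replace `decoder`; the animated character is then rendered from each returned pose in sequence.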
-
Publication number: 20220101587
Abstract: Systems and methods can enable control of the motion of an animated character based on imagery (e.g., captured by an image capture device such as a web camera or "webcam") which shows a person in motion. Specifically, the animated character can be automatically rendered to have the same motion as the entity shown in the imagery (e.g., in real time). According to one aspect of the present disclosure, the animated character can be rendered by iteratively transforming (e.g., including deforming the actual geometry of) a vector-based surface illustration. Specifically, the systems and methods of the present disclosure can leverage the scalable and transformable nature of a vector-based surface illustration to provide more realistic motion-controlled animation, in which the underlying geometry of the animated character is able to be adjusted to imitate human motion more realistically (e.g., as opposed to basic rotations of fixed character geometry).
Type: Application
Filed: September 30, 2020
Publication date: March 31, 2022
Inventor: Shan Huang
-
Publication number: 20220101588
Abstract: Techniques for establishing a biomechanical model of a user performing motion are described. A plurality of sensor modules corresponding to a set of designated body parts (e.g., arms or legs) of the user generate sensing signals when the user performs a pose. Sensing data including accelerometer and/or gyroscope data generated from the sensing signals is analyzed to detect a medio-lateral direction of each of the designated body parts to infer the pose from the captured motion of the user.
Type: Application
Filed: November 22, 2021
Publication date: March 31, 2022
Inventors: Pietro Garofalo I. Garofalo, Gabriele Ligorio, Michele Raggi, Josh Sole, Wade Lagrone, Joseph Chamdani
-
Publication number: 20220101589
Abstract: A system provides the ability to import large engineering 3D models from a primary 3D rendering software into a secondary 3D rendering software that does not have the tools or the resources to render the larger 3D model on its own. The system uses a plugin to combine 3D data from the two software sources, and then return the combined 3D data to the secondary 3D rendering software. Components of the system can be remote or cloud based, and the system facilitates video streaming of 3D rendered models that can be manipulated on any computer capable of supporting a video stream.
Type: Application
Filed: July 7, 2021
Publication date: March 31, 2022
Inventors: David Matthew Stevenson, Chase Laurendine, Paul Antony Burton
-
Publication number: 20220101590
Abstract: A system and method for performing intersection testing of rays in a ray tracing system. The ray tracing system uses a hierarchical acceleration structure comprising a plurality of nodes, each identifying one or more elements able to be intersected by a ray. The system makes use of a serial-mode ray intersection process, in which, when a ray intersects a bounding volume, a limited number of new ray requests are generated.
Type: Application
Filed: September 30, 2021
Publication date: March 31, 2022
Inventor: Daniel Barnard
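The limited-request traversal described above can be illustrated with a toy sketch (not the patented implementation): a 1-D bounding-interval hierarchy where each processed request may emit at most `max_new` new child requests, with a continuation request re-queued to cover the remaining children. The data layout and all names are hypothetical.

```python
from collections import deque

def serial_intersect(point, boxes, children, root=0, max_new=2):
    """Serial-mode intersection sketch on a 1-D hierarchy.

    boxes[i] is an (lo, hi) bounding interval and children[i] lists the
    child node ids (empty for a leaf). One request is tested per step;
    a hit against an inner node generates at most `max_new` new child
    requests, plus a continuation request for any remaining children.
    Returns the ids of all intersected leaves.
    """
    queue = deque([(root, 0)])            # (node id, child offset)
    leaves_hit = []
    while queue:
        node, off = queue.popleft()
        lo, hi = boxes[node]
        if not (lo <= point <= hi):       # "ray" misses bounding volume
            continue
        kids = children[node]
        if not kids:
            leaves_hit.append(node)
            continue
        # Emit at most max_new child requests now ...
        for kid in kids[off:off + max_new]:
            queue.append((kid, 0))
        # ... and one continuation request for the remaining children.
        if off + max_new < len(kids):
            queue.append((node, off + max_new))
    return leaves_hit
```

Capping the requests generated per step bounds the growth of the in-flight work queue, which is the point of the serial mode.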
-
Publication number: 20220101591
Abstract: A system and method for performing intersection testing of rays in a ray tracing system. The ray tracing system uses a hierarchical acceleration structure comprising a plurality of nodes, each identifying one or more elements able to be intersected by a ray. The system makes use of a serial-mode ray intersection process, in which, when a ray intersects a bounding volume, a limited number of new ray requests are generated.
Type: Application
Filed: September 30, 2021
Publication date: March 31, 2022
Inventor: Daniel Barnard
-
Publication number: 20220101592
Abstract: A system and method for performing intersection testing of rays in a ray tracing system. The ray tracing system uses a hierarchical acceleration structure comprising a plurality of nodes, each identifying one or more elements able to be intersected by a ray. The system iteratively obtains ray requests, each of which identifies a ray and a node against which the ray is to be tested, and performs intersection testing based on the ray requests. The number of ray requests obtained in each iteration reduces responsive to an amount of memory occupied by information relating to the rays (undergoing intersection testing) increasing.
Type: Application
Filed: September 30, 2021
Publication date: March 31, 2022
Inventor: Daniel Barnard
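The memory-responsive throttling described above can be sketched as a simple policy function: fewer ray requests are fetched per iteration as in-flight ray state fills the memory budget. The linear scaling and all parameter names are assumptions for the sketch, not the patented scheme.

```python
def requests_this_iteration(mem_used, mem_budget, max_batch=16, min_batch=1):
    """Return how many ray requests to obtain this iteration, scaling
    the batch size down linearly as in-flight ray state occupies more
    of the memory budget (never below min_batch)."""
    free_frac = max(0.0, 1.0 - mem_used / mem_budget)
    return max(min_batch, int(max_batch * free_frac))
```

A full batch is fetched when memory is empty; as occupancy approaches the budget, the iteration degenerates toward processing one request at a time, letting in-flight rays retire and free memory.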
-
Publication number: 20220101593
Abstract: A computer system, while displaying a first computer-generated experience with a first level of immersion, receives biometric data corresponding to a first user. In response to receiving the biometric data: in accordance with a determination that the biometric data corresponding to the first user meets first criteria, the computer system displays the first computer-generated experience with a second level of immersion, wherein the first computer-generated experience displayed with the second level of immersion occupies a larger portion of a field of view of the first user than the first computer-generated experience displayed with the first level of immersion; and in accordance with a determination that the biometric data corresponding to the first user does not meet the first criteria, the computer system continues to display the first computer-generated experience with the first level of immersion.
Type: Application
Filed: September 23, 2021
Publication date: March 31, 2022
Inventors: Philipp Rockel, Stephen O. Lemay, William A. Sorrentino, III, Giancarlo Yerkes, Nicholas W. Henderson, Gary I. Butcher, Richard R. Dellinger, Jonathan Ive, Alan C. Dye, Julian Jaede, Julian Hoenig, M. Evans Hankey, Wan Si Wan
-
Publication number: 20220101594
Abstract: Methods, devices, and apparatuses are provided to facilitate a positioning of an item of virtual content in an extended reality environment. For example, a placement position for an item of virtual content can be transmitted to one or more of a first device and a second device. The placement position can be based on correlated map data generated based on first map data obtained from the first device and second map data obtained from the second device. In some examples, the first device can transmit the placement position to the second device.
Type: Application
Filed: November 23, 2021
Publication date: March 31, 2022
Inventors: Pushkar GORUR SHESHAGIRI, Pawan Kumar BAHETI, Ajit Deepak GUPTE, Sandeep Kanakapura LAKSHMIKANTHA
-
Publication number: 20220101595
Abstract: A system and method for generating a full three-dimensional (3D) digital geometry for an ear provides a 3D reconstruction model that not only includes the outer area of an ear, but also the inner canal area up to the second bend. The system includes at least one remote server and a PC device. The remote server manages and processes data needed for the 3D model for an ear. The PC device allows a user to access the system and method. The remote server processes a series of video frames to generate outer ear 3D data points. Full ear geometry 3D data points are generated by comparing the outer ear 3D data points with a plurality of ear impressions. Canal 3D data points are then adjusted with a tragus angle for accuracy. The 3D reconstruction model that includes the outer ear area and inner canal area is finally generated.
Type: Application
Filed: September 29, 2020
Publication date: March 31, 2022
Inventor: Steven Yi
-
Publication number: 20220101596
Abstract: A messaging system performs image processing to relight objects with neural networks for images provided by users of the messaging system. A method of relighting objects with neural networks includes receiving an input image with first lighting properties comprising an object with second lighting properties and processing the input image using a convolutional neural network to generate an output image with the first lighting properties and comprising the object with third lighting properties, where the convolutional neural network is trained to modify the second lighting properties to be consistent with lighting conditions indicated by the first lighting properties to generate the third lighting properties.
Type: Application
Filed: July 26, 2021
Publication date: March 31, 2022
Inventors: Yuriy Volkov, Egor Nemchinov, Gleb Dmukhin
-
Publication number: 20220101597
Abstract: An apparatus to facilitate inferred object shading is disclosed. The apparatus comprises one or more processors to receive rasterized pixel data and hierarchical data associated with one or more objects and perform an inferred shading operation on the rasterized pixel data, including using one or more trained neural networks to perform texture and lighting on the rasterized pixel data to generate a pixel output, wherein the one or more trained neural networks uses the hierarchical data to learn a three-dimensional (3D) geometry, latent space and representation of the one or more objects.
Type: Application
Filed: September 25, 2020
Publication date: March 31, 2022
Applicant: Intel Corporation
Inventors: Selvakumar Panneer, Mrutunjayya Mrutunjayya, Carl S. Marshall, Ravishankar Iyer, Zack Waters
-
Publication number: 20220101598
Abstract: An image processing method may include steps of obtaining a plurality of image data from an object, generating, from the image data, a three-dimensional (3-D) model having at least two different representation modes, and displaying the generated 3-D model. There is an advantage in that a user can check various information of a 3-D model without changing a representation mode by displaying a 3-D model having at least two different representation modes.
Type: Application
Filed: September 24, 2021
Publication date: March 31, 2022
Applicant: MEDIT CORP.
Inventors: Myoung Woo SONG, Beom Sik SUH
-
Publication number: 20220101599
Abstract: Systems and methods are disclosed for recommending products or services by receiving a three-dimensional (3D) model of one or more products; performing motion tracking and understanding an environment with points or planes and estimating light or color in the environment; and projecting the product in the environment.
Type: Application
Filed: December 9, 2021
Publication date: March 31, 2022
Inventor: Bao Tran
-
Publication number: 20220101600
Abstract: Systems and methods for identifying travel way features in real time are provided. A method can include receiving two-dimensional and three-dimensional data associated with the surrounding environment of a vehicle. The method can include providing the two-dimensional data as one or more input into a machine-learned segmentation model to output a two-dimensional segmentation. The method can include fusing the two-dimensional segmentation with the three-dimensional data to generate a three-dimensional segmentation. The method can include storing the three-dimensional segmentation in a classification database with data indicative of one or more previously generated three-dimensional segmentations. The method can include providing one or more datapoint sets from the classification database as one or more inputs into a machine-learned enhancing model to obtain an enhanced three-dimensional segmentation.
Type: Application
Filed: December 10, 2021
Publication date: March 31, 2022
Inventors: Raquel Urtasun, Min Bai, Shenlong Wang
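The 2-D-to-3-D fusion step above can be illustrated with a pinhole projection that paints each 3-D point with the segmentation label of the pixel it projects to. The camera model, intrinsics (`fx`, `fy`, `cx`, `cy`), and all names are assumptions for the sketch, not the disclosed fusion method.

```python
import numpy as np

def fuse_labels(points, labels_2d, fx, fy, cx, cy):
    """Assign each 3-D point (camera frame, z > 0) the 2-D segmentation
    label of the pixel it projects to under a simple pinhole model.
    Points behind the camera or outside the image get label -1."""
    h, w = labels_2d.shape
    out = np.full(len(points), -1, dtype=int)
    for i, (x, y, z) in enumerate(points):
        if z <= 0:
            continue                       # behind the camera
        u = int(round(fx * x / z + cx))    # pixel column
        v = int(round(fy * y / z + cy))    # pixel row
        if 0 <= u < w and 0 <= v < h:
            out[i] = labels_2d[v, u]
    return out
```

The labelled point cloud is the "three-dimensional segmentation" that could then be accumulated in the classification database.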
-
Publication number: 20220101601
Abstract: A system and method for scanning an environment and generating an annotated 2D map is provided. The system includes a 2D scanner having a light source, an image sensor and a first controller. The first controller determines a distance value to at least one of the object points. The system further includes a 360° camera having a movable platform, and a second controller that merges the images acquired by the cameras to generate an image having a 360° view in a horizontal plane. The system also includes processors coupled to the 2D scanner and the 360° camera. The processors are responsive to generate a 2D map of the environment based at least in part on a signal from an operator and the distance value. The processors are further responsive for acquiring a 360° image and integrating it at a location on the 2D map.
Type: Application
Filed: October 7, 2021
Publication date: March 31, 2022
Inventors: Aleksej Frank, Matthias Wolke, Oliver Zweigle
-
Publication number: 20220101602
Abstract: A method for generating a three-dimensional working surface of a human body includes receiving input data corresponding to images; generating a first point cloud from the input data, each point being associated with a three-dimensional spatial coordinate; determining a plurality of attributes at each point of the first point cloud; calculating a set of geometric parameters from a regression carried out from a series of matrix operations performed according to different layers of a neural network trained from a plurality of partial views of parametric models parameterised with different parameterisation configurations; and determining a parameterised model to generate a body model of a body including a first meshing.
Type: Application
Filed: March 19, 2020
Publication date: March 31, 2022
Inventors: Tanguy SERRAT, Ali KHACHLOUF
-
Publication number: 20220101603
Abstract: An electronic device for object rigging includes a processor. The processor is configured to obtain a three-dimensional (3D) scan of an object. The processor is also configured to identify 3D coordinates associated with joints of the 3D scan. The processor is further configured to identify parameters associated with fitting a 3D parametric body model to the 3D scan based on the 3D coordinates of the joints. Additionally, the processor is configured to modify the parameters to reduce 3D joint errors between the 3D coordinates associated with the joints on the 3D scan and the 3D parametric body model. The processor is also configured to generate a rigged 3D scan based on the modified parameters, for performing an animated motion.
Type: Application
Filed: September 27, 2021
Publication date: March 31, 2022
Inventors: Saifeng Ni, Zhipeng Fan
-
Publication number: 20220101604
Abstract: Disclosed herein are a learning-based three-dimensional (3D) model creation apparatus and method. A method for operating a learning-based 3D model creation apparatus includes generating multi-view feature images using supervised learning, creating a three-dimensional (3D) mesh model using a point cloud corresponding to the multi-view feature images and a feature image representing internal shape information, generating a texture map by projecting the 3D mesh model into three viewpoint images that are input, and creating a 3D model using the texture map.
Type: Application
Filed: December 14, 2021
Publication date: March 31, 2022
Applicant: Electronics and Telecommunications Research Institute
Inventors: Seong-Jae LIM, Tae-Joon KIM, Seung-Uk YOON, Seung-Wook LEE, Bon-Woo HWANG, Jin-Sung CHOI
-
Publication number: 20220101605
Abstract: In implementations of systems for shading vector objects, a computing device implements a shading system which detects points along a boundary of a vector-based object. The shading system forms a two-dimensional mesh based on the detected points. The shading system generates a three-dimensional mesh by inflating the two-dimensional mesh based on a geometry of the vector-based object. Color values are applied to a shading mesh based on locations of vertices of the three-dimensional mesh. The shading system generates a shaded vector-based object by blending the vector-based object and the shading mesh.
Type: Application
Filed: September 28, 2020
Publication date: March 31, 2022
Applicant: Adobe Inc.
Inventors: Ankit Phogat, Vineet Batra
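The inflation step above can be illustrated by lifting each 2-D mesh vertex to a height that grows with its distance from the object boundary, producing a rounded profile whose vertex positions could then drive shading. The square-root height profile and all names are arbitrary choices for this sketch, not the disclosed geometry rule.

```python
import numpy as np

def inflate_vertices(verts_2d, boundary, scale=1.0):
    """Lift 2-D mesh vertices into 3-D: each vertex keeps its (x, y)
    and gains a z that grows with its distance to the nearest boundary
    point, so boundary vertices stay flat and interior ones bulge out."""
    b = np.asarray(boundary, dtype=float)
    out = []
    for v in np.asarray(verts_2d, dtype=float):
        d = np.min(np.linalg.norm(b - v, axis=1))   # distance to boundary
        out.append([v[0], v[1], scale * np.sqrt(d)])
    return np.array(out)
```

Per-vertex normals of the inflated mesh would then determine the color values applied to the shading mesh.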
-
Publication number: 20220101606
Abstract: The subject technology selects a set of augmented reality content generators from a plurality of augmented reality content generators. The subject technology causes display, at a client device, of a graphical interface comprising a plurality of selectable graphical items. The subject technology receives, at the client device, a selection of a first selectable graphical item from the plurality of selectable graphical items, the first selectable graphical item comprising a first augmented reality content generator corresponding to a particular geolocation. The subject technology causes display, at the client device, of at least one augmented reality content item generated by the first augmented reality content generator.
Type: Application
Filed: April 20, 2021
Publication date: March 31, 2022
Inventors: Ilteris Kaan Canberk, Virginia Drummond, Jean Luo, Alek Matthiessen, Celia Nicole Mourkogiannis
-
Publication number: 20220101607
Abstract: Augmented reality systems and methods for creating, saving and rendering designs comprising multiple items of virtual content in a three-dimensional (3D) environment of a user. The designs may be saved as a scene, which is built by a user from pre-built sub-components, built components, and/or previously saved scenes. Location information, expressed as a saved scene anchor and position relative to the saved scene anchor for each item of virtual content, may also be saved. Upon opening the scene, the saved scene anchor node may be correlated to a location within the mixed reality environment of the user for whom the scene is opened. The virtual items of the scene may be positioned with the same relationship to that location as they have to the saved scene anchor node. That location may be selected automatically and/or by user input.
Type: Application
Filed: December 10, 2021
Publication date: March 31, 2022
Applicant: Magic Leap, Inc.
Inventors: Jonathan Brodsky, Javier Antonio Busto, Martin Wilkins Smith
-
Publication number: 20220101608
Abstract: Aspects of the present disclosure involve a system and a method for performing operations comprising: capturing, by a client device implementing a messaging application, an image; receiving, from a server, an identification of an augmented reality experience associated with the image, the server identifying the augmented reality experience by assigning visual words to features of the image and searching a visual search database to identify a plurality of marker images, and the server retrieving the augmented reality experience associated with a given one of the plurality of marker images; automatically, in response to capturing the image, displaying one or more graphical elements of the identified augmented reality experience; and receiving input from the user interacting with the one or more graphical elements.
Type: Application
Filed: October 13, 2020
Publication date: March 31, 2022
Inventors: Hao Hu, Kevin Sarabia Dela Rosa, Bogdan Maksymchuk, Volodymyr Piven, Ekaterina Simbereva
-
Publication number: 20220101609
Abstract: Aspects of the present disclosure involve a system and a method for performing operations comprising: capturing an image that depicts currency; receiving an identification of an augmented reality experience associated with the image, the server identifying the augmented reality experience by assigning visual words to features of the image and searching a visual search database to identify a plurality of marker images associated with one or more charities, and the server retrieving the augmented reality experience associated with a given one of the plurality of marker images; automatically, in response to capturing the image that depicts currency, displaying one or more graphical elements of the identified augmented reality experience that represent a charity of the one or more charities; and receiving a donation to the charity from a user of the client device in response to an interaction with the one or more graphical elements that represent the charity.
Type: Application
Filed: October 13, 2020
Publication date: March 31, 2022
Inventors: Hao Hu, Kevin Sarabia Dela Rosa, Bogdan Maksymchuk, Volodymyr Piven, Ekaterina Simbereva
-
Publication number: 20220101610
Abstract: The subject technology receives, at a client device, a selection of a first selectable graphical item, the first selectable graphical item comprising a first augmented reality content generator corresponding to a particular geolocation. The subject technology causes display, at the client device, of a graphical interface comprising a plurality of selectable augmented reality content items, each selectable augmented reality content item corresponding to a particular activity based at least in part on the particular geolocation. The subject technology receives, at the client device, a second selection of a particular selectable augmented reality content item from the plurality of selectable augmented reality content items.
Type: Application
Filed: January 12, 2021
Publication date: March 31, 2022
Inventors: Kaveh Anvaripour, Ilteris Kaan Canberk, Virginia Drummond, Jean Luo, Alek Matthiessen, Celia Nicole Mourkogiannis
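Selecting content items by geolocation, as this abstract describes, reduces to a radius query over a catalog of activity-tagged items. The catalog entries and field names below are hypothetical; only the haversine distance formula is standard.

```python
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def content_items_for_location(catalog, lat, lon, radius_m=500):
    """Keep only AR content items whose tagged activity lies within the radius."""
    return [
        item["name"]
        for item in catalog
        if haversine_m(lat, lon, item["lat"], item["lon"]) <= radius_m
    ]

catalog = [
    {"name": "surf_lens",   "lat": 34.0100, "lon": -118.4960},  # beach activity
    {"name": "museum_tour", "lat": 34.0639, "lon": -118.3592},  # museum activity
]
nearby = content_items_for_location(catalog, 34.0105, -118.4955, radius_m=500)
```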
-
Publication number: 20220101611
Abstract: The present invention relates to an image output device mounted on a vehicle so as to implement augmented reality, and a control method therefor. At least one from among an autonomous vehicle, a user terminal, and a server of the present invention can be linked with an artificial intelligence module, a drone (unmanned aerial vehicle, UAV), a robot, an augmented reality (AR) device, a virtual reality (VR) device, a device related to a 5G service, and the like.
Type: Application
Filed: January 30, 2020
Publication date: March 31, 2022
Inventors: Kihyung LEE, Yujung JANG, Dukyung JUNG
-
Publication number: 20220101612
Abstract: In some embodiments, an electronic device automatically updates the orientation of a virtual object in a three-dimensional environment based on a viewpoint of a user in the three-dimensional environment. In some embodiments, an electronic device automatically updates the orientation of a virtual object in a three-dimensional environment based on viewpoints of a plurality of users in the three-dimensional environment. In some embodiments, the electronic device modifies an appearance of a real object that is between a virtual object and the viewpoint of a user in a three-dimensional environment. In some embodiments, the electronic device automatically selects a location for a user in a three-dimensional environment that includes one or more virtual objects and/or other users.
Type: Application
Filed: September 17, 2021
Publication date: March 31, 2022
Inventors: Alexis Henri PALANGIE, Peter D. ANTON, Stephen O. LEMAY, Christopher D. MCKENZIE, Israel PASTRANA VICENTE
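Orienting a virtual object toward one viewpoint is a billboard rotation; orienting it for several viewpoints is commonly done by facing their circular mean. A minimal sketch of both, assuming a y-up coordinate system and yaw about the vertical axis (the abstract does not specify either):

```python
import math

def yaw_toward_viewpoint(object_pos, viewpoint):
    """Yaw (radians, about the vertical axis) that turns an object's front
    face toward the user's viewpoint in the x/z ground plane."""
    dx = viewpoint[0] - object_pos[0]
    dz = viewpoint[2] - object_pos[2]
    return math.atan2(dx, dz)

def average_yaw(object_pos, viewpoints):
    """For several users, face the circular mean of the individual yaws
    (summing sines and cosines avoids the wrap-around at +/- pi)."""
    yaws = [yaw_toward_viewpoint(object_pos, v) for v in viewpoints]
    s = sum(math.sin(y) for y in yaws)
    c = sum(math.cos(y) for y in yaws)
    return math.atan2(s, c)

obj = (0.0, 0.0, 0.0)
single = yaw_toward_viewpoint(obj, (0.0, 1.7, 2.0))             # user straight ahead
shared = average_yaw(obj, [(1.0, 1.7, 1.0), (-1.0, 1.7, 1.0)])  # two users, symmetric
```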
-
Publication number: 20220101613
Abstract: A computer system in communication with one or more input devices displays a first view of a three-dimensional environment, including a representation of a physical environment and a first user interface object having a first surface at a first position in the three-dimensional environment corresponding to a first location in the physical environment. While displaying the first view, the computer system detects movement, in the physical environment, of a first person not using the one or more input devices, and in response, in accordance with a determination that the movement of the first person in the physical environment has a first spatial relationship to the first location in the physical environment corresponding to the first user interface object, the computer system moves the first surface of the first user interface object in the first view in accordance with the movement of the first person in the physical environment.
Type: Application
Filed: September 23, 2021
Publication date: March 31, 2022
Inventor: Philipp Rockel
-
Publication number: 20220101614
Abstract: The subject technology identifies a set of graphical elements in an augmented reality (AR) facial pattern. The subject technology determines at least one primitive shape based on the set of graphical elements. The subject technology generates a JavaScript Object Notation (JSON) file using at least one primitive shape. The subject technology generates internal facial makeup format (IFM) data using the JSON file. The subject technology publishes the IFM data to a product catalog service.
Type: Application
Filed: September 29, 2021
Publication date: March 31, 2022
Inventors: Jean Luo, Ibram Uppal
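The middle step of that pipeline, reducing graphical elements to primitive shapes and serialising them as JSON, can be sketched as below. Every field name (`shape`, `region`, `rgba`, `version`) is an invented placeholder, and the final IFM conversion is not shown since its format is internal.

```python
import json

def elements_to_primitives(elements):
    """Hypothetical fit: reduce each graphical element of an AR facial
    pattern to a named primitive shape with a colour and anchor region."""
    return [
        {"shape": e["kind"], "region": e["region"], "rgba": e["rgba"]}
        for e in elements
    ]

def primitives_to_json(primitives):
    """Serialise the primitives into the intermediate JSON document the
    abstract describes (the step before conversion to IFM data)."""
    return json.dumps({"version": 1, "primitives": primitives}, sort_keys=True)

elements = [
    {"kind": "ellipse", "region": "left_eyelid", "rgba": [120, 40, 160, 200]},
    {"kind": "polygon", "region": "lips",        "rgba": [200, 30, 60, 255]},
]
doc = primitives_to_json(elements_to_primitives(elements))
parsed = json.loads(doc)   # round-trips cleanly
```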
-
Publication number: 20220101615
Abstract: A computer system calibrates an image from digital motion video, which originated from a camera that has a view of a scene over a period of time, with an image rendered from a three-dimensional model of the scene for a view based on a location (position and orientation) of the camera in the scene. The calibrated image from the digital motion video can be overlaid on the rendered image in a graphical user interface. The graphical user interface can allow a user to modify opacity of the overlay of the calibrated image from the digital motion video on the rendered image. The overlaid image can be used as a guide by the user to provide inputs with respect to the three-dimensional model.
Type: Application
Filed: December 9, 2021
Publication date: March 31, 2022
Inventor: Andrew Thomas Fredericks
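The user-adjustable opacity overlay is standard alpha blending of the two images, pixel by pixel. A minimal per-pixel sketch with invented RGB values:

```python
def blend_overlay(video_px, render_px, opacity):
    """Alpha-blend one calibrated video pixel over the rendered-model pixel.
    opacity=1.0 shows only the video frame, 0.0 only the rendering."""
    return tuple(
        round(opacity * v + (1.0 - opacity) * r)
        for v, r in zip(video_px, render_px)
    )

video = (200, 100, 50)    # pixel from the calibrated camera frame
render = (100, 100, 100)  # pixel from the 3-D model rendering
half = blend_overlay(video, render, 0.5)   # midpoint of the two
```

A real GUI would do this per frame on the GPU, driving `opacity` from a slider; the arithmetic is the same.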
-
Publication number: 20220101616
Abstract: Systems and methods for conveying virtual content in an augmented reality environment comprising images of virtual content superimposed over physical objects and/or physical surroundings visible within a field of view of a user as if the images of the virtual content were present in the real world. Exemplary implementations may: obtain user information for a user associated with a presentation device physically present at a location of the system; compare the user information with the accessibility criteria for the virtual content to determine whether any portions of the virtual content are to be presented to the user based on the accessibility criteria and the user information for the user; and facilitate presentation of the virtual content to the user via the presentation device of the user based on the virtual content information, the field of view, and the correlations between the multiple linkage points and the reference frame of the virtual content.
Type: Application
Filed: December 13, 2021
Publication date: March 31, 2022
Inventor: Nicholas T. Hariton
-
Publication number: 20220101617
Abstract: A three-dimensional virtual endoscopy rendering of a lumen of a tubular structure is based on both non-spectral and spectral volumetric imaging data. The three-dimensional virtual endoscopy rendering includes a 2-D image of a lumen of the tubular structure from a viewpoint of a virtual camera of a virtual endoscope passing through the lumen. In one instance, the three-dimensional virtual endoscopy rendering is similar to the view which is provided by a physical endoscopic video camera inserted into the actual tubular structure and positioned at that location. The non-spectral volumetric image data is used to determine an opacity and shading of the three-dimensional virtual endoscopy rendering. The spectral volumetric image data is used to visually encode the three-dimensional virtual endoscopy rendering to visually distinguish an inner wall of the tubular structure and structures of interest on the wall.
Type: Application
Filed: November 22, 2019
Publication date: March 31, 2022
Inventors: RAFAEL WIEMKER, MUKTA JOSHI, JORG SABCZYNSKI, TOBIAS KLINDER
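The split the abstract describes, non-spectral data driving opacity/shading and spectral data driving colour, can be illustrated with a toy per-voxel transfer function. The HU threshold, the use of iodine concentration as the spectral channel, and the colour names are all illustrative assumptions, not values from the patent.

```python
def classify_voxel(hu, iodine_mg_ml, opacity_threshold_hu=-300):
    """Toy two-channel transfer function for a virtual-endoscopy renderer."""
    # Non-spectral (conventional HU) data -> opacity and shading:
    # air inside the lumen stays transparent so the camera can see the wall.
    opacity = 0.0 if hu < opacity_threshold_hu else 1.0
    # Spectral data (here: iodine concentration) -> colour encoding:
    # enhancing structures of interest stand out against the plain wall.
    colour = "red" if iodine_mg_ml > 1.0 else "tan"
    return opacity, colour

lumen_air = classify_voxel(hu=-950, iodine_mg_ml=0.0)  # transparent
wall      = classify_voxel(hu=40,   iodine_mg_ml=0.2)  # opaque, plain wall
polyp     = classify_voxel(hu=60,   iodine_mg_ml=2.5)  # opaque, highlighted
```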
-
Publication number: 20220101618
Abstract: A method for displaying a superimposition of a second dental model on a first dental model of a patient's dentition obtains a first 3D model of the patient's dentition and a segmentation of the first 3D model. A second 3D model of the patient's dentition and a segmentation of the second 3D model are obtained. A selected tooth is identified from the segmented teeth of the first and second 3D models. A post-treatment target position for the selected tooth of the first 3D model is determined according to a movement indication calculated for the selected tooth. The second 3D model is registered to the first 3D model based on the target position of the at least one selected tooth. A superimposition of the second 3D model onto the first 3D model of the patient's dentition is displayed.
Type: Application
Filed: December 27, 2019
Publication date: March 31, 2022
Inventors: Delphine REYNARD, Pascal NARCISSE, Xavier RIPOCHE, Jean-Pascal JACOB, Sabrina CAPRON-RICHARD
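The registration step, aligning the second model to the first via the selected tooth's target position, can be sketched in its simplest translation-only form: move the whole model by the offset that carries the tooth's centroid onto the target. A real implementation would solve for a full rigid transform (rotation plus translation, e.g. by Procrustes alignment); this sketch and its coordinates are illustrative only.

```python
def centroid(points):
    """Arithmetic mean of a list of 3-D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def register_by_tooth(model_points, selected_tooth, target_position):
    """Translation-only registration sketch: shift every point of the second
    model by the offset taking the selected tooth's centroid to its
    post-treatment target position."""
    c = centroid(selected_tooth)
    offset = tuple(target_position[i] - c[i] for i in range(3))
    return [tuple(p[i] + offset[i] for i in range(3)) for p in model_points]

tooth = [(1.0, 0.0, 0.0), (3.0, 0.0, 0.0)]  # selected tooth, centroid (2, 0, 0)
target = (2.0, 1.0, 0.0)                    # planned post-treatment centre
moved = register_by_tooth(tooth, tooth, target)
```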
-
Publication number: 20220101619
Abstract: A content management system may maintain a scene description that represents a 3D virtual environment and a publish/subscribe model in which clients subscribe to content items that correspond to respective portions of the shared scene description. When changes are made to content, the changes may be served to subscribing clients. Rather than transferring entire descriptions of assets to propagate changes, differences between versions of content may be exchanged, which may be used to construct updated versions of the content. Portions of scene description may reference other content items and clients may determine whether to request and load these content items for lazy loading. Content items may be identified by Uniform Resource Identifiers (URIs) used to reference the content items. The content management system may maintain states for client connections including for authentication, for the set of subscriptions in the publish/subscribe model, and for their corresponding version identifiers.Type: Application
Filed: December 3, 2021
Publication date: March 31, 2022
Inventors: Rev Lebaredian, Michael Kass, Brian Harris, Andrey Shulzhenko, Dmitry Duka
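The core mechanism, URI-addressed content items that fan out version diffs to subscribers instead of resending whole assets, can be sketched as below. The URI scheme, the dict-of-changes diff format, and all method names are assumptions for illustration, not the system's actual API.

```python
class ContentItem:
    """One URI-addressed content item with an integer version counter and a
    per-version change log so stale clients can catch up from diffs."""

    def __init__(self, uri):
        self.uri = uri
        self.version = 0
        self.state = {}
        self.diffs = []          # diffs[i] transforms version i -> i + 1
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def update(self, diff):
        """Apply a dict-of-changes diff, then fan only the diff out to
        subscribers rather than the whole asset description."""
        self.state.update(diff)
        self.diffs.append(diff)
        self.version += 1
        for cb in self.subscribers:
            cb(self.uri, self.version, diff)

    def catch_up(self, from_version):
        """Diffs a client needs to advance from its version to the latest."""
        return self.diffs[from_version:]

received = []
item = ContentItem("scene://project/table")   # hypothetical URI scheme
item.subscribe(lambda uri, v, d: received.append((v, d)))
item.update({"color": "red"})
item.update({"height": 0.75})
stale_client_diffs = item.catch_up(1)         # client already holds version 1
```

Keeping per-connection version identifiers, as the abstract mentions, is what makes `catch_up` possible after a reconnect.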
-
Publication number: 20220101620
Abstract: A method for interactive display of image positioning includes: a positioning point is obtained in response to a selection operation of a target object; an interactive object displayed at a position corresponding to the positioning point in each of a plurality of operation areas is obtained according to correspondence relationships among the plurality of operation areas with respect to the positioning point.
Type: Application
Filed: December 10, 2021
Publication date: March 31, 2022
Inventor: Liwei ZHANG
-
Publication number: 20220101621
Abstract: Various aspects of the subject technology relate to systems, methods, and machine-readable media for virtual try-on of items such as spectacles. A virtual try-on interface may be implemented at a server or at a user device, and may use collision detection between three-dimensional models of the spectacles and of a user's face and head to determine the correct size and position of the spectacles for virtual try-on. With the determined size and position, a virtual representation of the spectacles is superimposed on an image of the user.
Type: Application
Filed: December 13, 2021
Publication date: March 31, 2022
Applicant: WARBY PARKER INC.
Inventors: David Goldberg, Michael Rakowski, Benjamin Cohen, Ben Hall, Brian Bernberg, Hannah Zachritz
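Using collision detection to size an item, as this abstract describes, can be reduced to a toy form: represent head and frame temple as bounding spheres and pick the smallest frame whose temple no longer intersects the head. All dimensions and the sphere simplification are invented for illustration; a real fitter would test the actual meshes.

```python
import math

def spheres_collide(c1, r1, c2, r2):
    """Two bounding spheres intersect when centre distance < radius sum."""
    return math.dist(c1, c2) < r1 + r2

def fit_temple_width(head_half_width, widths, temple_radius=0.5):
    """Return the narrowest candidate frame width whose temple sphere does
    not collide with the head sphere, i.e. the smallest frame that fits."""
    head_centre = (0.0, 0.0, 0.0)
    for w in sorted(widths):
        temple_centre = (w, 0.0, 0.0)   # temple tip modelled as a small sphere
        if not spheres_collide(head_centre, head_half_width,
                               temple_centre, temple_radius):
            return w
    return None   # no candidate clears the head

best = fit_temple_width(head_half_width=7.0, widths=[6.5, 7.0, 8.0])
```

With a 7.0-unit head half-width and 0.5-unit temple radius, the 6.5 and 7.0 frames still collide, so the fitter settles on the 8.0 frame.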