Patent Applications Published on November 12, 2020
-
Publication number: 20200357127
Abstract: A tubular region acquisition unit acquires a first stent region and a second stent region from three-dimensional images. A model setting unit sets a first tubular model and a second tubular model that represent a surface shape of a stent, respectively, for the first stent region and the second stent region. A corresponding point setting unit sets a plurality of corresponding points that correspond to each other, respectively, for the first tubular model and the second tubular model. A first registration unit registers the first tubular model and the second tubular model on the basis of the corresponding points to obtain a first registration result.
Type: Application
Filed: July 24, 2020
Publication date: November 12, 2020
Applicant: FUJIFILM Corporation
Inventor: Futoshi SAKURAGI
-
Publication number: 20200357128
Abstract: Methods, systems, devices and computer software/program code products enable reconstruction of synthetic images of a scene from the perspective of a virtual camera having a selected virtual camera position, based on images of the scene captured by a number of actual, physical cameras.
Type: Application
Filed: February 22, 2018
Publication date: November 12, 2020
Inventors: JAMES A. MCCOMBE, CHRISTOPH BIRKHOLD
-
Publication number: 20200357129
Abstract: Systems and methods of proximity detection for a rack enclosure are disclosed. An example system may comprise: extracting, at a processor, a boundary mask image from a captured image; performing, at a processor, image correction operations on the boundary mask image; processing, at a processor, the boundary mask image utilizing image processing operations to determine a corrected boundary mask image; determining, at a processor, a mesh of image segments based on the corrected boundary mask image; establishing, at a processor, one or more baseline image metrics of the mesh of the image segments; evaluating, at a processor, the one or more baseline image metrics for changes with operational image segment characteristics; and communicating, at a processor, any baseline image metric changes to a management device.
Type: Application
Filed: April 18, 2018
Publication date: November 12, 2020
Inventors: Stephen Paul Linder, Kesavan Yogeswaran
-
Publication number: 20200357130
Abstract: A system and method for recognizing an object. The system includes an imaging apparatus for capturing an image of an object and a processor for receiving the captured image of the object and, when it is determined that fewer than a predetermined number of objects have been previously imaged, determining whether the image of the captured object includes one or more characteristics determined to be similar to a same characteristic in a group of previously imaged objects, so that the captured image is grouped with the previously imaged objects, or whether the image of the captured object includes one or more characteristics determined to be dissimilar to a same characteristic in a group of previously imaged objects, so that the captured image is not grouped with the previously imaged objects and the image of the captured object starts another group of previously imaged objects.
Type: Application
Filed: May 20, 2020
Publication date: November 12, 2020
Inventors: Daniel Glasner, Lei Guan, Adam Hanina, Li Zhang
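The grouping logic this abstract describes can be sketched as a simple nearest-group test: a new image joins a group whose characteristic it matches within a threshold, or else starts a new group. This is a minimal illustration; the feature vectors, distance metric, and threshold are all made up, not taken from the patent.

```python
# Sketch: group a new image with previously imaged objects when a
# characteristic (here, a toy feature vector) is close enough to a group's
# mean; otherwise start a new group. Values and threshold are illustrative.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def assign_to_group(groups, features, threshold=1.0):
    """Return the index of the group the new image joins (possibly a new one)."""
    for i, members in enumerate(groups):
        mean = [sum(col) / len(col) for col in zip(*members)]
        if distance(features, mean) <= threshold:   # similar characteristic
            members.append(features)
            return i
    groups.append([features])                        # dissimilar: new group
    return len(groups) - 1

groups = [[[0.0, 0.0]], [[5.0, 5.0]]]
print(assign_to_group(groups, [0.2, 0.1]))  # → 0 (joins the first group)
print(assign_to_group(groups, [9.0, 9.0]))  # → 2 (starts a new group)
```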
-
Publication number: 20200357131
Abstract: Exemplary systems and methods perform post-processing operations on computer vision model detections of objects of interest in geospatial imagery to detect and assign attributes to the detected objects of interest. For example, an exemplary post-processing system correlates multiple detections, made by a computer vision model, of an object of interest depicted within a set of images of a geospatial location, determines, based on the correlated detections, an attribute of the object of interest depicted within the set of images of the geospatial location, and selects the attribute for inclusion in a dataset for the object of interest. Corresponding methods and systems are also disclosed.
Type: Application
Filed: July 31, 2020
Publication date: November 12, 2020
Inventors: Joshua Eno, Lindy Williams, Wesley Boyer, Kevin Brackney, Bergen Davell, Khoa Truong Pham
-
Publication number: 20200357132
Abstract: Described herein are platforms, systems, media, and methods for measuring a space by launching an active augmented reality (AR) session on a device comprising a camera and at least one processor; calibrating the AR session by establishing a fixed coordinate system, receiving a position and orientation of one or more horizontal or vertical planes in the space in reference to the fixed coordinate system, and receiving a position and orientation of the camera in reference to the fixed coordinate system; constructing a backing model; providing an interface allowing a user to capture at least one photo of the space during the active AR session; extracting camera data from the AR session for the at least one photo; extracting the backing model from the AR session; and storing the camera data and the backing model in association with the at least one photo.
Type: Application
Filed: May 8, 2020
Publication date: November 12, 2020
Inventors: Dejan JOVANOVIC, Andrew Kevin GREFF
-
Publication number: 20200357133
Abstract: A system for surveying a surface (2) to measure a physical or chemical property associated with the surface. The system includes a handheld probe (4) measuring a physical or chemical property at locations over a surface (2). A video camera (12) captures video data of a user (6) using the handheld probe (4) to survey the surface. A depth sensing device (14) measures the distance to the handheld probe (4). Processing circuitry identifies the handheld probe from the video data and determines the position of the handheld probe (4) relative to the surface (2). A data recorder and/or a data transmitter records and/or transmits data representative of the physical or chemical property measured by the handheld probe (4) and data representative of the associated position of the handheld probe, when the handheld probe (4) is determined to be less than a threshold distance from the surface (2) being surveyed.
Type: Application
Filed: February 8, 2019
Publication date: November 12, 2020
Inventors: Michael Davies, Gary Bethel, Robert Clark, Dominique Rothan
-
Publication number: 20200357134
Abstract: A Position and Orientation Measurement Engine (POME) is a mobile camera system that can be used for accurate indoor measurement (e.g., at a construction site). The POME uses a plurality of cameras to acquire images of a plurality of targets. If locations of the plurality of targets are precisely known, images of the targets can be used to determine a position of the POME in relation to the plurality of targets. However, to precisely determine locations of the plurality of targets can be time consuming and/or use expensive equipment. This disclosure discusses how to use the POME itself to determine locations of the plurality of targets.
Type: Application
Filed: May 9, 2019
Publication date: November 12, 2020
Inventors: Young Jin Lee, Kent Kahle, Malte Seidler
-
Publication number: 20200357135
Abstract: Embodiments of the disclosure provide an indoor positioning device, a movable device including the same, a method for positioning a movable device in an indoor space, and a computer-readable medium. The indoor positioning device includes an imaging unit for capturing image information of at least one of a plurality of luminaires located at the top of the indoor space; a storage unit for storing information of a luminaire Voronoi diagram that includes a plurality of Voronoi diagram units, each Voronoi diagram unit including a generator located in a projection of a respective luminaire of the plurality of luminaires on a horizontal plane of the indoor space; and a processor for receiving the image information and the information of the luminaire Voronoi diagram and calculating a position of the imaging unit in the indoor space based on the image information and the information of the luminaire Voronoi diagram.
Type: Application
Filed: January 16, 2019
Publication date: November 12, 2020
Inventor: Cheng LIU
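A useful property behind a luminaire Voronoi diagram: the Voronoi cell containing a point is, by definition, the cell of the nearest generator, so locating an observed luminaire projection reduces to a nearest-neighbour search over the stored generator positions. A minimal sketch (the luminaire floor coordinates are illustrative, not from the patent):

```python
# Sketch: find which Voronoi diagram unit a point falls in by searching for
# the nearest generator (luminaire floor projection). Coordinates are made up.

def nearest_generator(point, generators):
    """Index of the Voronoi cell (generator) containing `point`."""
    return min(range(len(generators)),
               key=lambda i: (point[0] - generators[i][0]) ** 2 +
                             (point[1] - generators[i][1]) ** 2)

luminaires = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]   # generators on the floor plane
print(nearest_generator((3.5, 0.5), luminaires))    # → 1
```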
-
Publication number: 20200357136
Abstract: A method for determining a pose of an image capturing device is performed at an electronic device. The electronic device acquires a plurality of image frames captured by the image capturing device, extracts a plurality of matching feature points from the plurality of image frames and determines first position information of each of the matching feature points in each of the plurality of image frames. After estimating second position information of each of the matching feature points in a current image frame in the plurality of image frames by using the first position information of each of the matching feature points extracted from a previous image frame in the plurality of image frames, the electronic device determines a pose of the image capturing device based on the first position information and the second position information of each of the matching feature points in the current image frame.
Type: Application
Filed: July 27, 2020
Publication date: November 12, 2020
Inventors: Liang Qiao, Xiangkai Lin, Linchao Bao, Yonggen Ling, Fengming Zhu
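To make the idea of recovering motion from matched feature points concrete, here is a deliberately reduced stand-in: for a pure 2D translation, the least-squares fit of the frame-to-frame motion is just the mean of the per-point displacements. A real pipeline, like the one the abstract describes, would solve for a full camera pose (rotation plus translation, e.g. via PnP); the point values are illustrative.

```python
# Sketch: estimate a pure-translation "pose" between two frames from matched
# feature points as the mean per-point displacement (least-squares solution
# for the translation-only model). Point coordinates are illustrative.

def estimate_translation(prev_pts, curr_pts):
    n = len(prev_pts)
    dx = sum(c[0] - p[0] for p, c in zip(prev_pts, curr_pts)) / n
    dy = sum(c[1] - p[1] for p, c in zip(prev_pts, curr_pts)) / n
    return dx, dy

prev_pts = [(10.0, 10.0), (20.0, 15.0), (30.0, 5.0)]   # previous frame
curr_pts = [(12.0, 11.0), (22.0, 16.0), (32.0, 6.0)]   # current frame
print(estimate_translation(prev_pts, curr_pts))        # → (2.0, 1.0)
```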
-
Publication number: 20200357137
Abstract: Various embodiments include a method for determining a pose of an object in its surroundings comprising: using an optical capture device to capture the object and its surroundings as current recording; determining the pose of the object using optical image analysis; and using a neural network to ascertain the pose of the object. The neural network is taught with multi-task learning using pose regression and descriptor learning using a triplet-wise loss function and a pair-wise loss function. The pose regression uses quaternions. Determining the triplet-wise loss function includes using a dynamic margin term. Determining the pair-wise loss function includes an anchoring function.
Type: Application
Filed: December 18, 2018
Publication date: November 12, 2020
Applicant: Siemens Aktiengesellschaft
Inventors: Sergey Zakharov, Shadi Albarqouni, Linda Mai Bui, Slobodan Ilic
-
Publication number: 20200357138
Abstract: The present disclosure relates to a vehicle-mounted camera self-calibration method and apparatus, and a vehicle driving method and apparatus. The method comprises: starting self-calibration of a vehicle-mounted camera to enable a vehicle on which the vehicle-mounted camera is mounted to be in a traveling state; acquiring, by the vehicle-mounted camera in a traveling process of the vehicle, information required for self-calibration of the vehicle-mounted camera; and self-calibrating the vehicle-mounted camera based on the acquired information.
Type: Application
Filed: July 30, 2020
Publication date: November 12, 2020
Applicant: Shanghai SenseTime Intelligent Technology Co., Ltd.
Inventors: Jie Xiang, Ningyuan Mao, Haibo Zhu
-
Publication number: 20200357139
Abstract: A method for determining blockage of a vehicular camera includes providing a camera and mounting the camera at a vehicle so as to view exterior of the vehicle. The control determines at least one selected from the group consisting of (i) that the imaging sensor is totally blocked by determining that the count of bright photosensor pixels of the camera's imaging sensor remains below a threshold, and (ii) that the imaging sensor is partially blocked by determining continuity of intensity variations in different regions of the imaging sensor. The control, responsive to determination of either total blockage or partial blockage of the imaging sensor of the camera, adapts image processing by the image processor of frames of image data captured by the camera to accommodate (i) the determined total blockage of the imaging sensor of the camera or (ii) the determined partial blockage of the imaging sensor of the camera.
Type: Application
Filed: July 27, 2020
Publication date: November 12, 2020
Inventors: Yuesheng Lu, Michael J. Higgins-Luthman, Antony V. Jeyaraj, Manoj R. Phirke
-
Publication number: 20200357140
Abstract: This disclosure is directed to calibrating sensors mounted on an autonomous vehicle. First image data and second image data representing an environment can be captured by first and second cameras, respectively (and/or a single camera at different points in time). Point pairs comprising a first point in the first image data and a second point in the second image data can be determined and projection errors associated with the points can be determined. A subset of point pairs can be determined, e.g., by excluding point pairs with the highest projection error. Calibration data associated with the subset of points can be determined and used to calibrate the cameras without the need for calibration infrastructure.
Type: Application
Filed: July 27, 2020
Publication date: November 12, 2020
Inventor: Till Kroeger
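The subset-selection step the abstract describes, i.e. excluding the point pairs with the highest projection error before computing calibration data, can be sketched as a simple sort-and-truncate. The pair labels, error values, and keep fraction are illustrative, not from the patent.

```python
# Sketch: keep only the point pairs with the smallest projection errors,
# discarding the worst ones, before using them for calibration.
# All values are illustrative.

def select_inlier_pairs(pairs, errors, keep_fraction=0.75):
    """Keep the fraction of point pairs with the smallest projection error."""
    order = sorted(range(len(pairs)), key=lambda i: errors[i])
    n_keep = max(1, int(len(pairs) * keep_fraction))
    return [pairs[i] for i in order[:n_keep]]

pairs = ["a", "b", "c", "d"]
errors = [0.1, 5.0, 0.2, 0.15]
print(select_inlier_pairs(pairs, errors))  # → ['a', 'd', 'c']  (drops "b")
```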
-
Publication number: 20200357141
Abstract: A method of calibrating an imaging system may include capturing images using at least one imaging device, identifying feature points in the images, identifying calibration points from among the feature points, and determining a posture of the at least one imaging device or a different imaging device based on positions of the calibration points in the images.
Type: Application
Filed: July 23, 2020
Publication date: November 12, 2020
Inventors: You ZHOU, Jianzhao CAI, Bin XU
-
Publication number: 20200357142
Abstract: Techniques are disclosed for image matting. In particular, embodiments decompose the matting problem of estimating foreground opacity into the targeted subproblems of estimating a background using a first trained neural network, estimating a foreground using a second neural network and the estimated background as one of the inputs into the second neural network, and estimating an alpha matte using a third neural network and the estimated background and foreground as two of the inputs into the third neural network. Such a decomposition is in contrast to traditional sampling-based matting approaches that estimated foreground and background color pairs together directly for each pixel. By decomposing the matting problem into subproblems that are easier for a neural network to learn compared to traditional data-driven techniques for image matting, embodiments disclosed herein can produce better opacity estimates than such data-driven techniques as well as sampling-based and affinity-based matting approaches.
Type: Application
Filed: May 9, 2019
Publication date: November 12, 2020
Inventors: Tunc Ozan AYDIN, Ahmet Cengiz ÖZTIRELI, Jingwei TANG, Yagiz AKSOY
-
Publication number: 20200357143
Abstract: A method, apparatus and system for visual localization includes extracting appearance features of an image, extracting semantic features of the image, fusing the extracted appearance features and semantic features, pooling and projecting the fused features into a semantic embedding space having been trained using fused appearance and semantic features of images having known locations, computing a similarity measure between the projected fused features and embedded, fused appearance and semantic features of images, and predicting a location of the image associated with the projected, fused features. An image can include at least one image from a plurality of modalities such as a Light Detection and Ranging image, a Radio Detection and Ranging image, or a 3D Computer Aided Design modeling image, and an image from a different sensor, such as an RGB image sensor, captured from a same geo-location, which is used to determine the semantic features of the multi-modal image.
Type: Application
Filed: October 29, 2019
Publication date: November 12, 2020
Inventors: Han-Pang Chiu, Zachary Seymour, Karan Sikka, Supun Samarasekera, Rakesh Kumar, Niluthpol Mithun
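The final two steps of the pipeline above, computing a similarity measure in an embedding space and predicting a location from the best match, can be sketched with cosine similarity. The embedding vectors and location names are illustrative stand-ins for real learned features.

```python
# Sketch: predict a location by comparing a query embedding against
# embeddings of images with known locations, using cosine similarity.
# Embedding values and location names are illustrative.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def predict_location(query, database):
    """database: list of (location_name, embedding) for images with known locations."""
    return max(database, key=lambda item: cosine(query, item[1]))[0]

database = [("plaza", [1.0, 0.0, 0.0]), ("bridge", [0.0, 1.0, 0.2])]
print(predict_location([0.9, 0.1, 0.0], database))  # → plaza
```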
-
Publication number: 20200357144
Abstract: An image processing apparatus includes a memory, and a processor coupled to the memory and configured to obtain an image of a first camera disposed on a vehicle such that an imaging range of the first camera includes a portion of a vehicle body and images of second and third cameras that image an area of a blind spot caused by the portion of the vehicle body, combine a blind spot image corresponding to the area of the blind spot in the image of the second camera with the image of the first camera, obtain three-dimensional information of a rear vehicle with stereo vision of the second and third cameras, and store rear vehicle shape information, wherein the processor determines three-dimensional information corresponding to a portion of the rear vehicle, and combines the blind spot image with the image of the first camera using the determined three-dimensional information.
Type: Application
Filed: March 30, 2020
Publication date: November 12, 2020
Inventor: Ryohei SUDA
-
Publication number: 20200357145
Abstract: An apparatus for providing a top view image of a parking space includes a top view image generating device that generates a top view image of a parking space, a display that displays the top view image generated by the top view image generating device, and a controller that captures the top view image displayed by the display and connects the top view image previously captured to a current top view image generated by the top view image generating device to generate a combined top view image of an entire parking space.
Type: Application
Filed: October 22, 2019
Publication date: November 12, 2020
Inventor: Yong Joon Lee
-
Publication number: 20200357146
Abstract: In some embodiments, a computing system generates a color gradient for data visualizations by displaying a color selection design interface. The computing system receives a user input identifying a start point of a color map path and an end point of a color map path. The computing system draws a color map path within the color space element between the start point and the end point constrained to traverse colors having uniform transitions between one or more of lightness, chroma, and hue. The computing system selects a color gradient having a first color corresponding to the start point of the color map path and a second color corresponding to the end point of the color map path, and additional colors corresponding to additional points along the color map path. The computing system generates a color map for visually representing a range of data values.
Type: Application
Filed: May 9, 2019
Publication date: November 12, 2020
Inventors: Jose Ignacio Echevarria Vallespi, Stephen DiVerdi, Hema Susmita Padala, Bernard Kerr, Dmitry Baranovskiy
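The "uniform transitions" constraint can be illustrated with a straight path in a lightness/chroma/hue space: sampling a linear path gives equal per-step deltas in each channel. The endpoint values and step count below are illustrative; a real tool would additionally convert each (L, C, H) sample to RGB (e.g. via CIELCh) for display.

```python
# Sketch: sample a color map path between a start point and an end point in
# (lightness, chroma, hue) space; linear interpolation yields uniform
# transitions in every channel. Endpoint values are illustrative.

def color_map_path(start, end, steps):
    """Return `steps` (L, C, H) samples with uniform per-step transitions."""
    return [tuple(s + (e - s) * t / (steps - 1) for s, e in zip(start, end))
            for t in range(steps)]

path = color_map_path((20.0, 40.0, 0.0), (80.0, 10.0, 120.0), steps=4)
for L, C, H in path:
    print(round(L), round(C), round(H))   # lightness rises, chroma falls, hue rotates
```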
-
Publication number: 20200357147
Abstract: A method relating to image fusion includes acquiring a thermal infrared image and a visible image. The method also includes receiving a fusion parameter corresponding to a color space and generating, based on the fusion parameter, a fused image of the thermal infrared image and the visible image. The method further includes receiving a regulation parameter, the regulation parameter including a color scheme or a partial contrast, and adjusting the fused image according to the regulation parameter.
Type: Application
Filed: June 22, 2020
Publication date: November 12, 2020
Applicant: ZHEJIANG DAHUA TECHNOLOGY CO., LTD.
Inventors: Wei LU, Qiankun LI
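The fuse-then-regulate flow can be sketched as a per-pixel weighted blend controlled by a single fusion parameter, followed by a simple contrast adjustment standing in for the "regulation parameter" step. The pixel values, parameter meanings, and the contrast formula are illustrative, not the patent's actual scheme.

```python
# Sketch: fuse a thermal image and a visible image with one weighting
# parameter, then apply a contrast adjustment. All values are illustrative.

def fuse(thermal, visible, fusion_param):
    """fusion_param in [0, 1]: 0 = all visible, 1 = all thermal."""
    return [[fusion_param * t + (1.0 - fusion_param) * v
             for t, v in zip(t_row, v_row)]
            for t_row, v_row in zip(thermal, visible)]

def adjust_contrast(image, gain, pivot=128.0):
    """Scale each pixel's distance from `pivot` by `gain` (a toy contrast control)."""
    return [[pivot + gain * (p - pivot) for p in row] for row in image]

fused = fuse([[200.0, 100.0]], [[100.0, 100.0]], fusion_param=0.25)
print(fused)                        # → [[125.0, 100.0]]
print(adjust_contrast(fused, 1.2))  # stretches values away from the pivot
```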
-
Publication number: 20200357148
Abstract: Tomographic images, acquired by iterative reconstruction of lower quality images, are enhanced by a trained neural network. Next, the enhanced tomographic images are input to the next step of the iterative reconstruction. For this purpose, one or several neural networks are trained with a first set of tomographic images and a second set of tomographic images at lower quality. The second set of tomographic images at lower quality are acquired by applying an iterative reconstruction algorithm to lower quality projection images. The iterative reconstruction can use a normal quality tomographic image as input.
Type: Application
Filed: August 21, 2018
Publication date: November 12, 2020
Inventors: Joris SOONS, Adriyana DANUDIBROTO, Jeroen CANT
-
Publication number: 20200357149
Abstract: A method according to an embodiment divides k-space data, segment by segment, into a k-space central segment and a k-space peripheral segment. The method acquires the k-space central segment in a first time interval and acquires the k-space peripheral segment in a second time interval different from the first time interval. The method reconstructs an MR (Magnetic Resonance) image from k-space data obtained by combining data on the acquired k-space central segment and data on the acquired k-space peripheral segment. Furthermore, the first time interval includes a plurality of cardiac cycles. The k-space central segment is repeatedly acquired over the cardiac cycles. As a central segment of k-space data used to reconstruct the MR image, data in a cardiac cycle less affected by an arrhythmia among the cardiac cycles is selected.
Type: Application
Filed: March 25, 2020
Publication date: November 12, 2020
Applicant: CANON MEDICAL SYSTEMS CORPORATION
Inventors: Masaaki NAGASHIMA, Mark GOLDEN
-
Publication number: 20200357150
Abstract: The present application provides a method and device for obtaining a predicted image of a truncated portion, an imaging method and system, and a non-transitory computer-readable storage medium. The method for obtaining a predicted image of a truncated portion comprises preprocessing projection data to obtain, by reconstruction, an initial image of the truncated portion; and calibrating the initial image based on a trained learning network to obtain the predicted image of the truncated portion.
Type: Application
Filed: May 5, 2020
Publication date: November 12, 2020
Inventors: Xueli WANG, Bingjie ZHAO, Shiyu LI, Dan LIU
-
Publication number: 20200357151
Abstract: The present application provides an imaging method and system, and a non-transitory computer-readable storage medium. The imaging method comprises preprocessing projection data to obtain a predicted image of a truncated portion; performing forward projection on the predicted image to obtain predicted projection data of the truncated portion; and performing image reconstruction using the projection data obtained by forward projection and projection data of an untruncated portion.
Type: Application
Filed: May 4, 2020
Publication date: November 12, 2020
Inventors: Xueli WANG, Ximiao CAO, Bingjie ZHAO, Weiwei XING
-
Publication number: 20200357152
Abstract: In a console according to an embodiment, a control unit functions as a generation unit that generates a tomographic image from a plurality of projection images, which have been captured by a radiation detector at each of a plurality of imaging positions with different irradiation angles, with radiation sequentially emitted from each of the plurality of imaging positions, using a reconstruction process. In addition, the control unit functions as a derivation unit that derives the degree of enhancement as a parameter value used in a frequency enhancement process which is an example of image processing for a tomographic image, on the basis of the image analysis result of a projection image corresponding to an irradiation angle of 0 degrees.
Type: Application
Filed: July 29, 2020
Publication date: November 12, 2020
Inventor: Wataru FUKUDA
-
Publication number: 20200357153
Abstract: A method may include obtaining a first set of projection data with respect to a first dose level; reconstructing, based on the first set of projection data, a first image; determining a second set of projection data based on the first set of projection data, the second set of projection data relating to a second dose level that is lower than the first dose level; reconstructing a second image based on the second set of projection data; and training a first neural network model based on the first image and the second image. In some embodiments, the trained first neural network model may be configured to convert a third image to a fourth image, the fourth image exhibiting a lower noise level and corresponding to a higher dose level than the third image.
Type: Application
Filed: July 27, 2020
Publication date: November 12, 2020
Applicant: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
Inventors: Qianlong ZHAO, Guotao QUAN, Xiang LI
-
Publication number: 20200357154
Abstract: Embodiments disclosed herein provide methods and systems for producing wavy shapes, where the lines and/or curves that form the wavy shape are curvy and can appear hand drawn or scribbled. Initially, a shape is separated into one or more individual lines and/or one or more Bezier curves. Each original line is perturbed to produce a wavy line using bounding regions that constrain the amount of waviness produced in the line. Each Bezier curve is transformed into a wavy Bezier curve using bounding regions that constrain the amount of waviness produced in the Bezier curve. Each original line or Bezier curve can be modified to include, for example, one or more curves, one or more loops, a single arc, one or more spikes, and/or regular or irregular waviness.
Type: Application
Filed: October 4, 2019
Publication date: November 12, 2020
Inventors: Jessica Chen, Anne M. Tesar, Joseph Patrick Bush, III
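The line-perturbation step above can be sketched by offsetting evenly spaced points perpendicular to the original line, with the offset magnitude bounded so the waviness never leaves its bounding region. The amplitude, point count, and random seed are illustrative choices.

```python
# Sketch: perturb points along a straight line with bounded random offsets
# in the perpendicular direction, producing a "hand drawn" wavy line whose
# excursion is constrained by max_offset. Parameters are illustrative.
import random

def wavy_line(p0, p1, n_points, max_offset, rng):
    """Return points along p0->p1, each shifted perpendicular by at most max_offset."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = (dx * dx + dy * dy) ** 0.5
    nx, ny = -dy / length, dx / length          # unit normal to the line
    pts = []
    for i in range(n_points):
        t = i / (n_points - 1)
        off = rng.uniform(-max_offset, max_offset)
        pts.append((p0[0] + t * dx + off * nx, p0[1] + t * dy + off * ny))
    return pts

rng = random.Random(7)
pts = wavy_line((0.0, 0.0), (10.0, 0.0), 6, max_offset=0.5, rng=rng)
assert all(abs(y) <= 0.5 for _, y in pts)       # never leaves the bounding region
```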
-
Publication number: 20200357155
Abstract: In one embodiment, a method of presenting a computer-generated reality (CGR) file includes receiving a user input to present a CGR scene including one or more CGR objects, wherein the CGR scene is associated with a first anchor and a second anchor. The method includes capturing an image of a physical environment and determining that the image of the physical environment lacks a portion corresponding to the first anchor. The method includes detecting a portion of the image of the physical environment corresponding to the second anchor. The method includes, in response to determining that the image of the physical environment lacks a portion corresponding to the first anchor and detecting a portion of the image of the physical environment corresponding to the second anchor, displaying the CGR scene at a location of the display corresponding to the second anchor.
Type: Application
Filed: June 3, 2020
Publication date: November 12, 2020
Inventors: Tyler Casella, David Lui, Norman Nuo Wang, Xiao Jin Yu
-
Publication number: 20200357156
Abstract: A map generation system, method and computer program product are provided to generate a shadow layer from a raster image that accurately represents the shadows of one or more buildings. In the context of a map generation system, the map generation system extracts pixel values from a raster image of one or more buildings and processes the pixel values so as to retain pixel values within a predefined range while eliminating other pixel values. The pixel values that are retained represent a shadow. The map generation system also modifies the representation of the shadow by modifying the pixel values of respective pixels so as to have a shape corresponding to the shape of the one or more buildings. The map generation system causes presentation or storage of a building layer representing the one or more buildings and a shadow layer representing the shadow.
Type: Application
Filed: July 29, 2020
Publication date: November 12, 2020
Applicant: Here Global B.V.
Inventor: Priyank Sameer
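The retain-within-a-range step is a straightforward threshold mask: pixels inside the predefined value range are kept as shadow candidates and everything else is eliminated. The range and pixel values below are illustrative.

```python
# Sketch: build a shadow mask by retaining raster pixel values inside a
# predefined range and zeroing out all others. Values are illustrative.

def shadow_mask(raster, lo, hi):
    """Keep pixel values in [lo, hi]; eliminate all others."""
    return [[p if lo <= p <= hi else 0 for p in row] for row in raster]

raster = [[12, 80, 200],
          [45, 60, 240]]
print(shadow_mask(raster, lo=30, hi=90))  # → [[0, 80, 0], [45, 60, 0]]
```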
-
Publication number: 20200357157
Abstract: The present invention relates to a method of generating training data for use in animating an animated object corresponding to a deformable object. The method comprises accessing a 3D model of the deformable object, defining a plurality of virtual cameras directed at the 3D model, and varying adjustable controls of the 3D model to create a set of deformations on the 3D model. Then, for each deformation, capturing 2D projections of points at each virtual camera, combining the projections to form a vector of 2D point coordinates, generating a vector of 2D shape parameters from the point coordinates, and combining the shape parameters with the values of the adjustable controls for that deformation to form a training data item. The training data items are combined to form a training data set for use in training a learning algorithm for use in animating an animated object corresponding to the deformable object based on real deformations.
Type: Application
Filed: November 15, 2018
Publication date: November 12, 2020
Inventors: Gareth Edwards, Jane Haslam, Steven Caulkin
-
Publication number: 20200357158
Abstract: Described herein are methods and systems for remote visualization of three-dimensional (3D) animation. A sensor of a mobile device captures scans of non-rigid objects in a scene, each scan comprising a depth map and a color image. A server receives a first set of scans from the mobile device and reconstructs an initial model of the non-rigid objects using the first set of scans. The server receives a second set of scans. For each scan in the second set of one or more scans, the server determines an initial alignment between the depth map and the initial model. The server converts the depth map into a coordinate system of the initial model, and determines a displacement between the depth map and the initial model. The server deforms the initial model to the depth map using the displacement, and applies a texture to at least a portion of the deformed model.
Type: Application
Filed: May 5, 2020
Publication date: November 12, 2020
Inventors: Xiang Zhang, Yasmin Jahir, Xin Hou, Ken Lee
-
Publication number: 20200357159
Abstract: A hardware-based traversal coprocessor provides acceleration of tree traversal operations searching for intersections between primitives represented in a tree data structure and a ray. The primitives may include opaque and alpha triangles used in generating a virtual scene. The hardware-based traversal coprocessor is configured to determine primitives intersected by the ray, and return intersection information to a streaming multiprocessor for further processing. The hardware-based traversal coprocessor is configured to provide a deterministic result of intersected triangles regardless of the order that the memory subsystem returns triangle range blocks for processing, while opportunistically eliminating alpha intersections that lie further along the length of the ray than closer opaque intersections.
Type: Application
Filed: July 2, 2020
Publication date: November 12, 2020
Inventors: Samuli Laine, Tero Karras, Greg Muthler, William Parsons Newhall, Ronald Charles Babich, Ignacio Llamas, John Burgess
-
Publication number: 20200357160
Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improved techniques for processing the point cloud data that has been collected. The improved techniques include mapping 3D point cloud data points into a 2D depth map, fetching a group of the mapped 3D point cloud data points that are within a bounded window of the 2D depth map, and generating geometric space parameters based on the group of the mapped 3D point cloud data points. The generated geometric space parameters may be used for object motion, obstacle detection, freespace detection, and/or landmark detection for an area surrounding a vehicle.
Type: Application
Filed: July 24, 2020
Publication date: November 12, 2020
Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
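The first two steps, mapping 3D points into a 2D depth map and fetching the points inside a bounded window, can be sketched with a spherical projection indexed by azimuth and elevation bins. The grid resolution, the ±45° elevation range, and the sample points are illustrative choices, not the patent's parameters.

```python
# Sketch: project 3D points into a 2D depth map keyed by (azimuth, elevation)
# bins, keeping the nearest return per cell, then fetch a bounded window.
# Bin counts, elevation range, and sample points are illustrative.
import math

def to_depth_map(points, az_bins=360, el_bins=90):
    depth = {}
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        az = int((math.degrees(math.atan2(y, x)) % 360.0) / 360.0 * az_bins)
        el = int((math.degrees(math.asin(z / r)) + 45.0) / 90.0 * el_bins)
        cell = (az, el)
        depth[cell] = min(depth.get(cell, float("inf")), r)  # nearest return wins
    return depth

def fetch_window(depth, az_range, el_range):
    """Fetch the mapped points whose cells fall inside a bounded window."""
    return {c: d for c, d in depth.items()
            if az_range[0] <= c[0] < az_range[1] and el_range[0] <= c[1] < el_range[1]}

pts = [(5.0, 0.0, 0.0), (0.0, 5.0, 1.0), (-4.0, 0.0, 0.0)]
dm = to_depth_map(pts)
print(fetch_window(dm, az_range=(0, 180), el_range=(0, 90)))  # front half only
```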
-
Publication number: 20200357161
Abstract: An apparatus and method for efficiently reconstructing a BVH. For example, one embodiment of a method comprises: constructing an object bounding volume hierarchy (BVH) for each object in a scene, each object BVH including a root node and one or more child nodes based on primitives included in each object; constructing a top-level BVH using the root nodes of the individual object BVHs; performing an analysis of the top-level BVH to determine whether the top-level BVH comprises a sufficiently efficient arrangement of nodes within its hierarchy; and reconstructing at least a portion of the top-level BVH if a more efficient arrangement of nodes exists, wherein reconstructing comprises rebuilding the portion of the top-level BVH until one or more stopping criteria have been met, the stopping criteria defined to prevent an entire rebuilding of the top-level BVH.
Type: Application
Filed: June 2, 2020
Publication date: November 12, 2020
Inventors: Carsten BENTHIN, Sven WOOP
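The two-level structure above can be sketched by computing an axis-aligned bounding box (AABB) per object as its object-BVH root, taking the union as the top-level root, and using total surface area as a crude stand-in for the efficiency analysis (surface area is the cost driver in the standard surface area heuristic). The geometry is illustrative, and a real builder would of course keep full hierarchies under each root.

```python
# Sketch: per-object root AABBs, a top-level root as their union, and
# surface area as a simple efficiency metric. Geometry is illustrative.

def aabb(points):
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def union(boxes):
    los, his = zip(*boxes)
    return tuple(map(min, zip(*los))), tuple(map(max, zip(*his)))

def surface_area(box):
    (x0, y0, z0), (x1, y1, z1) = box
    dx, dy, dz = x1 - x0, y1 - y0, z1 - z0
    return 2.0 * (dx * dy + dy * dz + dz * dx)

objects = [[(0, 0, 0), (1, 1, 1)], [(4, 0, 0), (5, 1, 1)]]
roots = [aabb(obj) for obj in objects]   # object-BVH root nodes
top = union(roots)                        # top-level BVH root
print(surface_area(top))                  # → 22.0
```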
-
Publication number: 20200357162
Abstract: The present disclosure provides a method, apparatus, device, and storage medium for modeling a dynamic cardiovascular system. The method includes: obtaining CMR data and CCTA data of a patient to be operated on; constructing a dynamic ventricular model of the patient using the CMR data; constructing a dynamic heart model of the patient according to the dynamic ventricular model and a preset heart model; constructing a coronary artery model of the patient using the CCTA data; and constructing a dynamic cardiovascular system model of the patient according to the dynamic heart model and the coronary artery model, thereby constructing a personalized dynamic cardiovascular system model for each patient.
Type: Application
Filed: May 8, 2020
Publication date: November 12, 2020
Inventors: SHUAI LI, AIMIN HAO, QINPING ZHAO
-
METHOD AND APPARATUS FOR ADJUSTING VIEWING ANGLE IN VIRTUAL ENVIRONMENT, AND READABLE STORAGE MEDIUM
Publication number: 20200357163
Abstract: This disclosure describes a method and an apparatus for adjusting a viewing angle in a virtual environment. The method includes: displaying a first viewing angle picture, the first viewing angle picture including a virtual object having a first orientation; receiving a drag instruction for a viewing angle adjustment control; adjusting the first viewing angle direction according to the drag instruction, to obtain a second viewing angle direction; and displaying a second viewing angle picture, the second viewing angle picture including the virtual object having the first orientation.
Type: Application
Filed: July 23, 2020
Publication date: November 12, 2020
Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventor: Han WANG
-
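The core of the viewing-angle adjustment in 20200357163 — mapping a drag on a control to a new camera direction while leaving the virtual object's orientation unchanged — can be sketched in a few lines. The sensitivity value and pitch limits below are illustrative assumptions.

```python
def adjust_view_angle(yaw, pitch, drag_dx, drag_dy,
                      sensitivity=0.1, pitch_limits=(-89.0, 89.0)):
    """Map a drag gesture (in pixels) on a viewing-angle control to a
    new (yaw, pitch) viewing direction, in degrees. Only the camera
    direction changes; the object keeps its first orientation."""
    yaw = (yaw + drag_dx * sensitivity) % 360.0
    pitch = max(pitch_limits[0],
                min(pitch_limits[1], pitch + drag_dy * sensitivity))
    return yaw, pitch
```

Clamping the pitch keeps the camera from flipping over the poles, and the modulo keeps the yaw in a canonical range across repeated drags.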
Publication number: 20200357164
Abstract: Described herein are systems and methods for camera colliders and shot-composition preservation within a 3D virtual environment that prevent a virtual procedural camera from getting stuck behind an object, or penetrating into an object, when filming a subject, while at the same time maintaining the screen composition of the subject in the camera shot.
Type: Application
Filed: July 27, 2020
Publication date: November 12, 2020
Inventors: Adam Myhill, Gregory Labute
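One simple way to realize a camera collider of this kind is to march from the subject toward the desired camera position and stop in front of the first obstruction; because the camera stays on the subject-to-camera ray, the subject's screen composition is preserved. This is a hypothetical sketch with sphere obstacles, not the patented collider.

```python
import numpy as np

def resolve_camera_position(subject, desired_cam, obstacles, steps=200):
    """March from the subject toward the desired camera position and
    stop just in front of the first blocking sphere, so the camera
    neither hides behind nor penetrates an object while still viewing
    the subject from the intended direction.

    obstacles: list of (center, radius) spheres (an assumed shape)."""
    subject = np.asarray(subject, dtype=float)
    desired_cam = np.asarray(desired_cam, dtype=float)
    best = subject.copy()
    for t in np.linspace(0.0, 1.0, steps + 1):
        p = subject + t * (desired_cam - subject)
        if any(np.linalg.norm(p - np.asarray(c, dtype=float)) < r
               for c, r in obstacles):
            break                      # first blocker along the ray
        best = p
    return best
```

A production collider would use analytic ray-primitive intersection rather than fixed-step marching, but the invariant is the same: the resolved position lies on the original aim ray.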
-
Publication number: 20200357165
Abstract: Systems and methods for rendering an Augmented Reality ("AR") object. A method for rendering an AR object includes overlaying a first bitmap of the AR object, rendered by a first virtual camera initialized by a server, on a first display of a client device, and overlaying a second bitmap of the AR object, rendered by a second virtual camera initialized by the server, on a second display of the client device. The first bitmap and the second bitmap make the AR object appear to be located at a depth distance from the client device in a stereoscopic view.
Type: Application
Filed: May 8, 2019
Publication date: November 12, 2020
Inventors: Pawan K. Dixit, Mudit Mehrotra
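The two-camera setup behind such a stereoscopic view can be sketched with standard stereo geometry: the virtual cameras are separated by the interpupillary distance (IPD), and the per-eye pixel disparity for a point at a given depth follows the pinhole model. The function names, default IPD, and focal length are assumptions for illustration.

```python
def stereo_camera_offsets(cam_pos, right_vec, ipd=0.063):
    """Place two virtual cameras, one per display, separated by the
    interpupillary distance along the camera's right vector, so the
    two rendered bitmaps fuse into one object at the intended depth."""
    half = ipd / 2.0
    left = tuple(p - half * r for p, r in zip(cam_pos, right_vec))
    right = tuple(p + half * r for p, r in zip(cam_pos, right_vec))
    return left, right

def screen_disparity(ipd, focal_px, depth_m):
    """Pixel disparity between the two bitmaps for a point at depth_m,
    under a simple pinhole model: disparity = ipd * focal / depth."""
    return ipd * focal_px / depth_m
```

Larger disparity reads as nearer: halving the depth doubles the disparity, which is what makes the AR object appear at the chosen distance from the client device.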
-
Publication number: 20200357166
Abstract: Methods of modeling a three-dimensional (3D) object are provided. A method of modeling a 3D object includes generating a 3D model of at least a portion of the 3D object based on data from a plurality of two-dimensional (2D) images. Moreover, the method includes scaling the 3D model by estimating a distance to the 3D object. Related devices and computer program products are also provided.
Type: Application
Filed: February 23, 2018
Publication date: November 12, 2020
Inventors: Daniel LINÅKER, Pal SZASZ
-
Publication number: 20200357167
Abstract: Rendering shadows of transparent objects using ray tracing in real time is disclosed. For each pixel in an image, a ray is launched towards the light source. If the ray intersects a transparent object, lighting information (e.g., color, brightness) is accumulated for the pixel. A new ray is launched from the point of intersection, either towards the light source or in a direction based on reflection/refraction from the surface. Ray tracing continues recursively, accumulating lighting information at each transparent-object intersection. Ray tracing terminates when a ray intersects an opaque object, indicating a dark shadow. Ray tracing also terminates when a ray exits the scene without intersecting an object, in which case the accumulated lighting information is used to render a shadow for the pixel location. Soft shadows can be rendered using the disclosed technique by launching a plurality of rays in different directions based on a size of the light source.
Type: Application
Filed: July 23, 2020
Publication date: November 12, 2020
Inventor: Karl Henrik Halén
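The accumulation logic in 20200357167 reduces to a simple rule once the shadow ray's intersections are known: multiply in each transparent surface's transmission tint, terminate with a dark shadow on an opaque hit, and keep the accumulated value when the ray exits the scene. The sketch below assumes the intersections along the ray have already been found and sorted; the data shape is hypothetical.

```python
def accumulate_shadow(intersections):
    """Accumulate the light filter for one pixel's shadow ray.

    intersections: surfaces crossed on the way to the light, in order,
    each as (opaque, (r, g, b) transmission tint). Returns (0, 0, 0)
    for a dark shadow when an opaque object is hit; otherwise the
    accumulated RGB filter once the ray exits the scene."""
    light = [1.0, 1.0, 1.0]
    for opaque, tint in intersections:
        if opaque:
            return (0.0, 0.0, 0.0)                    # dark shadow; stop
        light = [l * t for l, t in zip(light, tint)]  # transparent hit
    return tuple(light)                               # ray exited the scene
```

For soft shadows, the same routine would run once per sample ray launched across the area of the light source, averaging the results.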
-
Publication number: 20200357168
Abstract: An exemplary object modeling system determines a set of directional occlusion values associated with a surface point on a surface of a virtual object. The directional occlusion values are representative of an exposure of the surface point to ambient light from each direction of a set of directions defined by a radiosity basis. The object modeling system also stores the set of directional occlusion values as part of texture data defining the surface point and provides the texture data that includes the set of stored directional occlusion values associated with the surface point. Corresponding methods and systems are also disclosed.
Type: Application
Filed: July 30, 2020
Publication date: November 12, 2020
Inventors: Bradley G. Anderegg, Oliver S. Castaneda
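A sketch of how per-basis-direction occlusion values might be computed: for each basis direction, average the visibility of hemisphere samples weighted by their cosine with that direction. The three-vector basis below is one commonly used hemisphere basis; the abstract does not specify which radiosity basis the system uses, so treat both the basis and the sampling scheme as assumptions.

```python
import math

# A common three-direction hemisphere basis (an illustrative choice).
RADIOSITY_BASIS = [
    (-1 / math.sqrt(6), -1 / math.sqrt(2), 1 / math.sqrt(3)),
    (-1 / math.sqrt(6),  1 / math.sqrt(2), 1 / math.sqrt(3)),
    (math.sqrt(2.0 / 3.0), 0.0,            1 / math.sqrt(3)),
]

def directional_occlusion(samples, basis=RADIOSITY_BASIS):
    """Per basis direction, estimate the surface point's exposure to
    ambient light as a cosine-weighted visibility average over
    hemisphere samples of (unit_direction, is_visible)."""
    values = []
    for b in basis:
        num = den = 0.0
        for d, visible in samples:
            w = max(0.0, sum(bi * di for bi, di in zip(b, d)))
            num += w if visible else 0.0
            den += w
        values.append(num / den if den > 0.0 else 0.0)
    return values   # stored per-texel as part of the texture data
```

At render time, a shader would dot the shading direction against the same basis and blend the stored values, giving directionally varying ambient occlusion from a small, precomputed texture payload.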
-
Publication number: 20200357169
Abstract: In one embodiment, a method of generating a 3D object is performed by a device including a processor, non-transitory memory, and one or more input devices. The method includes receiving, via the one or more input devices, a user input selecting a file representing two-dimensional (2D) content and having a file type. The method includes receiving, via the one or more input devices, a user input requesting generation of a three-dimensional (3D) object based on the file. The method includes generating, based on the file type, a 3D object representing the 2D content.
Type: Application
Filed: May 6, 2019
Publication date: November 12, 2020
Inventors: Tyler Casella, David Lui, Xiao Jin Yu, Kyle Ellington Fisher
-
Publication number: 20200357170
Abstract: A spatial indexing system receives a sequence of images depicting an environment, such as a floor of a construction site, and performs a spatial indexing process to automatically identify the spatial location at which each of the images was captured. The spatial indexing system also generates an immersive model of the environment and provides a visualization interface that allows a user to view each of the images at its corresponding location within the model.
Type: Application
Filed: July 27, 2020
Publication date: November 12, 2020
Inventors: Michael Ben Fleischman, Philip DeCamp, Jeevan Kalanithi, Thomas Friel Allen
-
Publication number: 20200357171
Abstract: In an example embodiment, techniques are provided for locking a region of fully-connected large-scale multi-dimensional spatial data (e.g., a large-scale 3-D mesh) defined by a bounding box. A region is associated with a lock state (e.g., exclusive or sharable). Clients may access the fully-connected large-scale multi-dimensional spatial data based on a comparison of the bounding box of the requested spatial data to the bounding boxes of other clients' locks and their lock states.
Type: Application
Filed: June 13, 2019
Publication date: November 12, 2020
Inventors: Elenie Godzaridis, Luc Robert, Jean-Philippe Pons, Stephane Nullans
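The lock-compatibility check in 20200357171 amounts to an axis-aligned bounding-box overlap test combined with the exclusive/sharable rule. The class and method names below are illustrative, not from the application:

```python
def boxes_overlap(a, b):
    """AABB overlap test; each box is ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))

class RegionLockManager:
    """Grant a lock on a bounding-box region of the spatial data only
    if it is compatible with every overlapping existing lock: sharable
    locks may overlap one another, while an exclusive lock may overlap
    no other lock."""

    def __init__(self):
        self.locks = []   # (client, box, state), state in {"sharable", "exclusive"}

    def acquire(self, client, box, state):
        for _, other_box, other_state in self.locks:
            if boxes_overlap(box, other_box) and \
               (state == "exclusive" or other_state == "exclusive"):
                return False          # incompatible with an existing lock
        self.locks.append((client, box, state))
        return True
```

Non-overlapping regions never conflict, so clients editing disjoint parts of a large mesh can always proceed concurrently.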
-
Publication number: 20200357172
Abstract: Described herein are methods and systems for three-dimensional (3D) object capture and object reconstruction using edge cloud computing resources. A sensor coupled to a mobile device captures (i) depth maps of a physical object, the depth maps including pose information, and (ii) color images of the object. An edge cloud device coupled to the mobile device via a 5G connection receives the depth maps and the color images. The edge cloud device generates a new 3D model of the object based on the depth maps and color images when a 3D model of the object has not been generated, and updates an existing 3D model of the object based on the depth maps and color images when a 3D model of the object has previously been generated. The edge cloud device transmits the new 3D model or the updated 3D model to the mobile device.
Type: Application
Filed: May 5, 2020
Publication date: November 12, 2020
Inventors: Ken Lee, Xin Hou
-
Publication number: 20200357173
Abstract: A method and apparatus for rendering a computer-generated image using a stencil buffer is described. The method divides an arbitrary closed polygonal contour into first- and higher-level primitives, where first-level primitives correspond to contiguous vertices in the arbitrary closed polygonal contour and higher-level primitives correspond to the end vertices of consecutive primitives of the immediately preceding primitive level. The method reduces the level of overdraw when rendering the arbitrary polygonal contour using a stencil buffer compared to other image-space methods. A method of producing the primitives in an interleaved order, with second- and higher-level primitives being produced before the final first-level primitives of the contour, is also described; it improves the cache hit rate by reusing more vertices between primitives as they are produced.
Type: Application
Filed: July 21, 2020
Publication date: November 12, 2020
Inventor: Simon Fenney
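One reading of the level structure in 20200357173 can be sketched as follows: level-1 triangles join contiguous contour vertices, and each higher level joins the end vertices of consecutive primitives from the level below. This is a simplified, level-by-level interpretation for illustration; the patent's exact primitive generation (and its interleaved production order) is not fully specified by the abstract.

```python
def hierarchical_triangles(vertices):
    """Split a closed polygonal contour into levels of triangles for
    stencil-buffer filling: level 1 joins contiguous vertices; each
    higher level joins the end vertices of consecutive primitives of
    the level below. Degenerate (zero-area) closing triangles are
    dropped."""
    chain = list(vertices) + [vertices[0]]   # close the contour
    levels = []
    while len(chain) >= 3:
        tris = [(chain[i], chain[i + 1], chain[i + 2])
                for i in range(0, len(chain) - 2, 2)]
        tris = [t for t in tris if len(set(t)) == 3]   # drop degenerates
        if tris:
            levels.append(tris)
        nxt = chain[0::2]                     # end vertices of each primitive
        if (len(chain) - 1) % 2 != 0:         # keep the final end vertex
            nxt.append(chain[-1])
        chain = nxt
    return levels
```

Drawn with an XOR/invert stencil operation, the union of all levels covers each interior point the correct (odd) number of times, so the filled region falls out of the parity test with less overdraw than a single fan from one anchor vertex.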
-
Publication number: 20200357174
Abstract: Systems and methods for interactions between an autonomous vehicle and one or more external observers include virtual models of drivers of the autonomous vehicle. The virtual models may be generated by the autonomous vehicle and displayed to one or more external observers, in some cases using devices worn by the external observers. The virtual models may facilitate interactions between the external observers and the autonomous vehicle using gestures or other visual cues. The virtual models may be encrypted with characteristics of an external observer, such as the external observer's face image, iris, or other representative features. Multiple virtual models for multiple external observers may be used simultaneously for multiple communications while preventing interference due to possible overlap of the multiple virtual models.
Type: Application
Filed: April 30, 2020
Publication date: November 12, 2020
Inventors: Debdeep BANERJEE, Ananthapadmanabhan Arasanipalai KANDHADAI
-
Publication number: 20200357175
Abstract: Provided are a method, computer program product, and virtual reality system for applying an individualized risk tolerance threshold to external risks during a virtual reality simulation. A processor may receive event data from one or more devices communicatively coupled to a virtual reality device. The processor may compare the event data to a risk tolerance threshold specifically generated for a first user. In response to the risk tolerance threshold being met, the processor may push a notification to the virtual reality device indicating that a potential risk to the first user has been determined.
Type: Application
Filed: May 9, 2019
Publication date: November 12, 2020
Inventors: Zachary A. Silverstein, Trudy L. Hewitt, Jeremy R. Fox, Robert Huntington Grant
-
Publication number: 20200357176
Abstract: Disclosed herein are systems, methods, and software for providing a virtual environment with enhanced visual textures and haptic detail. In some embodiments, a texture atlas and UV mapping are used to render virtual objects having multiple textures that can be manipulated in real time. In some cases, UV coordinates are used to provide enhanced haptic detail.
Type: Application
Filed: May 10, 2019
Publication date: November 12, 2020
Inventors: Ian Hew CROWTHER, Victoria Jane SMALLEY
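The texture-atlas-plus-UV-mapping idea can be sketched as a coordinate remap: each sub-texture occupies one tile of the atlas, and a surface's local UVs are translated into atlas space so one texture bind serves many materials. The uniform square-tile layout and function name below are assumptions for illustration.

```python
def atlas_uv(local_uv, tile_index, tiles_per_row, tile_size):
    """Remap a sub-texture's local UV (each in [0, 1]) into texture-
    atlas UV space, assuming a uniform grid of square tiles of side
    tile_size in atlas coordinates. Swapping tile_index at run time
    changes a surface's texture without rebinding the atlas."""
    row, col = divmod(tile_index, tiles_per_row)
    u = (col + local_uv[0]) * tile_size
    v = (row + local_uv[1]) * tile_size
    return (u, v)
```

The same atlas coordinates could index a haptic-detail map (e.g., per-texel roughness) so visual and haptic lookups share one addressing scheme, in the spirit of the UV-driven haptic detail the abstract mentions.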