Patent Applications Published on April 30, 2020
  • Publication number: 20200134868
    Abstract: A gaze point determination method and apparatus, an electronic device, and a computer storage medium are provided. The method includes: obtaining two-dimensional coordinates of eye feature points of at least one eye of a face in an image, the eye feature points including an eyeball center area feature point; obtaining, in a preset three-dimensional coordinate system, three-dimensional coordinates of a corresponding eyeball center area feature point in a three-dimensional face model corresponding to the face in the image, based on the obtained two-dimensional coordinates of the eyeball center area feature point; and obtaining a determination result for a position of a gaze point of the eye of the face in the image according to two-dimensional coordinates of feature points other than the eyeball center area feature point among the eye feature points and the three-dimensional coordinates of the eyeball center area feature point in the preset three-dimensional coordinate system.
    Type: Application
    Filed: December 29, 2019
    Publication date: April 30, 2020
    Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Tinghao LIU, Quan WANG, Chen QIAN
  • Publication number: 20200134869
    Abstract: In a system and a method for extrinsic calibration of an image capture system of a vehicle, the vehicle includes a body and a body portion, wherein the body portion is configured to rotate around a rotation axis relative to the vehicle. The system includes an image capture system with an image capture device mounted on the body portion and adapted to capture at least two images of the body and/or surrounding area of the vehicle, an identification unit adapted to identify at least two image features in the images, a calculation unit adapted to calculate a direction of the rotation axis relative to the image capture device based on the image features, and a calibration unit configured to determine extrinsic parameters of the image capture system based on the calculated direction of the rotation axis relative to the vehicle. The image capture system is calibrated using degrees of freedom of movement that the image capture device has due to being mounted on a movable portion of the vehicle.
    Type: Application
    Filed: August 27, 2019
    Publication date: April 30, 2020
    Inventors: David BAMBER, Lingjun GAO, Robin PLOWMAN
  • Publication number: 20200134870
    Abstract: An apparatus for calibrating a driver monitoring camera may include: a camera configured to capture an image of a driver's face; a control unit configured to receive the captured image from the camera, and detect the face to determine a face position; and a display unit configured to display the determination result of the face position by the control unit.
    Type: Application
    Filed: September 25, 2019
    Publication date: April 30, 2020
    Inventor: Kyu Dae BAN
  • Publication number: 20200134871
    Abstract: A camera parameter estimating device includes: a camera position posture acquisition unit acquiring estimated position and posture of a camera, based on a captured image acquired by the camera; a first vector calculation unit calculating a first vector corresponding to a traveling direction of the vehicle; a second vector calculation unit calculating a second vector corresponding to a normal direction of a plane corresponding to a road surface on which the vehicle travels; a third vector calculation unit calculating a third vector orthogonal to the first and second vectors; and a camera parameter estimation unit estimating an actual installation posture of the camera on the basis of the first, second and third vectors and the estimated posture of the camera when the estimated position moves along a direction indicated by the first vector.
    Type: Application
    Filed: October 23, 2019
    Publication date: April 30, 2020
    Applicant: AISIN SEIKI KABUSHIKI KAISHA
    Inventors: Kazutaka HAYAKAWA, Tomokazu SATO
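The three-vector construction in 20200134871 amounts to building an orthonormal basis: the traveling direction, the road-surface normal orthogonalized against it, and their cross product. A minimal sketch of that construction (the Gram-Schmidt step and right-handed ordering are assumptions, not details from the abstract):

```python
def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def posture_basis(travel_dir, road_normal):
    # First vector: vehicle traveling direction.
    v1 = normalize(travel_dir)
    # Second vector: road-surface normal, orthogonalized against v1.
    d = sum(a * b for a, b in zip(road_normal, v1))
    v2 = normalize([a - d * b for a, b in zip(road_normal, v1)])
    # Third vector: orthogonal to both, completing a right-handed basis.
    v3 = cross(v1, v2)
    return v1, v2, v3
```

The resulting three columns can be compared against the camera's estimated posture to recover the actual installation posture.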
  • Publication number: 20200134872
    Abstract: An automatic calibration method for an onboard camera is provided, the automatic calibration method includes: receiving a calibration start command, by a vehicle equipped with the onboard camera at a predetermined position; capturing an image of a target with the onboard camera, and calibrating a parameter of the onboard camera according to the captured image; and in a case that the onboard camera is calibrated, automatically writing the calibrated parameter of the onboard camera into a configuration file.
    Type: Application
    Filed: October 30, 2019
    Publication date: April 30, 2020
    Applicant: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventors: Yao Feng, Xianjin Zhuo, Zhipeng Zhou, Bing Li, Binglin Zhang
  • Publication number: 20200134873
    Abstract: This image processing device includes: an image acquisition unit configured to acquire a color image captured by a camera; a brightness determination unit configured to determine a brightness in an imaging range of the camera; a detection target color determination unit configured to determine, on the basis of a determination result by the brightness determination unit, detection target colors of two or more colors from among three or more colors provided to a label including regions of the three or more colors; and a label detection unit configured to detect the label by extracting, from the image acquired by the image acquisition unit, regions of the detection target colors determined by the detection target color determination unit.
    Type: Application
    Filed: May 24, 2018
    Publication date: April 30, 2020
    Applicant: SUMITOMO ELECTRIC INDUSTRIES, LTD.
    Inventors: Michikazu UMEMURA, Yuri KISHITA
  • Publication number: 20200134874
    Abstract: An artificial intelligence (AI) system utilizing machine learning algorithm and application of an electronic apparatus includes a memory and a processor to store at least one obtained image in the memory, and based on the at least one image being classified on a basis of an aesthetic score through an AI model, sort and provide the at least one image based on the classification result, and the AI model may include a plurality of layers with different depths, extract a feature of the at least one image from each of the plurality of layers, and classify the at least one image on a basis of the aesthetic score in accordance with the plurality of extracted features.
    Type: Application
    Filed: October 15, 2019
    Publication date: April 30, 2020
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sanghee KIM, Kyeongwon MUN
  • Publication number: 20200134875
    Abstract: A person counting method and a person counting system are provided. The method includes extracting a group of person images to obtain a first image set; dividing the first image set into first and second subsets based on whether a related image exists in a second image set, and reusing a person ID of the related image; estimating posture patterns of images in the first subset, and storing the images in the first subset into an image library based on person IDs and the posture patterns; and selecting a target image whose similarity to an image in the second subset is highest from the image library, reusing a person ID of the target image when the similarity is greater than a threshold, and assigning a new person ID and incrementing a person counter by 1 when the similarity is not greater than the threshold.
    Type: Application
    Filed: October 17, 2019
    Publication date: April 30, 2020
    Applicant: Ricoh Company, Ltd.
    Inventors: Hong YI, Haijing JIA, Weitao GONG, Wei WANG
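The ID-reuse step in 20200134875 — find the library image most similar to the query, reuse its person ID above a threshold, otherwise assign a new ID and increment the counter — can be sketched as follows. Cosine similarity and the feature-vector library are stand-ins for whatever matcher and image store the method actually uses:

```python
def assign_person_id(query_feature, image_library, counter, threshold=0.7):
    """Reuse the ID of the most similar library entry, or assign a new one.
    `image_library` maps person_id -> feature vector (an assumption)."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb)

    best_id, best_sim = None, -1.0
    for pid, feat in image_library.items():
        sim = cosine(query_feature, feat)
        if sim > best_sim:
            best_id, best_sim = pid, sim

    if best_id is not None and best_sim > threshold:
        return best_id, counter            # reuse existing person ID
    new_id = counter + 1                   # new person: increment counter by 1
    image_library[new_id] = query_feature
    return new_id, new_id
```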
  • Publication number: 20200134876
    Abstract: A system for generating simulated body parts for images may include a body part recognition convolutional neural network (CNN) to recognize a body part in an input image. The body part recognition CNN may be trained using first training data including training images including body parts contained in the input image being identified. The system may also include a body part generative adversarial network (GAN) to complete an image of the body part in the input image based on a body part identification output by the body part recognition CNN. The body part GAN may be trained using second training data including at least partial training images.
    Type: Application
    Filed: October 30, 2018
    Publication date: April 30, 2020
    Inventors: Sun Young Park, Dusty Sargent
  • Publication number: 20200134877
    Abstract: The systems described in this disclosure can be used in construction settings to facilitate the tasks being performed. The location of projectors and augmented reality headsets can be calculated and used to determine what images to display to a worker, based on a map of work to be performed, such as a construction plan. Workers can use spatially-aware tools to make different locations be plumb, level, or equidistant with other locations. Power to tools can be disabled if they are near protected objects.
    Type: Application
    Filed: November 13, 2019
    Publication date: April 30, 2020
    Inventors: Samuel A. Gould, Kellen Carey, Michael John Caelwaerts, Gareth J. Mueckl, Christopher S. Hoppe, Benjamin T. Jones
  • Publication number: 20200134878
    Abstract: A computer-implemented method includes receiving a base visualization having first data in a first set of channels, where each channel in the first set of channels is associated with a respective range in the base visualization. It is detected that the respective ranges of the first set of channels fall outside a perceptual bandwidth of a first user. The base visualization is automatically transformed to a second visualization, based on the perceptual bandwidth of the first user. The second visualization includes second data in a second set of channels, where each channel in the second set of channels is associated with a respective range in the second visualization. The respective ranges of the second set of channels fall within the perceptual bandwidth of the first user.
    Type: Application
    Filed: October 31, 2018
    Publication date: April 30, 2020
    Inventors: Jonathan Fry, Victoria A. Nwobodo, KAM HOU U, Michael Lapointe
  • Publication number: 20200134879
    Abstract: In one embodiment, a method for determining the color for a sampling location includes using a computing system to determine a sampling location within a texture that comprises a plurality of texels. Each texel may encode a distance field and a color index. The system may select, based on the sampling location, a set of texels in the plurality of texels to use to determine a color for the sampling location. The system may compute an interpolated distance field based on the distance fields of the set of texels. The system may select, based on the interpolated distance field, a subset of the set of texels. The system may select a texel from the subset of texels based on a distance between the texel and the sampling location. The system may then determine the color for the sampling location using the color index of the selected texel.
    Type: Application
    Filed: September 26, 2019
    Publication date: April 30, 2020
    Inventors: Larry Seiler, Alexander Nankervis, John Adrian Arthur Johnston
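The selection chain in 20200134879 — interpolate the distance fields, keep the texels on the interpolated value's side of the edge, then take the color index of the nearest one — can be sketched over a 2x2 neighborhood. The bilinear weights and sign convention for the distance field are assumptions:

```python
def sample_color(texels, u, v):
    """Pick a color index for sampling location (u, v) from a 2x2 texel
    neighborhood. Each texel is (x, y, distance_field, color_index)."""
    # Bilinear weights from the fractional position inside the 2x2 block.
    fx, fy = u - int(u), v - int(v)
    weights = [(1 - fx) * (1 - fy), fx * (1 - fy), (1 - fx) * fy, fx * fy]
    # Interpolated distance field over the selected texels.
    d = sum(w * t[2] for w, t in zip(weights, texels))
    # Subset: texels on the same side of the edge as the interpolated value.
    subset = [t for t in texels if (t[2] >= 0) == (d >= 0)]
    # Use the color index of the subset texel nearest the sampling location.
    nearest = min(subset, key=lambda t: (t[0] - u) ** 2 + (t[1] - v) ** 2)
    return nearest[3]
```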
  • Publication number: 20200134880
    Abstract: In one embodiment, a method for computing a color value for a sampling pixel region includes using a computing system to determine a sampling pixel region within a texture. The texture is associated with mipmap levels having different resolutions of the texture. The mipmap levels include at least a first mipmap level defined by color texels and a second mipmap level defined by distance-field texels. The system may select one of the mipmap levels based on a size of the sampling pixel region and a size of a texel in the selected mipmap level. The system may then compute a color value for the sampling pixel region using the selected mipmap level.
    Type: Application
    Filed: September 26, 2019
    Publication date: April 30, 2020
    Inventors: Larry Seiler, Alexander Nankervis, John Adrian Arthur Johnston
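The level-selection step in 20200134880 compares the size of the sampling pixel region with the texel size at each mipmap level. A sketch of one common rule (the log2 ratio and rounding are assumptions; the abstract does not state the exact criterion):

```python
import math

def select_mip_level(region_size, base_texel_size, num_levels):
    """Choose the mipmap level whose texel size best matches the sampling
    pixel region. Level 0 is full resolution; each coarser level's texels
    are assumed twice as large as the previous level's."""
    ratio = region_size / base_texel_size
    level = int(round(math.log2(ratio))) if ratio > 1 else 0
    # Clamp to the available levels.
    return min(max(level, 0), num_levels - 1)
```

In the patent's scheme, the level chosen this way then determines whether color texels or distance-field texels are used to compute the final color value.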
  • Publication number: 20200134881
    Abstract: In one embodiment, a system may determine a sampling location within a texture with each texel encoding first and second distance fields and first and second color indices. The system may select, based on the sampling location, a set of texels to use to determine a color for the sampling location. The system may compute first and second interpolated distance fields based on, respectively, the first and second distance fields of the set of texels. The system may select, based on the first interpolated distance field, a subset of the set of texels, and select a texel from the subset of texels based on a distance between the texel and the sampling location. The system may select, based on the second interpolated distance field, a color index from the first and second color indices of the selected texel and use it to determine the color for the sampling location.
    Type: Application
    Filed: September 26, 2019
    Publication date: April 30, 2020
    Inventors: Larry Seiler, Alexander Nankervis, John Adrian Arthur Johnston
  • Publication number: 20200134882
    Abstract: An image processing apparatus according to the present invention includes at least one memory and at least one processor which function as: a setting unit configured to be capable of setting any of a plurality of processing modes, which include a first processing mode to display an image having a first brightness range and a second processing mode to display an image having a second brightness range narrower than the first brightness range; and a processing unit configured to generate frame image data, which is a display target, in a first state in which image processing in accordance with a currently set processing mode is performed, and to generate capture image data corresponding to the frame image data in a second state after changing the first state in accordance with the currently set processing mode.
    Type: Application
    Filed: October 22, 2019
    Publication date: April 30, 2020
    Inventors: Masahiro Sato, Hirofumi Urabe
  • Publication number: 20200134883
    Abstract: Methods and systems are provided for reconstructing images with a tailored image texture. In one embodiment, a method comprises acquiring projection data, and reconstructing an image from the projection data with a desired image texture. In this way, iterative image reconstruction techniques may be used to substantially reduce image noise, thereby enabling a reduction in injected contrast and/or radiation dose, while preserving an image texture familiar from analytic image reconstruction techniques.
    Type: Application
    Filed: October 31, 2018
    Publication date: April 30, 2020
    Inventors: Brian Edward Nett, Jiahua Fan, Amber Christine Silvaggio
  • Publication number: 20200134884
    Abstract: An image display apparatus includes a depth map creating unit that creates, on the basis of a two-dimensional radiation image and a plurality of tomographic images for the same subject, a plurality of depth maps in which each position on the two-dimensional radiation image and depth information indicating a depth directional position of a tomographic plane corresponding to each position are associated with each other while changing a correspondence relationship between each position on the two-dimensional radiation image and the depth information.
    Type: Application
    Filed: October 23, 2019
    Publication date: April 30, 2020
    Inventor: Junya MORITA
  • Publication number: 20200134885
    Abstract: The present disclosure provides a method for processing projection data. The method may include obtaining a first image generated by performing a first scan of a subject by a first imaging device; determining first projection data based on the first image, the first projection data corresponding to a first area of the subject; obtaining second projection data by performing a second scan of the subject using a second imaging device, the second projection data corresponding to a second area of the subject, the first area at least partially overlapping with the second area in an overlapping area; determining registered first projection data by registering the first projection data to the second projection data with respect to the overlapping area; and determining a scatter component based on the registered first projection data and the second projection data, the scatter component including low-frequency scattered radiation signals.
    Type: Application
    Filed: December 25, 2018
    Publication date: April 30, 2020
    Applicant: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
    Inventors: Hongcheng YANG, Jonathan MALTZ
  • Publication number: 20200134886
    Abstract: A respiratory motion estimation method (30) includes reconstructing emission imaging data (22) to generate a reconstructed image (50). The emission imaging data comprises lines of response (LORs) acquired by a positron emission tomography (PET) imaging device or projections acquired by a gamma camera. One or several assessment volumes (66) are defined within the reconstructed images. The emission imaging data are binned into time interval bins based on time stamps of the LORs or projections. A displacement versus time curve (70) is generated by computing, for each time interval bin, a statistical displacement metric of the LORs or projections that both are binned in the time interval bin and intersect the motion assessment volume. The motion assessment volume may be selected to overlap a motion assessment image feature (60) identified in the reconstructed image.
    Type: Application
    Filed: March 19, 2018
    Publication date: April 30, 2020
    Inventors: Benjamin TSUI, Patrick OLIVIER
  • Publication number: 20200134887
    Abstract: A magnetic resonance imaging scan performs an MRI acquisition using an undersampling pattern to produce undersampled k-space data; adds the undersampled k-space data to aggregate undersampled k-space data for the scan; reconstructs an image from the aggregate undersampled k-space data; updates the undersampling pattern from the reconstructed image and aggregate undersampled k-space data using a deep reinforcement learning technique defined by an environment, reward, and agent, where the environment comprises an MRI reconstruction technique, where the reward comprises an image quality metric, and where the agent comprises a deep convolutional neural network and fully connected layers; and repeats these steps to produce a final reconstructed MRI image for the scan.
    Type: Application
    Filed: October 16, 2019
    Publication date: April 30, 2020
    Inventors: David Y. Zeng, Shreyas S. Vasanawala, Joseph Yitan Cheng
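The acquisition loop in 20200134887 — acquire with the current undersampling pattern, aggregate, reconstruct, let the agent update the pattern, repeat — has a simple control-flow skeleton. All the callables below are placeholders for the components named in the abstract (the MRI acquisition, the reconstruction technique, and the deep-RL agent), not implementations of them:

```python
def adaptive_scan(acquire, reconstruct, update_pattern, pattern, n_iters):
    """Skeleton of the iterative undersampled-acquisition loop."""
    aggregate = []
    image = None
    for _ in range(n_iters):
        # Acquire undersampled k-space data with the current pattern
        # and add it to the aggregate data for the scan.
        aggregate.extend(acquire(pattern))
        # Reconstruct an image from the aggregate undersampled data.
        image = reconstruct(aggregate)
        # The agent (deep RL in the patent) proposes the next pattern.
        pattern = update_pattern(image, aggregate)
    return image  # final reconstructed image for the scan
```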
  • Publication number: 20200134888
    Abstract: Systems and methods are provided for imaging that demonstrably outperform previous approaches (especially compressive sensing based approaches). Embodiments of the present disclosure provide and solve an imaging cost function via a stochastic approximation approach. By doing so, embodiments of the present disclosure provide a significant means of generalization and flexibility to adapt to different application domains while being competitive in terms of computational complexity.
    Type: Application
    Filed: October 29, 2019
    Publication date: April 30, 2020
    Inventors: Raghu G. Raj, John Mckay, Vishal Monga
  • Publication number: 20200134889
    Abstract: A method includes receiving a plurality of voxel values corresponding to respective locations in a heart, which are acquired using magnetic resonance imaging (MRI). Voxel values that, in spite of (i) corresponding to a same location in the heart and (ii) being gated to a same phase of an electrocardiogram (ECG) cycle of the heart, differ by more than a predefined difference, are identified. An image of at least a portion of the heart is reconstructed from the plurality of voxel values excluding at least the identified voxel values.
    Type: Application
    Filed: October 31, 2018
    Publication date: April 30, 2020
    Applicant: Biosense Webster (Israel) Ltd.
    Inventor: Assaf Govari
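The exclusion rule in 20200134889 — discard voxel values that disagree with a same-location, same-ECG-phase sample by more than a predefined difference — can be sketched as a grouping pass. The tuple layout and the "exclude every member of a disagreeing pair" reading are simplifying assumptions:

```python
from collections import defaultdict

def filter_voxels(samples, max_diff):
    """Drop voxel samples that differ from another sample at the same
    location and same ECG phase by more than `max_diff`.
    `samples` is a list of (location, phase, value) tuples."""
    groups = defaultdict(list)
    for loc, phase, value in samples:
        groups[(loc, phase)].append(value)
    kept = []
    for loc, phase, value in samples:
        # Keep the value only if it agrees with every other sample gated
        # to the same location and ECG phase.
        if all(abs(value - other) <= max_diff
               for other in groups[(loc, phase)]):
            kept.append((loc, phase, value))
    return kept
```

Reconstruction then proceeds from the kept samples only.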
  • Publication number: 20200134890
    Abstract: The present disclosure provides a drawing content processing method and device for a terminal apparatus. The method includes positioning a drawing focus of a user based on a drawing operation of the user and processing drawing content displayed on a screen of the terminal apparatus based on the positioned drawing focus.
    Type: Application
    Filed: August 14, 2019
    Publication date: April 30, 2020
    Inventors: Weihua ZHANG, Shilei LIU
  • Publication number: 20200134891
    Abstract: Perception of the relationship between a comfort level and environmental data is facilitated, and appropriate management of air-conditioning equipment is enabled.
    Type: Application
    Filed: June 28, 2018
    Publication date: April 30, 2020
    Applicant: MITSUBISHI ELECTRIC CORPORATION
    Inventors: Yoshihiro OHTA, Natsumi TAMURA, Kenji SATO, Satoko TOMITA, Kazuyuki NAGAHIRO, Kazuo TOMISAWA, Takayoshi IIDA, Hiroyuki YASUDA, Yoshinori NAKAJIMA
  • Publication number: 20200134892
    Abstract: A method is used in processing graphics in computing environments. A user interface layer receives a request from a user to rasterize an interactive image rendered in a user interface. A rasterizing module rasterizes the interactive image at the user interface layer associated with the user interface. The rasterizing module transmits the rasterized image to a reporting service for reporting out the rasterized image.
    Type: Application
    Filed: October 24, 2018
    Publication date: April 30, 2020
    Inventors: Rakesh Ram Mohan Maddala, Timothy Ramamurthy, Rameshkrishnan Subramanian
  • Publication number: 20200134893
    Abstract: Systems and methods for facilitating product design by an electronic device are described. According to certain aspects, an electronic device may display one or more designable canvases of a user interface for editing the visual design of a product. The electronic device may display an extended workspace canvas adjacent to and at least partially surrounding the one or more designable canvases. The extended workspace canvas may be configured to display a plurality of design elements that can be reached directly from the one or more designable canvases for use. Each of the design elements may be configured to be selectively positioned within the one or more designable canvases. The electronic device may, in response to a surface change selection of the one or more designable canvases, maintain the displaying of the extended workspace canvas while the one or more designable canvases are in a transitioning state.
    Type: Application
    Filed: October 26, 2018
    Publication date: April 30, 2020
    Inventors: Nicholas Richard Swider, Edward James Hammond, Christina Kayastha, Donald J. Naylor
  • Publication number: 20200134894
    Abstract: When processing a primitive when generating a render output in a graphics processor, the vertices for the primitive are loaded by a vertex loader, but before a primitive setup stage generates per-primitive data for the primitive using the loaded vertices for the primitive, an early culling test is performed for the primitive using data of the loaded vertices for the primitive. When the primitive passes the early culling test, the primitive is sent onwards to the primitive setup stage and to a rasteriser for rasterising the primitive, but when the primitive fails the early culling test, it is discarded from further processing at the early culling test.
    Type: Application
    Filed: October 21, 2019
    Publication date: April 30, 2020
    Applicant: Arm Limited
    Inventor: Olof Henrik Uhrenholt
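The early culling test in 20200134894 runs on loaded vertex data before any per-primitive setup work is spent. The abstract does not say which tests are used; the sketch below shows two typical cheap ones (all vertices outside one clip boundary, and zero area) purely as illustration:

```python
def early_cull(vertices):
    """Return True if a triangle can be discarded before primitive setup.
    `vertices` are (x, y) positions in normalized device coordinates.
    The specific tests here are illustrative, not from the patent."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    # Cull if every vertex lies outside the same clip boundary.
    if all(x < -1 for x in xs) or all(x > 1 for x in xs):
        return True
    if all(y < -1 for y in ys) or all(y > 1 for y in ys):
        return True
    # Cull degenerate (zero-area) triangles.
    (x0, y0), (x1, y1), (x2, y2) = vertices
    area2 = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    return area2 == 0
```

Primitives that fail the test are discarded; the rest go on to primitive setup and the rasteriser as usual.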
  • Publication number: 20200134895
    Abstract: The disclosed computer-implemented method may include receiving an indication of a reference elevation representing a plane of a real-world environment and establishing, with respect to the reference elevation, a virtual boundary for a virtual-world environment. The method may include receiving a request from a user to modify the virtual boundary and in response to the request from the user, monitoring an orientation of a direction indicator to generate orientation data. The method may also include modifying the virtual boundary based on the reference elevation and the orientation data. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Application
    Filed: October 31, 2018
    Publication date: April 30, 2020
    Inventors: John Pollard, Jimmy K. Yun, Jason Dong UK Kim
  • Publication number: 20200134896
    Abstract: Localization apparatuses and methods are disclosed where a localization apparatus extracts a feature of an object from an input image, generates an image in which the object is projected with respect to localization information of a device based on map data, and evaluates the localization information based on feature values corresponding to vertices included in a projection image.
    Type: Application
    Filed: April 1, 2019
    Publication date: April 30, 2020
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Hyun Sung CHANG, Minjung SON, Donghoon SAGONG, Wonhee LEE, Kyungboo JUNG
  • Publication number: 20200134897
    Abstract: Systems and methods are disclosed for controlling image annotation. One method includes acquiring a digital representation of image data and generating a set of image annotations for the digital representation of the image data. The method also may include determining an association between members of the set of image annotations and generating one or more groups of members based on the association. A representative annotation from the one or more groups may also be determined, presented for selection, and the selection may be recorded in memory.
    Type: Application
    Filed: December 23, 2019
    Publication date: April 30, 2020
    Inventors: Leo GRADY, Michiel SCHAAP
  • Publication number: 20200134898
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for rendering an avatar. A first avatar having a first level of detail is stored in a database, the first avatar including a first plurality of components. A level of detail of each of the first plurality of components is reduced separately from each other. A second plurality of components comprising the reduced level of detail of each of the first plurality of components is stored. A request is received for the first avatar in a second level of detail that comprises a lower level of detail than the first level of detail. In response to receiving the request, the second plurality of components is assembled to generate a second avatar having the second level of detail.
    Type: Application
    Filed: October 31, 2018
    Publication date: April 30, 2020
    Inventors: Rahul Bhupendra Sheth, Maoning Guo, William Eastcott
  • Publication number: 20200134899
    Abstract: A method is provided, including the following operations: receiving, from a controller device, controller input that identifies postures of at least two fingers of the user's hand; determining a similarity of the controller input to a predefined target input; rendering in a virtual space a virtual hand that corresponds to the controller device, wherein when the similarity exceeds a predefined threshold, then, in response, the virtual hand is animated so that a pose of the virtual hand transitions to a predefined hand pose, such that postures of fingers of the virtual hand transition to predefined finger postures of the predefined hand pose, and wherein when the similarity does not exceed the predefined threshold, then the virtual hand is rendered so that the pose of the virtual hand dynamically changes in response to changes in the controller input.
    Type: Application
    Filed: December 31, 2019
    Publication date: April 30, 2020
    Inventor: Yutaka Yokokawa
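The threshold branch in 20200134899 — snap to the predefined pose when the input is similar enough, otherwise track the controller directly — is easy to sketch. The similarity measure below (one minus mean absolute difference of per-finger curl values in [0, 1]) is a stand-in for however the method actually scores the input:

```python
def update_virtual_hand(finger_postures, target_pose, threshold=0.9):
    """Decide whether to snap the virtual hand to a predefined pose or
    to track the controller input. Postures are per-finger curl values
    in [0, 1]; the scoring rule is an assumption."""
    diffs = [abs(a - b) for a, b in zip(finger_postures, target_pose)]
    similarity = 1.0 - sum(diffs) / len(diffs)
    if similarity > threshold:
        return target_pose        # animate toward the predefined hand pose
    return finger_postures        # track the controller input dynamically
```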
  • Publication number: 20200134900
    Abstract: Methods and systems are provided for generating an animated display for an aircraft. The method comprises receiving a user request for a change in a layer shown on a display that shows a view of the flight plan data for the aircraft. The user request is received by a user interface (UI) that is part of a map layer display system located onboard the aircraft. The system determines which specific layer corresponds to the user request for a change to the display. When the user request is to add the layer to the display, the opaqueness of the layer increases from zero percent to one-hundred percent. When the user request is to remove the layer from the display, the opaqueness of the layer decreases from one-hundred percent to zero percent. Finally, the system generates instructions to display the layer at the opaqueness on the display.
    Type: Application
    Filed: July 29, 2019
    Publication date: April 30, 2020
    Applicant: HONEYWELL INTERNATIONAL INC.
    Inventors: Pramod Kumar Malviya, Thea Feyereisen, Gang He, Rui Wang
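The opacity transitions in 20200134900 amount to ramping a layer between 0% and 100%. A minimal sketch, assuming linear interpolation and a fixed frame count (the abstract specifies neither):

```python
def opacity_steps(adding, frames=10):
    """Yield per-frame opacity percentages for fading a map layer in
    (0% -> 100%) or out (100% -> 0%)."""
    start, end = (0.0, 100.0) if adding else (100.0, 0.0)
    for i in range(frames + 1):
        yield start + (end - start) * i / frames
```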
  • Publication number: 20200134901
    Abstract: A medical image viewing system allows multiple-person viewing of rendered medical images. A medical image processing system receives medical image data and manipulates the medical image data to produce first rendered medical image data that includes the medical image data plus image manipulation results. A first client system receives the first rendered medical image data from the medical image processing system. The first client system displays to a first end user on the first client system, a first 3D medical image that is based on the first rendered medical image data. The first 3D medical image is displayed within an augmented reality environment or a virtual reality environment so that within the augmented reality environment or the virtual reality environment the first end user has a first viewer location and a first viewing angle of the first 3D medical image.
    Type: Application
    Filed: December 31, 2019
    Publication date: April 30, 2020
    Inventors: Gael Kuhn, Tiecheng Zhao, David J.G. Guigonis, Jeffrey L. Sorenson, David W. MacCutcheon
  • Publication number: 20200134902
    Abstract: In one embodiment, a method for determining the color for a sampling location may include using a computing system to determine a sampling location within a texture that comprises a plurality of texels. Each texel may encode a distance field indicating a distance between the texel and an edge depicted in the texture and an indicator indicating whether the texel is on a first predetermined side of the edge or a second predetermined side of the edge. The system may select, based on the sampling location, a set of texels in the plurality of texels to use to determine a color for the sampling location. The system may determine that the set of texels have indicators that are the same. The system may then determine, using the indicator of any texel in the set of texels, the color for the sampling location.
    Type: Application
    Filed: September 26, 2019
    Publication date: April 30, 2020
    Inventor: Larry Seiler
  • Publication number: 20200134903
    Abstract: Systems and methods for rendering vector data in conjunction with a three-dimensional model are provided. In particular, a smooth transparent draping layer can be generated and rendered overlaying the three-dimensional model. The vector data can be texture mapped to the smooth transparent draping layer such that the vector data appears to be located along a surface in the three-dimensional model. The three-dimensional model can be a model of a geographic area and can include terrain geometry that models the terrain of the geographic area and building geometry that models buildings, bridges, and other objects in the geographic area. The smooth transparent draping layer can conform to the surfaces defined by the terrain geometry. The vector data can be texture mapped to the smooth transparent draping layer such that the vector data appears to be located along the surface of the terrain geometry but can be occluded by the building geometry.
    Type: Application
    Filed: December 20, 2019
    Publication date: April 30, 2020
    Inventors: Ryan Styles Overbeck, Janne Kontkanen
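The draping idea above can be sketched very roughly: build a smoothed copy of the terrain as the drape layer, then give each vector-data point the drape layer's height so it appears to lie on the surface. Everything here is an illustrative assumption — a 1D heightfield, a box blur for "smooth," and integer ground coordinates — not the patented method.

```python
def smooth_heights(terrain, passes=2):
    """Build the smooth draping layer by box-blurring a (1D, for
    brevity) terrain heightfield."""
    h = list(terrain)
    for _ in range(passes):
        h = [(h[max(i - 1, 0)] + h[i] + h[min(i + 1, len(h) - 1)]) / 3
             for i in range(len(h))]
    return h

def drape_vector_points(points, terrain):
    """Map 1D vector data onto the draping layer: each point keeps its
    ground coordinate (an integer sample index here) and takes the
    drape layer's height, so it appears to lie on the terrain."""
    drape = smooth_heights(terrain)
    return [(x, drape[x]) for x in points]
```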
  • Publication number: 20200134904
    Abstract: Methods and systems for data visualization include comparing a random value to opacity values along a plurality of rays to determine a stopping point for each ray in a three-dimensional dataset. An expected brightness is determined for each ray based on a brightness value at the stopping point of each ray. An image is generated that visualizes the three-dimensional dataset based on the expected brightness for each ray.
    Type: Application
    Filed: October 25, 2018
    Publication date: April 30, 2020
    Inventor: Kun Zhao
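The stochastic ray-termination rule in the abstract above is concrete enough to sketch: draw a random threshold per ray walk, stop at the first sample whose opacity exceeds it, and take the brightness there; averaging repeated walks estimates the expected brightness. The averaging loop and sample count are illustrative assumptions.

```python
import random

def ray_brightness(opacities, brightnesses, rng):
    """Walk along one ray; stop at the first sample whose opacity
    exceeds a per-walk random threshold, and return its brightness."""
    threshold = rng.random()
    for opacity, brightness in zip(opacities, brightnesses):
        if opacity > threshold:
            return brightness
    return 0.0  # ray passed through the volume without stopping

def render(rays, n_samples, rng=None):
    """rays: list of (opacities, brightnesses) per ray. Average repeated
    stochastic walks to estimate the expected brightness of each ray."""
    rng = rng or random.Random(0)
    image = []
    for opacities, brightnesses in rays:
        total = sum(ray_brightness(opacities, brightnesses, rng)
                    for _ in range(n_samples))
        image.append(total / n_samples)
    return image
```

A fully opaque first sample always terminates the walk, and a fully transparent ray contributes nothing, which matches the intuition behind the comparison rule.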
  • Publication number: 20200134905
    Abstract: A computer-implemented method and system provide the ability to draw a feature line on an image. An image is acquired and is represented as a set of pixel samples for every pixel in the image, with each pixel sample including one or more feature attributes. One or more features are detected based on differences between the feature attributes. The detected features are represented by pixel samples. A color of the pixel samples representing the detected features is altered. The image is rendered based on the pixel samples including the altered color, and the feature line includes the altered color.
    Type: Application
    Filed: October 28, 2019
    Publication date: April 30, 2020
    Applicant: Autodesk, Inc.
    Inventors: Shinji Ogaki, Iliyan Georgiev
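The feature-line method above — detect features from differences between per-pixel attributes, then recolor the detected pixels — can be sketched as follows. The choice of a single scalar attribute (e.g. depth), a 4-neighborhood, and a fixed threshold are assumptions for illustration only.

```python
def draw_feature_lines(attrs, colors, line_color, threshold):
    """attrs: 2D grid of per-pixel feature attributes (e.g. depth);
    colors: same-shape grid of pixel colors, modified in place.
    A pixel whose attribute differs from any 4-neighbor by more than
    `threshold` is treated as lying on a feature and recolored."""
    h, w = len(attrs), len(attrs[0])
    for y in range(h):
        for x in range(w):
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w
                        and abs(attrs[y][x] - attrs[ny][nx]) > threshold):
                    colors[y][x] = line_color
                    break
    return colors
```

Rendering the image from these altered pixel colors then shows the feature line in the line color, as the abstract describes.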
  • Publication number: 20200134906
    Abstract: Examples described herein generally relate to generating a visualization of an image. A proprietary structure that specifies ray tracing instructions for generating the image using ray tracing is intercepted from a graphics processing unit (GPU) or a graphics driver. The proprietary structure can be converted, based on assistance information, to a visualization structure for generating the visualization of the image. The visualization of the image can be generated from the visualization structure.
    Type: Application
    Filed: December 31, 2019
    Publication date: April 30, 2020
    Inventors: Austin Neil KINROSS, Shawn Lee Hargreaves, Amar Patel, Thomas Lee Davidson
  • Publication number: 20200134907
    Abstract: Computer-based methods are provided for displaying an image or video. The methods are usable for displaying a virtual space to a viewer of a video, where the video was originally generated using a virtual environment. For example, when a streamer streams gameplay of a video game that occurs in a virtual environment, the method allows the streamed video to be presented to a third-party viewer, such as a stream viewer, as a virtual environment.
    Type: Application
    Filed: October 26, 2018
    Publication date: April 30, 2020
    Inventor: Aaron Bradley Epstein
  • Publication number: 20200134908
    Abstract: In various embodiments, a training application generates a trained encoder that automatically generates shape embeddings having a first size and representing three-dimensional (3D) geometry shapes. First, the training application generates a different view activation for each of multiple views associated with a first 3D geometry based on a first convolutional neural network (CNN) block. The training application then aggregates the view activations to generate a tiled activation. Subsequently, the training application generates a first shape embedding having the first size based on the tiled activation and a second CNN block. The training application then generates multiple re-constructed views based on the first shape embedding. The training application performs training operation(s) on at least one of the first CNN block and the second CNN block based on the views and the re-constructed views to generate the trained encoder.
    Type: Application
    Filed: October 29, 2018
    Publication date: April 30, 2020
    Inventors: Thomas DAVIES, Michael HALEY, Ara DANIELYAN, Morgan FABIAN
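The pipeline in the abstract above — per-view activations from a first CNN block, tiled into one aggregate activation, reduced by a second block to a fixed-size embedding — can be sketched shape-for-shape. Real CNN blocks are replaced by fixed random projections purely to keep the example self-contained; the 4x4 activation size and 8-dimensional embedding are arbitrary assumptions.

```python
import numpy as np

def view_activation(view, rng):
    """Stand-in for the first CNN block: project one rendered view
    (H x W array) to a small 4x4 activation map."""
    w = rng.standard_normal((view.size, 16))
    return np.tanh(view.reshape(-1) @ w).reshape(4, 4)

def shape_embedding(views, embed_size=8, seed=0):
    """Aggregate per-view activations into one tiled activation, then
    reduce it to a fixed-size embedding (stand-in for the second block)."""
    rng = np.random.default_rng(seed)
    acts = [view_activation(v, rng) for v in views]
    side = int(np.ceil(np.sqrt(len(acts))))
    tiled = np.zeros((side * 4, side * 4))
    for i, a in enumerate(acts):            # lay activations out in a grid
        r, c = divmod(i, side)
        tiled[r * 4:(r + 1) * 4, c * 4:(c + 1) * 4] = a
    w2 = rng.standard_normal((tiled.size, embed_size))
    return np.tanh(tiled.reshape(-1) @ w2)  # fixed-size shape embedding
```

The key structural point survives the simplification: the embedding size is fixed regardless of how many views were rendered.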
  • Publication number: 20200134909
    Abstract: In various embodiments, a training application generates a trained encoder that automatically generates shape embeddings having a first size and representing three-dimensional (3D) geometry shapes. First, the training application generates a different view activation for each of multiple views associated with a first 3D geometry based on a first convolutional neural network (CNN) block. The training application then aggregates the view activations to generate a tiled activation. Subsequently, the training application generates a first shape embedding having the first size based on the tiled activation and a second CNN block. The training application then generates multiple re-constructed views based on the first shape embedding. The training application performs training operation(s) on at least one of the first CNN block and the second CNN block based on the views and the re-constructed views to generate the trained encoder.
    Type: Application
    Filed: October 29, 2018
    Publication date: April 30, 2020
    Inventors: Thomas DAVIES, Michael HALEY, Ara DANIELYAN, Morgan FABIAN
  • Publication number: 20200134910
    Abstract: An apparatus for generating an image comprises a receiver (101) which receives 3D image data providing an incomplete representation of a scene. A receiver (107) receives a target view vector indicative of a target viewpoint in the scene for the image and a reference source (109) provides a reference view vector indicative of a reference viewpoint for the scene. A modifier (111) generates a rendering view vector indicative of a rendering viewpoint as a function of the target viewpoint and the reference viewpoint for the scene. An image generator (105) generates the image in response to the rendering view vector and the 3D image data.
    Type: Application
    Filed: June 27, 2018
    Publication date: April 30, 2020
    Applicant: KONINKLIJKE PHILIPS N.V.
    Inventors: BART KROON, WIEBE DE HAAN
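One way to read the modifier above — the rendering viewpoint as a function of the target and reference viewpoints — is a soft clamp that keeps the renderer near the region the incomplete 3D data covers. The `tanh` compression and the `max_dev` bound are illustrative assumptions, not the patented function.

```python
import math

def rendering_view(target, reference, max_dev=1.0):
    """Hypothetical modifier: compress the target viewpoint's deviation
    from the reference viewpoint. Small deviations pass through almost
    unchanged; large ones saturate at max_dev, so the rendering
    viewpoint never strays far from the well-covered reference region."""
    out = []
    for t, r in zip(target, reference):
        dev = t - r
        out.append(r + max_dev * math.tanh(dev / max_dev))
    return out
```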
  • Publication number: 20200134911
    Abstract: An exemplary three-dimensional (3D) simulation system accesses a two-dimensional (2D) video image captured by a video capture device and that depicts a bounded real-world scene and a real-world object present within the bounded real-world scene. The 3D simulation system accesses respective 3D models of the bounded real-world scene and the real-world object. Based on the 2D video image, the 3D simulation system tracks a spatial characteristic of the real-world object relative to the bounded real-world scene. Based on the tracked spatial characteristic of the real-world object and the 3D models of the bounded real-world scene and the real-world object, the 3D simulation system generates a 3D simulation of the bounded real-world scene within which the real-world object is simulated in accordance with the tracked spatial characteristic of the real-world object. Corresponding methods and systems are also disclosed.
    Type: Application
    Filed: October 28, 2019
    Publication date: April 30, 2020
    Inventors: Arthur van Hoff, Daniel Kopeinigg, Philip Lee, Solmaz Hajmohammadi, Sourabh Khire, Simion Venshtain
  • Publication number: 20200134912
    Abstract: A three-dimensional (3D) image rendering method for a heads-up display (HUD) system including a 3D display apparatus and a catadioptric system is provided. The 3D image rendering method includes determining optical images corresponding to both eyes of a user by applying, to each of the positions of the eyes, an optical transformation that is based on an optical characteristic of the catadioptric system, and rendering an image to be displayed on a display panel included in the 3D display apparatus, based on a position relationship between the optical images and the display panel.
    Type: Application
    Filed: December 26, 2019
    Publication date: April 30, 2020
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Juyong PARK, Dong Kyung NAM, Kyuhwan CHOI
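The first step of the method above — applying an optical transformation to each eye position to obtain its optical image — can be illustrated with the simplest such transform: reflection across a flat mirror plane. A real catadioptric HUD combiner is curved, so this planar stand-in is an assumption made purely to keep the geometry explicit.

```python
def reflect_point(p, plane_point, plane_normal):
    """Reflect a 3D point across a plane (plane_normal must be unit
    length) — a simplified stand-in for the catadioptric system's
    optical transformation of an eye position."""
    d = sum((pi - qi) * ni for pi, qi, ni in zip(p, plane_point, plane_normal))
    return tuple(pi - 2 * d * ni for pi, ni in zip(p, plane_normal))

def optical_eye_images(left_eye, right_eye, mirror_point, mirror_normal):
    """Transform both eye positions; the renderer then positions the
    stereo image on the display panel relative to these optical
    images rather than the physical eyes."""
    return (reflect_point(left_eye, mirror_point, mirror_normal),
            reflect_point(right_eye, mirror_point, mirror_normal))
```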
  • Publication number: 20200134913
    Abstract: Methods and devices for rendering graphics in a computer system include a graphical processing unit (GPU) with a flexible, dynamic, application-directed mechanism for varying the rate at which fragment shading is performed for rendering an image to a display. In particular, the described aspects include determining, at a rasterization stage, map coordinates based on coarse scan converting a primitive of an object, the map coordinates indicating a location on a sampling rate parameter (SRP) map of a fragment within the primitive of the object, and identifying a lookup value for the fragment within the primitive of the object based at least on map coordinates, and calculating a respective fragment variable SRP value for the fragment within the primitive of the object based at least on the lookup value.
    Type: Application
    Filed: December 30, 2019
    Publication date: April 30, 2020
    Inventors: Ivan NEVRAEV, Martin J. I. FULLER, Mark S. GROSSMAN, Jason M. GOULD
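The per-fragment lookup described above can be sketched directly: map the fragment's screen position to SRP-map coordinates, fetch the lookup value, and scale it into the fragment's variable sampling rate. The tile size and the per-primitive scale factor are illustrative; the abstract leaves those details to the application.

```python
def fragment_srp(frag_x, frag_y, srp_map, tile_w, tile_h, scale=1.0):
    """Map a fragment's screen position to coordinates in a
    sampling-rate-parameter (SRP) map, look up the stored value, and
    scale it into the fragment's shading rate."""
    mx = min(frag_x // tile_w, len(srp_map[0]) - 1)  # map coordinates,
    my = min(frag_y // tile_h, len(srp_map) - 1)     # clamped to the map
    lookup = srp_map[my][mx]   # lookup value for this fragment
    return lookup * scale      # fragment's variable SRP value
```

Because the map is indexed per screen tile, the application can vary shading rate across the image (e.g. full rate at the center of attention, reduced rate in the periphery) simply by writing different values into the map.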
  • Publication number: 20200134914
    Abstract: A topographic feature estimation process includes: classifying a plurality of measurement points, acquired by three-dimensional measurement of a scene and each including measurement information, into a plurality of point group sub-regions, each of which corresponds to a respective one of a plurality of classification vectors; and estimating topographic features of the scene by taking, for each measurement point in each classified point group sub-region, the distance from a reference plane to that measurement point as its height, and by applying a progressive morphological filter to each of the point group sub-regions to remove measurement points corresponding to non-ground objects from the plurality of measurement points acquired by the three-dimensional measurement.
    Type: Application
    Filed: October 2, 2019
    Publication date: April 30, 2020
    Applicant: FUJITSU LIMITED
    Inventor: Hiroshi HIDAKA
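The progressive morphological filter named in the abstract above has a well-known shape: repeatedly apply a morphological opening (erosion then dilation) with a growing window, and flag any point that sits more than a pass-specific threshold above the opened surface as non-ground. The sketch below works on a 1D height profile for brevity; the window sizes and thresholds are illustrative defaults, not values from the patent.

```python
def _opening(heights, w):
    """Morphological opening with half-window w: erosion then dilation."""
    n = len(heights)
    eroded = [min(heights[max(0, i - w):i + w + 1]) for i in range(n)]
    return [max(eroded[max(0, i - w):i + w + 1]) for i in range(n)]

def progressive_morphological_filter(heights,
                                     windows=(1, 2, 4),
                                     thresholds=(0.5, 1.0, 2.0)):
    """Flag non-ground points in a 1D height profile. Each pass grows
    the window and marks a point non-ground when it rises more than the
    pass threshold above the opened (smoothed-ground) surface."""
    surface = list(heights)
    non_ground = [False] * len(heights)
    for w, t in zip(windows, thresholds):
        opened = _opening(surface, w)
        for i, (h, o) in enumerate(zip(surface, opened)):
            if h - o > t:
                non_ground[i] = True
        surface = opened
    return non_ground
```

Small windows remove narrow spikes (vegetation, poles); the growing windows and thresholds let later passes remove progressively wider non-ground objects such as buildings without eroding genuine terrain.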
  • Publication number: 20200134915
    Abstract: A system for constructing an urban design digital sand table includes the following modules. A sand table environment constructing module, configured to construct the digital environment of the urban design sand table. An element grading display module, configured to perform hierarchical management on urban design elements, and perform visual hierarchical display. A spatial indicator interpretation module, configured to articulate names, algorithms, and attributes for indicators in an indicator library of the urban design digital sand table. A spatial calculation tool module, configured to load a toolkit to calculate a selected range in the digital sand table. An offline data extraction module, configured to extract the data of the digital sand table so as to export two-dimensional or three-dimensional spatial data in an offline mode. A dynamic real-time editing module, configured to perform real-time editing operations on an urban digital design scheme loaded in the system.
    Type: Application
    Filed: June 6, 2018
    Publication date: April 30, 2020
    Applicant: SOUTHEAST UNIVERSITY
    Inventors: Junyan YANG, Beixiang SHI, Jun CAO, Tanhua JIN
  • Publication number: 20200134916
    Abstract: A collaborative 3D modeling system comprising a computer processing unit, a digital memory, and an electronic display, the computer processing unit and the digital memory being configured to provide 3D model representations of a first plurality of versions of an object component for a first user, the versions being selectable along a first axis, and, using the electronic display, to provide a plurality of user identifications which are selectable along a second axis, wherein selecting a subsequent user causes a second plurality of said versions of said object component to be displayed on the electronic display.
    Type: Application
    Filed: December 24, 2019
    Publication date: April 30, 2020
    Applicant: Purdue Research Foundation
    Inventors: Cecil Piya, Vinayak Raman Krishnamurthy, Karthik Ramani
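The two-axis selection structure in the abstract above maps naturally onto a small data structure: versions of a component along one axis, contributing users along the other, with a user change swapping in that user's version list. The class and method names below are illustrative, not from the patent.

```python
class VersionGrid:
    """Minimal sketch of two-axis version browsing: versions of an
    object component along one axis, users along the other."""

    def __init__(self):
        self.versions = {}  # user id -> list of component versions

    def add_version(self, user, model):
        self.versions.setdefault(user, []).append(model)

    def users(self):
        """Second axis: the selectable user identifications."""
        return list(self.versions)

    def select(self, user, index):
        """First axis: pick one of the chosen user's versions; choosing
        a different user swaps in that user's version list."""
        return self.versions[user][index]
```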
  • Publication number: 20200134917
    Abstract: Systems and methods for rendering 3D assets associated with and/or configured as stacked meshes are disclosed. Stacking meshes can include loading a first mesh and a second mesh from a character definition, identifying the lowest-depth mesh of the first mesh and the second mesh, identifying polygons shared by the first mesh and the second mesh, and hiding the shared polygons of the lowest-depth mesh.
    Type: Application
    Filed: December 30, 2019
    Publication date: April 30, 2020
    Inventors: Jesse Janzer, Jon Middleton, Berkley Frei
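The stacking steps in the abstract above reduce to a set operation: key each polygon canonically, intersect the two meshes' polygon sets, and drop the shared polygons from the lower-depth (inner) mesh so the outer mesh renders over it cleanly. Representing polygons as vertex-index tuples and sorting them as the canonical key are assumptions made for this sketch.

```python
def canonical(poly):
    """Key a polygon by its sorted vertex tuple so identical polygons
    match regardless of winding order or starting vertex."""
    return tuple(sorted(poly))

def stack_meshes(inner_mesh, outer_mesh):
    """Find polygons shared by the two meshes, then hide them on the
    inner (lowest-depth) mesh. Meshes are lists of polygons; each
    polygon is a tuple of vertex indices. Returns the visible inner
    polygons and the unchanged outer mesh."""
    shared = ({canonical(p) for p in inner_mesh}
              & {canonical(p) for p in outer_mesh})
    visible_inner = [p for p in inner_mesh if canonical(p) not in shared]
    return visible_inner, list(outer_mesh)
```

Hiding rather than deleting the shared polygons (here, simply excluding them from the visible list) preserves the inner mesh for later re-stacking with a different outer layer.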