Patent Applications Published on April 14, 2022
-
Publication number: 20220114736
Abstract: The present disclosure relates to a method of motion segmentation (100) in a video stream. The method comprises the steps of: acquiring (101) a sequence of image frames; dividing (102) a first frame (401) into a plurality of image blocks (403); comparing (103) each image block (403) against a corresponding reference image block (404) and providing a measure of dissimilarity; for image blocks having a measure of dissimilarity less than a threshold: discarding (104a) the image blocks, and for image blocks having a measure of dissimilarity greater than the threshold: keeping (104b) the image blocks and further dividing the image blocks into a new plurality of image blocks (405); repeating the steps of dividing (102) and comparing (103) until a stop condition is met (105a); generating (106) a motion mask (407) indicating areas of movement (408).
Type: Application
Filed: September 27, 2021
Publication date: April 14, 2022
Applicant: Axis AB
Inventors: Jimmie JÖNSSON, Johan Jeppsson KARLIN
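The recursive divide-and-compare scheme above can be sketched in a few lines. This is an illustrative reading, assuming mean absolute difference as the dissimilarity measure and a minimum block size as the stop condition; the function and parameter names are invented for the sketch, not taken from the patent.

```python
import numpy as np

def motion_mask(frame, reference, threshold=10.0, min_block=4):
    """Return a boolean mask of moving areas by recursively subdividing blocks."""
    mask = np.zeros(frame.shape, dtype=bool)

    def visit(y0, y1, x0, x1):
        block = frame[y0:y1, x0:x1].astype(float)
        ref = reference[y0:y1, x0:x1].astype(float)
        dissimilarity = np.abs(block - ref).mean()
        if dissimilarity < threshold:
            return  # discard: no significant motion in this block
        if (y1 - y0) <= min_block or (x1 - x0) <= min_block:
            mask[y0:y1, x0:x1] = True  # stop condition met: mark as motion
            return
        ym, xm = (y0 + y1) // 2, (x0 + x1) // 2  # subdivide into quadrants
        for ys, ye, xs, xe in ((y0, ym, x0, xm), (y0, ym, xm, x1),
                               (ym, y1, x0, xm), (ym, y1, xm, x1)):
            visit(ys, ye, xs, xe)

    visit(0, frame.shape[0], 0, frame.shape[1])
    return mask

# Toy example: a static background with one moving 8x8 patch.
ref = np.zeros((32, 32))
cur = ref.copy()
cur[8:16, 8:16] = 255.0
mask = motion_mask(cur, ref)
```

Because unchanged blocks are discarded early, most of the frame is never examined at fine granularity, which is the efficiency the quadtree-style subdivision buys.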
-
Publication number: 20220114737
Abstract: A method for measuring angular velocity and angular acceleration based on monocular vision. Firstly, a movement sequence image of a feature mark fixed on a working table of a rotary motion generating device is acquired via an acquisition and imaging device. Secondly, a region of interest on the movement sequence image of the feature mark under different shooting distances and rotating conditions is determined by cyclic matching between a set of circular templates and the movement sequence image of the feature mark. Then, a sub-pixel of feature line edges in the region of interest is extracted using a line segment detection method, and only the feature line edges in a motion direction are retained through a constraint on the number of edge points. Finally, the angular velocity and angular acceleration are calculated by using the extracted feature line edges in the motion direction.
Type: Application
Filed: December 20, 2021
Publication date: April 14, 2022
Inventors: Ming YANG, Chenguang CAI, Zhihua LIU, Qi LYU, Wenfeng LIU, Ping YANG
-
Publication number: 20220114738
Abstract: For detecting movements of a sample with respect to an objective, the sample is imaged onto an image sensor comprising an array of pixels by means of the objective. Images of the sample are recorded by registering light coming from the sample at the pixels. During a set-up period, variations of the intensities of the light registered at the pixels are determined by analyzing the temporal course of the intensity registered at each respective pixel over the set-up period. Using these variations as a criterion, a subset of not more than 90% of the pixels of the image sensor is selected. Parts of the images that each correspond to the selected subset are compared to parts of at least one reference image that also correspond to the subset.
Type: Application
Filed: October 13, 2021
Publication date: April 14, 2022
Inventor: Roman Schmidt
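The pixel-subset selection can be illustrated with a small sketch. It assumes temporal variance as the selection criterion and a fixed fraction as the subset size; both choices, and all names, belong to this sketch rather than the patent.

```python
import numpy as np

def select_stable_pixels(stack, fraction=0.5):
    """stack: (T, H, W) intensities recorded over the set-up period.
    Returns a boolean (H, W) mask selecting the `fraction` most stable pixels."""
    variance = stack.var(axis=0)            # temporal variance per pixel
    cutoff = np.quantile(variance, fraction)
    return variance <= cutoff

# Toy set-up period: the left half of the sensor is noisy, the right half stable.
rng = np.random.default_rng(0)
stack = np.zeros((20, 8, 8))
stack[:, :, :4] = rng.normal(0.0, 10.0, size=(20, 8, 4))    # noisy half
stack[:, :, 4:] = rng.normal(0.0, 0.001, size=(20, 8, 4))   # stable half
subset = select_stable_pixels(stack, fraction=0.5)
```

Restricting the later image comparison to such a subset is what keeps the per-frame drift detection cheap.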
-
Publication number: 20220114739
Abstract: Embodiments described herein provide various examples of real-time visual object tracking. In one aspect, a process is disclosed for performing a local re-identification of a target object that was earlier detected in a video but later lost while tracking the target object. This process begins by receiving a current video frame of the video and a predicted location of the target object. The process then places a current search window in the current video frame centered on or in the vicinity of the predicted location of the target object. Next, the process extracts a feature map from an image patch within the current search window. The process further retrieves a set of stored feature maps computed at a set of previously-determined locations of the target object from a set of previously-processed video frames in the video. The process next computes a set of correlation maps between the feature map and each of the set of stored feature maps.
Type: Application
Filed: December 21, 2021
Publication date: April 14, 2022
Applicant: AltumView Systems Inc.
Inventors: Yu Gao, Xing Wang, Rui Ma, Chao Shen, Minghua Chen, Jie Liang, Jianbing Wu
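A minimal sketch of the matching step: here a single cosine-similarity score per stored feature map stands in for the full correlation-map computation the abstract describes, which is enough to show how the current window is scored against past appearances of the target. All names and shapes are illustrative.

```python
import numpy as np

def match_scores(feature, stored):
    """feature: current feature map; stored: list of same-shape feature maps.
    Returns one normalized correlation score per stored feature map."""
    f = feature.ravel().astype(float)
    f = f / np.linalg.norm(f)
    scores = []
    for s in stored:
        v = s.ravel().astype(float)
        scores.append(float(f @ (v / np.linalg.norm(v))))
    return scores

rng = np.random.default_rng(1)
current = rng.normal(size=(16, 16))
stored = [rng.normal(size=(16, 16)) for _ in range(3)]      # unrelated appearances
stored.append(current + 0.01 * rng.normal(size=(16, 16)))   # near-duplicate of target
scores = match_scores(current, stored)
best = int(np.argmax(scores))
```

The stored map most similar to the current patch wins; a high best score re-identifies the lost target, while uniformly low scores indicate it is still missing.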
-
Publication number: 20220114740
Abstract: An electronic device and method for three-dimensional (3D) reconstruction based on camera motion information is provided. The electronic device receives a set of images of a three-dimensional (3D) physical space captured by one or more image sensors. The electronic device further receives metadata associated with each of the set of images. The metadata may include at least motion information associated with the one or more image sensors that captured the set of images. The electronic device applies a neural network model on the received metadata. The electronic device determines a first set of images from the received set of images based on the application of the neural network model on the received metadata. The electronic device constructs a 3D model of a subject associated with the 3D physical space based on the determined first set of images.
Type: Application
Filed: May 24, 2021
Publication date: April 14, 2022
Inventors: HIDEYUKI SHIMIZU, JAMES KUCH, NIKOLAOS GEORGIS
-
Publication number: 20220114741
Abstract: A material data collection system allows capturing of material data. For example, the material data collection system may include digital image data for materials. The material data collection system may ensure that captured digital image data is properly aligned, so that material data may be easily recalled for later use while maintaining the proper alignment of the captured digital image. The material data collection system may include a capture guide that provides cues on how to orient a mobile device used with the material data collection system.
Type: Application
Filed: September 3, 2021
Publication date: April 14, 2022
Inventors: Humberto Roa, Rammohan Akula, Fabrice Canonge, Nicholas Fjellberg Swerdlowe, Rohit Ghatol, Grif Von Holst
-
Publication number: 20220114742
Abstract: An apparatus includes an acquisition unit configured to acquire a plurality of captured images of a target object imaged under a plurality of different conditions, a first calculation unit configured to calculate a first reflection characteristic of the target object for each pixel position of the captured images using a first spatial resolution based on the captured images, a determination unit configured to determine whether an angular resolution of the calculated first reflection characteristic is lower than a first threshold value, and a second calculation unit configured to calculate a second reflection characteristic of the target object using a second spatial resolution lower than the first spatial resolution based on the calculated first reflection characteristic in a case where the angular resolution is lower than the first threshold value.
Type: Application
Filed: September 30, 2021
Publication date: April 14, 2022
Inventor: Atsushi Totsuka
-
Publication number: 20220114743
Abstract: An image processing method, comprising: in response to a request instruction of an image application function, transferring acquired emission parameters to the emission driver module, and controlling the infrared camera to transmit a trigger signal to the emission driver module, when detecting the request instruction of the image application function; controlling the structured-light emitter to emit a laser by the emission driver module, and transmitting a synchronization signal to the infrared camera by the emission driver module, in response to the trigger signal; controlling the infrared camera to collect speckle images of a to-be-detected object, in response to the synchronization signal; controlling the infrared camera to transfer the speckle images to the image processing module; controlling the image processing module to acquire depth images of the to-be-detected object by performing depth calculation on the speckle images, and realize the image application function based on the depth images.
Type: Application
Filed: December 22, 2021
Publication date: April 14, 2022
Inventor: Lu WANG
-
Publication number: 20220114744
Abstract: Provided are a depth data filtering method and apparatus, an electronic device, and a readable storage medium. The method includes: obtaining, for each pixel, a depth difference value between two consecutive frames of depth maps; marking the area formed by pixels whose depth difference value is smaller than a predetermined absolute depth deviation as a first environment change area; marking the area formed by pixels whose depth difference value is greater than or equal to the predetermined absolute depth deviation as a second environment change area; and respectively filtering the first environment change area and the second environment change area.
Type: Application
Filed: December 22, 2021
Publication date: April 14, 2022
Applicant: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Inventor: Jian KANG
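The two-region split can be sketched directly. The abstract does not specify which filter applies to which region, so this sketch assumes a temporal average for the small-change (stable) region and a pass-through of the new measurement for the large-change region; names and the deviation value are invented.

```python
import numpy as np

def split_and_filter(prev_depth, cur_depth, deviation=5.0):
    """Split pixels by frame-to-frame depth difference and filter each region."""
    diff = np.abs(cur_depth - prev_depth)
    first_region = diff < deviation          # small temporal change
    second_region = ~first_region            # large temporal change
    out = cur_depth.astype(float).copy()
    # Temporal average where the scene is stable (suppresses sensor noise).
    out[first_region] = 0.5 * (cur_depth[first_region] + prev_depth[first_region])
    # Keep the new measurement where the scene genuinely changed.
    out[second_region] = cur_depth[second_region]
    return out, first_region, second_region

prev = np.full((4, 4), 100.0)
cur = prev + 1.0       # small noise everywhere
cur[0, 0] = 150.0      # genuine change at one pixel
filtered, stable, changed = split_and_filter(prev, cur)
```

Treating the two regions separately is the point: smoothing that suppresses noise in the stable region would smear a real depth change if applied uniformly.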
-
Publication number: 20220114745
Abstract: An example operation of depth map generation includes one or more of: simultaneously capturing a main-off camera image and an auxiliary-off camera image with an unpowered flash; sparse depth mapping an object based on the main-off camera image and the auxiliary-off camera image; capturing a main-on camera image with a powered flash; foreground probability mapping the object based on the main-off camera image and the main-on camera image; and dense depth mapping the object based on the sparse depth map and the foreground probability map.
Type: Application
Filed: October 12, 2020
Publication date: April 14, 2022
Inventors: Chao Wang, Donghui Wu
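The foreground-probability step exploits a simple physical fact: near objects brighten far more under flash than distant ones (inverse-square falloff). A hedged sketch, assuming the normalized flash-on/flash-off brightness gain serves as the probability; the normalization and names are this sketch's choices, not the patent's.

```python
import numpy as np

def foreground_probability(main_off, main_on, eps=1e-6):
    """Per-pixel foreground probability from the brightness gain under flash."""
    gain = np.clip(main_on.astype(float) - main_off.astype(float), 0.0, None)
    return gain / (gain.max() + eps)

main_off = np.full((4, 4), 50.0)
main_on = main_off.copy()
main_on[:2, :] += 100.0   # near (foreground) rows gain a lot under flash
main_on[2:, :] += 5.0     # far (background) rows gain little
prob = foreground_probability(main_off, main_on)
```

This dense per-pixel cue is what lets the sparse stereo depth samples be propagated into a dense depth map in the final step.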
-
Publication number: 20220114746
Abstract: This application relates to the field of pose detection technologies, and discloses a method and an apparatus for obtaining pose information, a method and an apparatus for determining symmetry of an object, and a storage medium. The method includes: obtaining a rotational symmetry degree of freedom of a target object (901), obtaining pose information of the target object (902), and adjusting the pose information of the target object based on the rotational symmetry degree of freedom to obtain adjusted pose information (903), where the adjusted pose information is used for displaying a virtual object, and the virtual object is an object associated with the target object.
Type: Application
Filed: December 22, 2021
Publication date: April 14, 2022
Applicant: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Er Li, Bo Zheng, Jianbin Liu, Jun Cao
-
Publication number: 20220114747
Abstract: Disclosed herein are apparatuses and methods for iteratively mapping a layout of an environment. The implementations include receiving a visual stream from a camera installed in the environment, wherein the visual stream depicts a view of the environment, and wherein positional parameters of the camera and dimensions of the environment are set to arbitrary values. The implementations include monitoring a plurality of persons in the visual stream. For each person in the plurality of persons, the implementations further include identifying a respective path that the person moves along in the view, updating the dimensions of the environment captured in the view based on an estimated height of the person and movement speed along the respective path, and updating the positional parameters of the camera based on the updated dimensions of the environment. The implementations further include mapping a layout of the environment captured in the view of the camera.
Type: Application
Filed: October 13, 2020
Publication date: April 14, 2022
Inventor: Michael C. STEWART
-
Publication number: 20220114748
Abstract: A system for capturing a spatial orientation of a wearable device includes at least one capturing unit that is configured to capture image data in relation to the wearable device; and at least one processor unit that is configured to determine the spatial orientation of the wearable device based on the image data, using a recognition algorithm trained by way of deep learning.
Type: Application
Filed: September 29, 2021
Publication date: April 14, 2022
Inventor: Ahmet FIRINTEPE
-
Publication number: 20220114749
Abstract: An influence on object expression performed at a transmission destination is reduced. A position information reception unit 11 of an information integration device 1 receives, regarding objects which are measured by a plurality of sensors from a plurality of locations and overlap in any of the locations, position information for each location on areas of the objects. A position information integration unit 13 calculates smallest rectangles or largest rectangles surrounding the objects by using the position information for each location.
Type: Application
Filed: September 13, 2019
Publication date: April 14, 2022
Inventors: Keisuke Hasegawa, Masato Ono, Koji Namba, Takahide Hoshide, Tetsuya Yamaguchi, Akira Ono
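For axis-aligned detections, the "smallest rectangle surrounding the objects" is just the coordinate-wise union of the per-location boxes. A minimal sketch, assuming boxes given as (x_min, y_min, x_max, y_max); the representation is this sketch's assumption.

```python
import numpy as np

def smallest_enclosing_rectangle(boxes):
    """boxes: array-like of shape (N, 4) as (x_min, y_min, x_max, y_max).
    Returns the smallest axis-aligned rectangle containing all of them."""
    boxes = np.asarray(boxes, dtype=float)
    return (boxes[:, 0].min(), boxes[:, 1].min(),
            boxes[:, 2].max(), boxes[:, 3].max())

# Two sensors at different locations report overlapping areas of one object.
rect = smallest_enclosing_rectangle([(10, 10, 30, 40), (25, 5, 50, 35)])
```

The largest-rectangle variant mentioned in the abstract would be the analogous intersection (max of mins, min of maxes) over the same boxes.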
-
Publication number: 20220114750
Abstract: According to embodiments of the present disclosure, a map constructing method, a positioning method, and a wireless communication terminal are provided. The map constructing method includes: acquiring a series of environment images of a current environment; obtaining first image feature information of each environment image, where the first image feature information includes feature point information and descriptor information; performing, based on the first image feature information, feature point matching on the environment images to select keyframe images; acquiring depth information of matched feature points in the keyframe images based on the feature point information; and generating map data of the current environment based on the keyframe images, where the map data includes the image feature information and the depth information of the keyframe images.
Type: Application
Filed: December 23, 2021
Publication date: April 14, 2022
Inventors: Yingying SUN, Ke JIN, Taizhang SHANG
-
Publication number: 20220114751
Abstract: An electronic device and a method for generating an augmented reality (AR) content in an electronic device are provided. The method includes determining a posture and an action of each object of a plurality of objects in the scene displayed on a field of view of the electronic device, classifying the posture and the action of each object of the plurality of objects in the scene, identifying an intent and an interaction of each object from the plurality of objects in the scene based on at least one of the classified posture and the classified action, and generating the AR content for the at least one object in the scene based on at least one of the identified intent and the identified interaction of the at least one object.
Type: Application
Filed: December 7, 2021
Publication date: April 14, 2022
Inventors: Ramasamy KANNAN, Lokesh Rayasandra BOREGOWDA
-
Publication number: 20220114752
Abstract: Technologies for performing sensor fusion include a compute device. The compute device includes circuitry configured to obtain detection data indicative of objects detected by each of multiple sensors of a host system. The detection data includes camera detection data indicative of a two or three dimensional image of detected objects and lidar detection data indicative of depths of detected objects. The circuitry is also configured to merge the detection data from the multiple sensors to define final bounding shapes for the objects.
Type: Application
Filed: December 23, 2021
Publication date: April 14, 2022
Applicant: Intel Corporation
Inventors: Soila Kavulya, Rita Chattopadhyay, Monica Lucia Martinez-Canales
-
Publication number: 20220114753
Abstract: Techniques for providing blended physical and virtual reality experiences are disclosed. In some embodiments, a rendering of an actual view of a scene as seen by a camera capturing the scene is obtained from existing assets associated with a virtualized version of the scene and displayed. The actual view of the scene comprises a known environment that includes one or more of a constrained set of objects. The rendering facilitates surfacing information associated with one or more objects comprising the actual view.
Type: Application
Filed: December 18, 2021
Publication date: April 14, 2022
Inventors: Brook Seaton, Manu Parmar, Clarence Chui
-
Publication number: 20220114754
Abstract: The present disclosure relates to a camera device. The camera device and an electronic device including the same according to an embodiment of the present disclosure include: a color camera; an IR camera; and a processor configured to extract a first region of a color image from the color camera, to extract a second region of an IR image from the IR camera, to calculate error information based on a difference between a gradient of the first region and a gradient of the second region, to compensate for at least one of the color image and the IR image based on the calculated error information, and to output a compensated color image or a compensated IR image.
Type: Application
Filed: January 10, 2020
Publication date: April 14, 2022
Applicant: LG ELECTRONICS INC.
Inventors: Yunsuk KANG, Chanyong PARK, Eunsung LEE
-
Publication number: 20220114755
Abstract: In a method of testing an image sensor, at least one test image is captured using the image sensor that is a device under test (DUT). A composite image is generated based on the at least one test image. A plurality of frequency data are generated by performing frequency signal processing on the composite image. It is determined whether the image sensor is defective by analyzing the plurality of frequency data.
Type: Application
Filed: August 30, 2021
Publication date: April 14, 2022
Inventors: Jongbae Lee, Kiryel Ko, Jinmyoung An
-
Publication number: 20220114756
Abstract: Systems, methods, and computer-readable media that store instructions for distance measurement. The method may include: obtaining, from a camera of a vehicle, an image of the surroundings of the vehicle; searching, within the image, for an anchor, wherein the anchor is associated with at least one physical dimension of a known value; and when the anchor is found, determining a distance between the camera and the anchor based on (a) the at least one physical dimension of a known value, (b) an appearance of the at least one physical dimension of a known value in the image, and (c) a distance-to-appearance relationship that maps appearances to distances, wherein the distance-to-appearance relationship is generated by a calibration process that comprises obtaining one or more calibration images of the anchor and obtaining one or more distance measurements to the anchor.
Type: Application
Filed: October 13, 2021
Publication date: April 14, 2022
Applicant: AUTOBRAINS TECHNOLOGIES LTD
Inventor: Igal Raichelgauz
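Behind a distance-to-appearance relationship for a known physical dimension sits the pinhole relation: distance is roughly proportional to real size divided by apparent size in pixels. A hedged sketch of one way the calibration could look, assuming the inverse-proportional model; all numbers, names, and the averaging step are invented for illustration.

```python
def fit_distance_model(calibration):
    """calibration: list of (pixel_extent, measured_distance_m) pairs for an
    anchor of known physical size. Under the pinhole model, pixels * distance
    is constant; return that constant k so that distance = k / pixels."""
    ks = [pixels * dist for pixels, dist in calibration]
    return sum(ks) / len(ks)

def estimate_distance(k, pixel_extent):
    """Map a new apparent size in pixels back to a distance."""
    return k / pixel_extent

# Calibration: the anchor appeared 200 px wide at 5 m and 100 px wide at 10 m.
k = fit_distance_model([(200.0, 5.0), (100.0, 10.0)])
d = estimate_distance(k, 50.0)   # now it appears 50 px wide
```

Once k is fitted for an anchor class (say, a license plate of standard width), every later sighting of that anchor yields a distance from a single monocular image.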
-
Publication number: 20220114757
Abstract: The present invention discloses a processing method for an event data stream from a dynamic vision sensor. The method comprises the steps of: acquiring inertial measurement parameters corresponding to one segment of the event data stream used for generating one frame of image; determining attitude angle data based on the acquired inertial measurement parameters and calibration parameters of the dynamic vision sensor; generating a transformation matrix based on the determined attitude angle data; and processing the one segment of the event data stream using the transformation matrix to generate the processed event data stream. The present invention also discloses a corresponding computing apparatus.
Type: Application
Filed: December 20, 2021
Publication date: April 14, 2022
Inventors: Yueyin Zhou, Hua Ren
-
Publication number: 20220114758
Abstract: A camera device according to an aspect of the invention includes an imaging unit, an imaging direction adjustment unit, a direction control unit that controls the imaging direction adjustment unit, a camera-side tracking processing unit that analyzes captured image data to acquire first target information indicating the position of a tracking target and outputs the first target information, a camera-side communication unit that receives second target information from a terminal device, and a camera-side target information correction unit that corrects the first target information on the basis of the second target information.
Type: Application
Filed: December 21, 2021
Publication date: April 14, 2022
Applicant: FUJIFILM Corporation
Inventor: Hiroyuki OSHIMA
-
Publication number: 20220114759
Abstract: A target detection method, an electronic device, a roadside device and a cloud control platform are provided and relate to the technical field of intelligent transportation.
Type: Application
Filed: December 23, 2021
Publication date: April 14, 2022
Inventor: Chunlong XIA
-
Publication number: 20220114760
Abstract: An image analysis apparatus includes one or more processors configured to execute (a) acquiring, from a measurement device, spectral images for a plurality of wavelengths, obtained by imaging a measurement target, (b) acquiring a target range in each of the spectral images, (c) performing multivariate analysis of each pixel based on a gradation value of the pixel for each wavelength in the target range, (d) generating an analysis image including an analysis result of the multivariate analysis for each pixel in the target range, and (e) storing the generated analysis image into a memory.
Type: Application
Filed: October 6, 2021
Publication date: April 14, 2022
Inventor: Kei KUDO
-
Publication number: 20220114761
Abstract: Disclosed herein is a method and apparatus for determining decoded data values for a data element of an array of data elements from an encoded representation of the array of data elements, wherein the decoding comprises determining which, if any, bits are missing for the data value(s) for the data element and selecting based on this an adjustment scheme to be applied for the data value(s) for the data element from a plurality of available adjustment schemes. Also disclosed are a method and apparatus for generating an encoding hint comprising an indication of the one or more encoding parameters that were used to generate the encoded representation, which encoding hint can then be associated with the decoded data and then used when the decoded data is subsequently to be encoded.
Type: Application
Filed: October 12, 2020
Publication date: April 14, 2022
Applicant: Arm Limited
Inventors: Bjorn Fredrik Wictorin, III, Jakob Axel Fries
-
Publication number: 20220114762
Abstract: Disclosed herein are a method for compressing a point cloud based on global motion prediction and compensation and an apparatus for the same. The method includes receiving 3D point cloud data configured with point cloud frames that represent continuous global motion; dividing the point cloud data into point cloud data segments using a histogram generated based on the Z-axis of the point cloud data; performing a global motion search based on an occupancy map for each of the point cloud data segments; and performing motion compression for the point cloud data based on the result of the global motion search performed for each of the point cloud data segments.
Type: Application
Filed: September 16, 2021
Publication date: April 14, 2022
Inventors: Hyuk-Min KWON, Jin-Young LEE, Kyu-Heon KIM, Jun-Sik KIM
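One plausible reading of the Z-axis histogram segmentation is that empty height bins separate the cloud into segments (e.g. ground points vs. elevated structure), each of which then gets its own global motion search. This sketch commits to that reading purely for illustration; the bin count, the empty-bin rule, and all names are assumptions, not details from the abstract.

```python
import numpy as np

def segment_by_z(points, bins=20):
    """points: (N, 3) array. Split into segments of consecutive non-empty
    Z-histogram bins; empty bins act as segment boundaries."""
    z = points[:, 2]
    hist, edges = np.histogram(z, bins=bins)
    segments, current = [], []
    for i, count in enumerate(hist):
        lo, hi = edges[i], edges[i + 1]
        if count == 0:
            if current:                       # close the running segment
                segments.append(np.concatenate(current))
                current = []
            continue
        last = (i == len(hist) - 1)           # last bin is closed on the right
        mask = (z >= lo) & ((z <= hi) if last else (z < hi))
        current.append(points[mask])
    if current:
        segments.append(np.concatenate(current))
    return segments

# Toy cloud: 50 ground-level points and 40 elevated points, separated in Z.
low = np.column_stack([np.zeros(50), np.zeros(50), np.linspace(0.0, 1.0, 50)])
high = np.column_stack([np.zeros(40), np.zeros(40), np.linspace(5.0, 6.0, 40)])
segments = segment_by_z(np.vstack([low, high]))
```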
-
Publication number: 20220114763
Abstract: A method of encoding point cloud data includes determining an amount by which a laser turns for determining points in a point cloud represented by the point cloud data, generating a syntax element indicative of the amount by which the laser turns, wherein a value of the syntax element is a defined value less than the amount by which the laser turns, and signaling the syntax element.
Type: Application
Filed: September 8, 2021
Publication date: April 14, 2022
Inventors: Bappaditya Ray, Adarsh Krishnan Ramasubramonian, Geert Van der Auwera, Marta Karczewicz
-
Publication number: 20220114764
Abstract: Embodiments are disclosed for ground plane estimation (GPE) using a LiDAR semantic network. In an embodiment, a method comprises: obtaining a point cloud from a depth sensor of a vehicle operating in an environment; encoding the point cloud; estimating, using a deep learning network with the encoded point cloud as input, a ground plane in the environment; planning a path through the environment based on a drivable area of the estimated ground plane; and operating the vehicle along the path. The deep learning network includes a two-dimensional (2D) convolutional backbone, a detection head for detecting objects and a GPE head for estimating the ground plane. In an embodiment, point pillars are used to encode the point cloud.
Type: Application
Filed: October 9, 2020
Publication date: April 14, 2022
Inventors: Oscar Olof Beijbom, Venice Erin Baylon Liong
-
Publication number: 20220114765
Abstract: Described is a method for compressing measurement data of a volume which comprises an object, wherein a digital representation of the object comprising a plurality of image information items of the object is generated by the measurement. The method comprises: providing an analysis specification for at least one predetermined region in the measurement volume; determining the measurement data in the measurement volume; defining a subset of the measurement data which corresponds to the at least one predetermined region of the analysis specification; selecting at least one compression rate for the subset on the basis of the analysis specification; selecting a first compression method for a remainder of the measurement data outside the subset, the first compression method having a compression rate; compressing the subset with the selected at least one compression rate, and compressing the remainder of the measurement data by way of the first compression method.
Type: Application
Filed: August 5, 2019
Publication date: April 14, 2022
Inventors: Matthias Flessner, Christoph Poliwoda, Christof Reinhart, Thomas Günther
-
Publication number: 20220114766
Abstract: A three-dimensional data encoding method includes: assigning three-dimensional points to one of layers, based on items of geometry information of the three-dimensional points; searching three-dimensional points surrounding a current three-dimensional point to be encoded, to select, from the three-dimensional points, a three-dimensional point to be referred to when a predicted value of attribute information of the current three-dimensional point is calculated, the current three-dimensional point belonging to a first layer among the layers; and calculating the predicted value of the attribute information of the current three-dimensional point using the three-dimensional point selected. In the searching of the three-dimensional points, a search range for a same layer as the current three-dimensional point is different from a search range for a layer higher than the first layer.
Type: Application
Filed: December 21, 2021
Publication date: April 14, 2022
Inventors: Toshiyasu SUGIO, Noritaka IGUCHI
-
Publication number: 20220114767
Abstract: A method comprising: receiving a reference facial image of a first subject, wherein the reference image represents a specified makeup style applied to a face of the first subject; receiving a target facial image of a target subject without makeup; performing pixel-wise alignment of the reference image to the target image; generating a translation of the reference image to obtain a de-makeup version of the reference image representing the face of the first subject without the specified makeup style; calculating an appearance modification contribution representing a difference between the reference image and the de-makeup version; and adding the calculated appearance modification contribution to the target image to construct a modified target image which represents the specified makeup style applied to the face of the target subject.
Type: Application
Filed: October 8, 2020
Publication date: April 14, 2022
Inventors: Matan SELA, Itai CASPI, Mira AWWAD-KHREISH
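The transfer arithmetic in the last two steps reduces to one line: add the makeup contribution (reference minus its de-makeup version) to the aligned target. The de-makeup translation itself is learned in the patent; this sketch takes it as a given input, and all images are toy arrays.

```python
import numpy as np

def apply_makeup(target, reference, reference_no_makeup):
    """Add the appearance modification contribution of the makeup style
    (reference - de-makeup reference) to an aligned target face image."""
    contribution = reference.astype(float) - reference_no_makeup.astype(float)
    return np.clip(target.astype(float) + contribution, 0.0, 255.0)

# Toy RGB images: the makeup style adds red to the reference face.
reference_no_makeup = np.full((2, 2, 3), 120.0)
reference = reference_no_makeup.copy()
reference[..., 0] += 40.0
target = np.full((2, 2, 3), 100.0)
result = apply_makeup(target, reference, reference_no_makeup)
```

The pixel-wise alignment step earlier in the method is what makes this per-pixel subtraction and addition meaningful across two different faces.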
-
Publication number: 20220114768
Abstract: An information processing device according to the present disclosure includes an inference unit that infers, on the basis of a result of checking a first image against a plurality of images taken in the past, a third image that is an image taken in the past at a position corresponding to a second image to be taken at a next timing of the first image, and a generation unit that generates a fourth image that is an image obtained by correcting the second image on the basis of the third image in a case where the second image is acquired.
Type: Application
Filed: February 6, 2020
Publication date: April 14, 2022
Applicant: SONY GROUP CORPORATION
Inventors: Kazunori KAMIO, Toshiyuki SASAKI
-
Publication number: 20220114769
Abstract: An imaging apparatus includes a body contour imager that obtains a body contour image that shows a body contour of a target portion of a subject by detecting terahertz light radiated from the target portion of the subject with his/her apparel on, an outside shape imager that obtains an outside shape image that shows an outside shape of the apparel in the target portion, combination means that generates a combined image by combining the body contour image and the outside shape image with each other, and output means that provides output of the combined image generated by the combination means.
Type: Application
Filed: October 7, 2021
Publication date: April 14, 2022
Inventors: Takashi OMORI, Atsushi KASATANI
-
Publication number: 20220114770
Abstract: A method, apparatus, and computer program product for processing images by using a convolutional neural network (CNN) are proposed. An original image is received from an image source. The original image has a predefined size and high resolution, and is represented in a first color space supported by the image source. Then, an intermediate image is obtained by downscaling the original image in the first color space, and converted from the first color space to a second color space. Next, a restored image is obtained by upscaling the converted intermediate image to the predefined size of the original image. Said upscaling is performed by using the CNN, which takes the original image and the converted intermediate image as inputs and returns the restored image. The CNN is pre-trained on a set of triplets comprising a past original image, a converted past intermediate image, and a past restored image.
Type: Application
Filed: December 20, 2021
Publication date: April 14, 2022
Inventors: Viktor Vladimirovich SMIRNOV, Youliang YAN, Tao WANG, Xueyi ZOU
-
Publication number: 20220114771
Abstract: For reconstruction in medical imaging, such as reconstruction in MR imaging, the number of iterations in deep learning-based reconstruction may be reduced by including a learnable extrapolation in one or more iterations. Regularization may be provided in fewer than all of the iterations of the reconstruction. The result of either approach alone or both together is better quality reconstruction and/or less computationally expensive reconstruction.
Type: Application
Filed: November 13, 2020
Publication date: April 14, 2022
Inventors: Simon Arberet, Mariappan S. Nadar, Boris Mailhe, Marcel Dominik Nickel
-
Publication number: 20220114772
Abstract: An image processing apparatus includes a first image creation unit, a second image creation unit, and a CNN processing unit. The first image creation unit creates a first tomographic image of an m-th frame using a data group in list data included in the m-th frame. The second image creation unit creates a second tomographic image using a data group in the list data having a data amount larger than that of the data group used in creating the first tomographic image. The CNN processing unit inputs the second tomographic image to a CNN, outputs an output tomographic image from the CNN, trains the CNN based on a comparison between the output tomographic image and the first tomographic image, and repeats the training operation to generate the output tomographic image in each training.
Type: Application
Filed: January 29, 2020
Publication date: April 14, 2022
Applicant: HAMAMATSU PHOTONICS K.K.
Inventors: Fumio HASHIMOTO, Kibo OTE
-
Publication number: 20220114773
Abstract: A generation system and a generation method for a perspective image are disclosed. The present disclosure acquires a tomographic data set of a target object, determines rotation information corresponding to a designated perspective of the target object, orients that perspective toward a projection plane by rotating the tomographic data set or moving the projection plane based on the rotation information, and merges multiple slice images of the tomographic data set toward the projection plane to obtain a 2D image of the target object from that perspective. The present disclosure can effectively generate the 2D image of the designated perspective of the target object.
Type: Application
Filed: October 12, 2021
Publication date: April 14, 2022
Inventors: Tien-He CHEN, Che-Min CHEN, Jia-Wei YAN
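The rotate-then-merge step can be sketched with a small NumPy example. This is a minimal illustration under assumptions (90-degree rotations only, maximum-intensity or average merge), not the disclosed system.

```python
import numpy as np

def perspective_image(volume, k_rot=1, merge="max"):
    """Rotate the tomographic stack so the designated perspective faces the
    projection plane, then merge all slices along the projection axis."""
    rotated = np.rot90(volume, k=k_rot, axes=(0, 2))  # 90-degree steps for the sketch
    if merge == "max":
        return rotated.max(axis=0)     # maximum-intensity merge
    return rotated.mean(axis=0)        # average merge

vol = np.zeros((4, 5, 6))
vol[2, 3, 1] = 7.0                     # a single bright voxel
img = perspective_image(vol, k_rot=0)  # project without rotating
```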
-
Publication number: 20220114774
Abstract: A system and method for automatically generating and rendering a report data structure is provided. The report data structure is formed in a platform independent manner that includes all data for transactions used in the report. The system analyzes the transactions to be included in the report and selects the type of display component based on a ranking score to best highlight the data contained therein.
Type: Application
Filed: October 14, 2020
Publication date: April 14, 2022
Inventors: Manuel Deschamps Rascon, Mark Eli Moreau Roseboom, Jonathan Le, Michael Furtak, Jeffrey Hall Seibert, JR., Wayne Chang
-
Publication number: 20220114775
Abstract: A control device comprising a control unit that causes visual information to be presented continuously from a first position to a second, different position when a first state, in which an operation target is operated on the user's behalf by an operation executing unit, is switched to a second state, in which the operation target is operated by the user.
Type: Application
Filed: October 7, 2021
Publication date: April 14, 2022
Applicant: KABUSHIKI KAISHA TOKAI RIKA DENKI SEISAKUSHO
Inventors: Aya KIMURA, Masahiko MIYATA, Makoto HARAZAWA, Takeshi OHNISHI
-
Publication number: 20220114776
Abstract: Provided are an emoticon package generation method and apparatus, a device and a medium, which relate to the field of graphics processing and in particular to Internet technologies. The specific implementation solution is: determining at least one of associated text of an emoticon picture or a similar emoticon package of an emoticon picture, where the associated text of the emoticon picture includes at least one of main part information, scenario information, emotion information, action information or connotation information; determining target matching text from the at least one of the associated text of the emoticon picture or associated text of the similar emoticon package; and superimposing the target matching text on the emoticon picture to generate a new emoticon package.
Type: Application
Filed: July 3, 2020
Publication date: April 14, 2022
Applicant: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Xianglong XU, Jianfeng ZHU, Jiahua CUI, Jing XIANG, Hongtao LI, Chen HAN, Shufei LIN, Ying SU, Shicao LI, Huiqin LI, Xiaochu GAN, Fei GAO, Jiale YANG, Xueyun MA, Guohong LI
-
Publication number: 20220114777
Abstract: Methods, apparatuses, devices and computer-readable storage media for action transfer are provided. In one aspect, a method includes: obtaining an initial video involving an action sequence of an initial object, identifying a two-dimensional skeleton keypoint sequence of the initial object from multiple image frames in the initial video, converting the two-dimensional skeleton keypoint sequence of the initial object into a three-dimensional skeleton keypoint sequence of a target object, and generating a target video involving an action sequence of the target object based on the three-dimensional skeleton keypoint sequence of the target object.
Type: Application
Filed: December 20, 2021
Publication date: April 14, 2022
Inventors: Wenyan WU, Wentao ZHU, Zhuoqian YANG
-
Publication number: 20220114778
Abstract: Telematics systems and methods are described for generating interactive animated guided user interfaces (GUIs). A telematics cloud platform is configured to receive vehicular telematics data from a telematics device onboard a vehicle. A GUI value compression component determines, based on the vehicular telematics data, a plurality of GUI position values and a plurality of corresponding GUI time values. A geospatial animation app receives the plurality of GUI position values and the plurality of corresponding GUI time values. The geospatial animation app implements an interactive animated GUI that renders a plurality of geospatial graphics or graphical routes on a geographic area map via a display device. The geospatial graphics or graphical routes are rendered to have different visual forms based on differences between respective GUI position values and corresponding GUI time values.
Type: Application
Filed: December 18, 2021
Publication date: April 14, 2022
Inventors: Micah Wind Russo, Theobolt N. Leung, Gareth Finucane, Kenneth Jason Sanchez
-
Publication number: 20220114779
Abstract: A graphics processing hardware pipeline is arranged to perform an edge test or a depth calculation. Each hardware arrangement includes a microtile component hardware element, multiple pixel component hardware elements, one or more subsample component hardware elements and a final addition and comparison unit. The microtile component hardware element calculates a first output using a sum-of-products and coordinates of a microtile within a tile in the rendering space. Each pixel component hardware element calculates a different second output using the sum-of-products and coordinates for different pixels defined relative to an origin of the microtile. The subsample component hardware element calculates a third output using the sum-of-products and coordinates for a subsample position defined relative to an origin of a pixel.
Type: Application
Filed: December 21, 2021
Publication date: April 14, 2022
Inventor: Casper Van Benthem
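The decomposition above works because the edge function E(x, y) = A·x + B·y + C is linear: if a sample position is the sum of microtile, pixel and subsample coordinates, the three partial sums-of-products add up to a direct evaluation. A small sketch of that identity (illustrative names, not the hardware design):

```python
def edge_terms(a, b, c, microtile, pixel, subsample):
    """Evaluate A*x + B*y + C hierarchically: the sample position is the sum
    of microtile, pixel-within-microtile and subsample-within-pixel coords."""
    mt = a * microtile[0] + b * microtile[1] + c   # microtile component (carries C)
    px = a * pixel[0] + b * pixel[1]               # pixel component
    ss = a * subsample[0] + b * subsample[1]       # subsample component
    return mt, px, ss

def edge_test(a, b, c, microtile, pixel, subsample):
    """Final addition and comparison: the sample is inside the edge if E >= 0."""
    mt, px, ss = edge_terms(a, b, c, microtile, pixel, subsample)
    return (mt + px + ss) >= 0

a, b, c = 2.0, -1.0, 3.0
microtile, pixel, subsample = (16.0, 8.0), (3.0, 2.0), (0.25, 0.75)
x = microtile[0] + pixel[0] + subsample[0]
y = microtile[1] + pixel[1] + subsample[1]
direct = a * x + b * y + c                                    # single-shot evaluation
mt, px, ss = edge_terms(a, b, c, microtile, pixel, subsample)
hierarchical = mt + px + ss                                   # final addition
```

Sharing the microtile and pixel terms across many subsamples is what saves hardware: only the cheap subsample term differs per sample.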
-
Publication number: 20220114780
Abstract: Ray tracing systems and methods are described for processing rays. A parent shader is executed for a ray. The parent shader includes a shader recursion instruction which invokes a child shader. The execution of the parent shader for the ray is suspended. Intermediate data for the parent shader is stored in a heap of memory, wherein the intermediate data comprises state data and payload data. Storing intermediate data comprises allocating a first set of registers in the heap of memory for storing payload data, and allocating a second set of registers in the heap of memory for storing state data. When the parent shader is ready to resume, intermediate data for the parent shader is read from the heap of memory, and the execution of the parent shader for the ray is resumed.
Type: Application
Filed: September 24, 2021
Publication date: April 14, 2022
Inventors: Daniel Barnard, Alistair Goudie
-
Publication number: 20220114781
Abstract: Apparatus and method for encoding sub-primitives to improve ray tracing efficiency. For example, one embodiment of an apparatus comprises: a ray generator to generate a plurality of rays in a ray tracing graphics pipeline; a sub-primitive generator to subdivide each primitive of a plurality of primitives into a plurality of sub-primitives; a sub-primitive encoder to identify a first subset of the plurality of sub-primitives as being fully transparent and to identify a second subset of the plurality of sub-primitives as being fully opaque; and wherein the first subset of the plurality of sub-primitives identified as being fully transparent are culled prior to further processing of each respective primitive.
Type: Application
Filed: October 26, 2021
Publication date: April 14, 2022
Inventor: Holger GRUEN
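The encoder's classification step can be sketched as follows: each sub-primitive is sorted by the alpha values it covers into fully transparent (culled), fully opaque, or mixed. A minimal sketch under assumed names, not the patented encoder:

```python
def classify_subprimitives(alpha_grids):
    """Split a primitive's sub-primitives by the alpha values each covers:
    fully transparent ones are culled before traversal, fully opaque ones
    need no further transparency handling, the rest remain mixed."""
    culled, opaque, mixed = [], [], []
    for i, alphas in enumerate(alpha_grids):
        if max(alphas) == 0.0:
            culled.append(i)      # fully transparent: cull prior to processing
        elif min(alphas) == 1.0:
            opaque.append(i)      # fully opaque: hit can be accepted directly
        else:
            mixed.append(i)       # partially transparent: needs per-hit testing
    return culled, opaque, mixed

subprims = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]]  # toy per-sub-primitive alphas
culled, opaque, mixed = classify_subprimitives(subprims)
```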
-
Publication number: 20220114782
Abstract: An apparatus comprises a receiver (301) for receiving an image representation of a scene. A determiner (305) determines viewer poses for a viewer with respect to a viewer coordinate system. An aligner (307) aligns a scene coordinate system with the viewer coordinate system by aligning a scene reference position with a viewer reference position in the viewer coordinate system. A renderer (303) renders view images for different viewer poses in response to the image representation and the alignment of the scene coordinate system with the viewer coordinate system. An offset processor (309) determines the viewer reference position in response to an alignment viewer pose where the viewer reference position is dependent on an orientation of the alignment viewer pose and has an offset with respect to a viewer eye position for the alignment viewer pose. The offset includes an offset component in a direction opposite to a view direction of the viewer eye position.Type: Application
Filed: January 19, 2020
Publication date: April 14, 2022
Inventors: FONS BRULS, CHRISTIAAN VAREKAMP, BART KROON
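The offset processor's core computation — placing the viewer reference position behind the eye, opposite the view direction — reduces to simple vector arithmetic. An illustrative sketch with assumed units (metres) and names:

```python
import numpy as np

def viewer_reference_position(eye_pos, view_dir, offset):
    """Offset the reference position from the eye position along the
    direction opposite to the view direction of the alignment viewer pose."""
    d = np.asarray(view_dir, float)
    d = d / np.linalg.norm(d)                     # unit view direction
    return np.asarray(eye_pos, float) - offset * d

# Eye at 1.6 m height looking along +z; reference sits 0.1 m behind the eye.
ref = viewer_reference_position([0.0, 1.6, 0.0], [0.0, 0.0, 1.0], 0.1)
```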
-
Publication number: 20220114783
Abstract: An image processing apparatus includes processing circuitry configured to render an image from volumetric image data based on illumination from at least one simulated light source. The illumination is determined from a current portion of light intensity and at least one trailing portion of light intensity if a position or other property of the at least one simulated light source is changed.
Type: Application
Filed: October 9, 2020
Publication date: April 14, 2022
Applicant: CANON MEDICAL SYSTEMS CORPORATION
Inventor: Magnus WAHRENBERG
-
Publication number: 20220114784
Abstract: A device for generating a model of an object with superposition image data in a virtual environment, including a plurality of cameras configured to generate temperature false color images of the object and a background of the object; a computer processor configured to remove the background of the object from the temperature false color images, thereby obtaining an image data stream of the object, to extract, from the image data stream, a model of the object from a real environment, to insert the extracted model into the virtual environment, and to superpose at least part of the model with superposition image data so as to generate the model of the object with superposition image data in the virtual environment; and a monitor configured to display the model of the object with superposition image data in the virtual environment.
Type: Application
Filed: December 20, 2021
Publication date: April 14, 2022
Inventor: Peter SCHICKEL
-
Publication number: 20220114785
Abstract: A three-dimensional model generation method includes: generating a first three-dimensional model of a predetermined region from first frames; projecting the first three-dimensional model onto at least one second frame; and generating a second three-dimensional model in accordance with first pixels in the at least one second frame onto which the first three-dimensional model is not projected.
Type: Application
Filed: December 23, 2021
Publication date: April 14, 2022
Inventors: Kensho TERANISHI, Toru MATSUNOBU, Toshiyasu SUGIO, Satoshi YOSHIKAWA, Masaki FUKUDA
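The selection of "pixels onto which the first model is not projected" amounts to a coverage mask over the second frame. A minimal sketch under assumptions (integer pixel coordinates, out-of-frame projections ignored), not the disclosed method:

```python
import numpy as np

def uncovered_pixels(frame_shape, projected_coords):
    """Mark pixels of the second frame not covered by the projection of the
    first three-dimensional model; these pixels seed the second model."""
    covered = np.zeros(frame_shape, dtype=bool)
    for (r, c) in projected_coords:
        if 0 <= r < frame_shape[0] and 0 <= c < frame_shape[1]:
            covered[r, c] = True              # pixel receives a projected point
    return ~covered                           # pixels left for the second model

# Two of nine pixels are covered; one projection falls outside the frame.
mask = uncovered_pixels((3, 3), [(0, 0), (1, 1), (5, 5)])
```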