Patent Applications Published on October 31, 2024
-
Publication number: 20240362799
Abstract: A human body motion capture method and apparatus, a device, a medium, and a program are provided. Pose information of a headset is obtained. A human body image shot by the headset is obtained. Motion information of a key node of a human body is determined based on the human body image, and the motion information of the key node includes position information or pose information of the key node. Motion information of a head is determined based on the pose information of the headset, and the motion information of the head includes position information or pose information of the head. Posture information of the human body is determined by using an inverse kinematics method based on the motion information of the key node and the motion information of the head.
Type: Application
Filed: April 19, 2024
Publication date: October 31, 2024
Inventors: Zeyi LIN, Yang ZHANG, Zhen FAN
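The abstract names inverse kinematics without detailing it. As a minimal illustration of the general technique (not the patent's specific method), the following sketch solves analytic inverse kinematics for a planar two-link chain; the link lengths and targets are arbitrary placeholders.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic inverse kinematics for a planar two-link chain.

    Given a reachable target (x, y) and link lengths l1, l2, return joint
    angles (t1, t2) that place the end effector at the target.
    """
    d2 = x * x + y * y
    # Law of cosines for the elbow angle; clamp for numerical safety.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))
    t2 = math.acos(c2)
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2), l1 + l2 * math.cos(t2))
    return t1, t2

def forward(t1, t2, l1, l2):
    """Forward kinematics, used to verify an IK solution."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y
```

A full-body solver as described in the abstract would extend this idea to many joints, typically iteratively, constrained by the head pose and key-node positions.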
-
Publication number: 20240362800
Abstract: A tracking method for a tracking measurement system including a tracking apparatus is provided. The method includes acquiring a current tracking distance between the tracking apparatus and a to-be-tracked object, and determining, based on the current tracking distance, a tracking parameter level for the tracking apparatus.
Type: Application
Filed: April 25, 2024
Publication date: October 31, 2024
Applicant: SCANTECH (HANGZHOU) CO., LTD.
Inventors: Shangjian Chen, Jiangfeng Wang, Jun Zheng, Yuju Yang, Lidan Zhang
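The core step of this abstract is mapping a measured distance to a discrete parameter level. A trivial sketch of that mapping is below; the thresholds are illustrative placeholders, not values from the patent.

```python
def tracking_level(distance_m, thresholds=(0.5, 1.5, 3.0)):
    """Map a tracking distance (meters) to a discrete parameter level.

    Thresholds are hypothetical: level 0 = near range, and each higher
    level covers a progressively farther distance band.
    """
    for level, limit in enumerate(thresholds):
        if distance_m <= limit:
            return level
    return len(thresholds)
```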
-
Publication number: 20240362801
Abstract: An optical tactile sensor module is provided. The optical tactile sensor includes a housing, an elastic part including a first surface for contacting an object, a plurality of markers disposed at the elastic part and being adjacent to the first surface of the elastic part, and a camera unit configured to capture an image of a movement of at least some of the plurality of markers on the first surface when the object contacts the first surface of the elastic part, wherein the plurality of markers include a first marker including a first color, and a second marker including a second color different from the first color.
Type: Application
Filed: July 9, 2024
Publication date: October 31, 2024
Inventors: Hyunseok HONG, Jinpyo GWAK
-
Publication number: 20240362802
Abstract: A system determining motion models for aligning scene content captured by different image sensors is configurable to access a first motion model generated based upon a set of feature correspondences that includes (i) an inlier set used to determine model parameters for the first motion model and (ii) an outlier set. The system is also configurable to define a modified set of feature correspondences that includes the outlier set from the set of feature correspondences. The system is also configurable to generate a second motion model by using the modified set of feature correspondences to determine model parameters for the second motion model.
Type: Application
Filed: April 25, 2023
Publication date: October 31, 2024
Inventors: Michael BLEYER, Pascal PARÉ, Paul LEE, Aleksander Bogdan BAPST
-
Publication number: 20240362803
Abstract: A method includes obtaining an image of a scene using an imaging sensor on a vehicle, where the image captures at least one of: one or more objects around the vehicle or one or more lane marking lines. The method also includes identifying extrinsic calibration parameters associated with the imaging sensor. The method further includes, for a specified pixel in the image, converting a position of the specified pixel within a pixel coordinate system associated with the image to a corresponding position within a road coordinate system based on the extrinsic calibration parameters. The specified pixel represents a point in the image associated with at least one of the one or more objects or the one or more lane marking lines. In addition, the method includes identifying a distance to the point in the image based on the corresponding position of the specified pixel within the road coordinate system.
Type: Application
Filed: April 27, 2023
Publication date: October 31, 2024
Inventor: Andrew C. Kobach
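The pixel-to-road conversion the abstract describes reduces, for a flat road, to intersecting a camera ray with the ground plane. The sketch below assumes a simple pinhole camera whose only extrinsics are height and pitch; all parameter values are illustrative, and a real system would use the full calibrated extrinsics from the method.

```python
import math

def pixel_to_road(u, v, f, cx, cy, cam_height, pitch):
    """Project an image pixel onto a flat road plane (Z = 0).

    Assumes a pinhole camera at height cam_height (m), tilted down by
    pitch (rad), with focal length f and principal point (cx, cy) in
    pixels. Returns (X, Y): forward and lateral distance in meters.
    """
    xc, yc = u - cx, v - cy
    # Denominator is the vertical component of the pixel's viewing ray;
    # it must point below the horizon for a ground intersection to exist.
    denom = yc * math.cos(pitch) + f * math.sin(pitch)
    if denom <= 0:
        raise ValueError("pixel is at or above the horizon")
    t = cam_height / denom
    X = t * (f * math.cos(pitch) - yc * math.sin(pitch))  # forward (m)
    Y = -t * xc                                           # lateral (m, left positive)
    return X, Y
```

The distance to the point is then simply the norm of (X, Y) in the road frame.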
-
Publication number: 20240362804
Abstract: An image processing apparatus is provided. The apparatus acquires input data including a captured image and/or information relating to the captured image. The apparatus acquires a feature of the input data by performing processing on the input data using a neural network. The apparatus generates an integrated feature by integrating the feature and at least some of the input data. The apparatus generates an estimation result of at least one of a defocus range and a depth range for a subject within the captured image, by performing processing on the integrated feature.
Type: Application
Filed: April 18, 2024
Publication date: October 31, 2024
Inventor: Shuhei OGAWA
-
Publication number: 20240362805
Abstract: Systems and techniques are provided for wireless communication. For example, a process can include obtaining a first image of an environment, the first image including a pattern of light. The process can further include determining a pixel window for a pixel of the first image; determining a set of transform pixels based on the pattern of light, wherein the set of transform pixels are a subset of pixels from the pixel window; performing a census transform based on the pixel and the set of transform pixels to obtain census transform information for the first image; and generating depth data based on the census transform information for the first image and census transform information for a second image.
Type: Application
Filed: April 28, 2023
Publication date: October 31, 2024
Inventors: James Wilson NASH, Kalin Mitkov ATANASSOV, Jason CHUNG
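A census transform encodes a pixel as a bitstring of brightness comparisons against neighbors, and depth matching then compares bitstrings by Hamming distance. The minimal sketch below illustrates this; the choice of comparison offsets (the abstract's "transform pixels") is purely illustrative.

```python
def census_bits(img, u, v, offsets):
    """Census transform at pixel (u, v): one bit per comparison pixel,
    set when the neighbor is darker than the center. `offsets` selects
    the subset of window pixels to compare (hypothetical here)."""
    center = img[v][u]
    bits = 0
    for du, dv in offsets:
        bits = (bits << 1) | (1 if img[v + dv][u + du] < center else 0)
    return bits

def hamming(a, b):
    """Matching cost between two census signatures."""
    return bin(a ^ b).count("1")
```

In a stereo or structured-light setup, depth would come from finding, per pixel, the second-image signature with the lowest Hamming cost.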
-
Publication number: 20240362806
Abstract: Disclosed is a method and a scanning device for digital image scanning of a surface of an object. The scanning device comprises a light source for illuminating the object. The scanning device comprises a first image sensor configured for capturing a first set of images of a first region of the object, where the first region is illuminated by the light source. The scanning device comprises a second image sensor configured for capturing a second set of images of a second region of the object, where the second region is illuminated by the light source. The scanning device comprises a laser component configured for projecting laser light onto the object. The projected laser light has a shape of a cross on the object. The first image sensor and the second image sensor are configured to capture/cover an overlapping region on the object. The overlapping region comprises a part of the first region of the object and a part of the second region of the object.
Type: Application
Filed: April 18, 2024
Publication date: October 31, 2024
Inventors: Bo Kjær Olsen, Søren Thuun Jensen, Nis Engholm
-
Publication number: 20240362807
Abstract: An example device for processing image data includes a processing unit configured to: receive, from a camera of a vehicle, a first image frame at a first time and a second image frame at a second time; receive, from an odometry unit of the vehicle, a first position of the vehicle at the first time and a second position of the vehicle at the second time; calculate a pose difference value representing a difference between the second and first positions; form a pose frame having a size corresponding to the first and second image frames and sample values including the pose difference value; and provide the first and second image frames and the pose frame to a neural networking unit configured to calculate depth for objects in the first image frame and the second image frame, the depth for the objects representing distances between the objects and the vehicle.
Type: Application
Filed: April 28, 2023
Publication date: October 31, 2024
Inventors: Yunxiao Shi, Amin Ansari, Sai Madhuraj Jadhav, Avdhut Joshi
-
Publication number: 20240362808
Abstract: Examples disclosed herein may involve a computing system that is operable to (i) receive image data and corresponding secondary sensor data, (ii) generate a reconstruction of a map from the image data, wherein the reconstruction comprises sequential pose information, (iii) determine constraints from the secondary sensor data, and (iv) validate the reconstruction of the map by applying the determined constraints from the secondary sensor data to the determined sequential pose information from the reconstruction of the map and determining whether the sequential pose information fails to satisfy any of the constraints determined from the secondary sensor data.
Type: Application
Filed: May 6, 2024
Publication date: October 31, 2024
Inventors: Luca Del Pero, Robert Kesten
-
Publication number: 20240362809
Abstract: An image processing device includes a measurement part configured to process a first image outputted from a first imaging part and a second image outputted from a second imaging part placed so as to be separated from the first imaging part, and search for a pixel block corresponding to a target pixel block on the first image, in a search range defined on the second image. The search range extends in a direction of separation between the first imaging part and the second imaging part. The measurement part executes a plurality of search processes for which search start positions and search end positions are different from each other, for the search range.
Type: Application
Filed: July 10, 2024
Publication date: October 31, 2024
Inventors: Masanori ERA, Masaharu FUKAKUSA
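The block search this abstract describes is essentially classic stereo block matching along the baseline direction. A single-pass sketch using sum-of-absolute-differences is shown below; the patent's contribution (multiple search passes with different start/end positions) is not reproduced, and block size and disparity range are illustrative.

```python
def best_disparity(left, right, x, y, block, max_disp):
    """Find the disparity of the block centered at (x, y) in `left` by
    scanning a horizontal search range in `right` using the
    sum-of-absolute-differences (SAD) cost. One search pass only."""
    half = block // 2

    def sad(d):
        total = 0
        for dy in range(-half, half + 1):
            for dx in range(-half, half + 1):
                total += abs(left[y + dy][x + dx] - right[y + dy][x + dx - d])
        return total

    # The disparity with the lowest matching cost wins.
    return min(range(0, max_disp + 1), key=sad)
```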
-
Publication number: 20240362810
Abstract: In variants, the method for change analysis can include detecting a rare change in a geographic region by comparing a first representation and a second representation, extracted from a first geographic region measurement and a second geographic region measurement sampled at a first time and a second time, respectively, using a common-change-agnostic model.
Type: Application
Filed: March 18, 2024
Publication date: October 31, 2024
Applicant: Cape Analytics, Inc.
Inventors: Matthieu Portail, Christopher Wegg, Fabian Richter
-
Publication number: 20240362811
Abstract: Disclosed is an incision foil made of a sterile, thin adhesive plastic film with a defined pattern printed on it (e.g. a fine grid pattern) which can be stuck e.g. on a patient's skin surface and which marks the anatomical region of interest. Using a camera, images are acquired of the attached film and the deformation of the pattern is digitized. With a computer vision algorithm the surface of the patient, which corresponds to the surface of the film, is reconstructed from the detected pattern features in the images in comparison to the known original undeformed pattern. Disclosed is also a method for determining a geometry of the surface of the patient using the incision foil.
Type: Application
Filed: July 5, 2024
Publication date: October 31, 2024
Inventor: Christian Schmaler
-
Publication number: 20240362812
Abstract: A volume measurement method and apparatus using a depth camera, and a computer-readable medium, are provided. A first depth image of a reference countertop is acquired which includes coordinate and depth data of pixels, the reference countertop adapted for placement of a measured object. The coordinate and depth data are substituted into countertop empirical equations to obtain simulated images. Each of the countertop empirical equations represents a height variation pattern of the reference countertop. A correlation between each of the simulated images and the first depth image is calculated. A simulated image with the highest correlation is used as a target image of the reference countertop. A countertop height of the reference countertop is obtained based on the target image, a height of the measured object is calculated using the countertop height, and a volume of the measured object is calculated using the height. These disclosures improve accuracy of object volume measurement.
Type: Application
Filed: April 24, 2022
Publication date: October 31, 2024
Inventors: Conghan Cao, Jiang Yin, Song Zhang, Chao Fang, Hongqing Song, Zhenyu Dai, Fenping Qian, Huanbing Cheng, Shenhui Wang
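Once the countertop height is known, object volume follows by integrating per-pixel heights over the depth image. The sketch below shows only that final integration step, with the countertop distance simply passed in (in the patent it comes from fitting the countertop empirical equations); the noise threshold and units are illustrative.

```python
def object_volume(depth, table_depth, pixel_area_mm2, min_height_mm=2.0):
    """Integrate object volume from a top-down depth image.

    depth: 2D list of distances (mm) from the camera to each surface point.
    table_depth: distance (mm) from the camera to the reference countertop.
    Pixels standing higher than min_height_mm above the countertop each
    contribute height * pixel footprint area to the total volume.
    """
    volume_mm3 = 0.0
    for row in depth:
        for d in row:
            h = table_depth - d  # height above the countertop
            if h > min_height_mm:
                volume_mm3 += h * pixel_area_mm2
    return volume_mm3
```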
-
Publication number: 20240362813
Abstract: A computer-implemented method for determination of a volume of calcium in an aorta comprises receiving a medical image dataset of the aorta; determining an aorta center line of the aorta based on the medical image dataset; determining landmarks on the aorta based on the medical image dataset; determining an aorta mask of the aorta based on the medical image dataset; applying the aorta mask to the medical image dataset to create a masked medical image dataset; creating a calcium mask based on the masked medical image dataset; determining at least one aorta segment based on the aorta mask, the aorta center line and the landmarks; determining the volume of calcium of the at least one aorta segment based on the calcium mask and the at least one aorta segment; and providing the volume of calcium of the at least one aorta segment.
Type: Application
Filed: April 25, 2024
Publication date: October 31, 2024
Applicant: Siemens Healthineers AG
Inventors: Jonathan SPERL, Saikiran RAPAKA, Juraj SUTIAK, Jana ORAVCOVA
-
Publication number: 20240362814
Abstract: A method for inspecting a folding portion formed by folding an outer portion of a secondary battery case includes: acquiring an image of a target inspection portion, which is a portion of the folding portion in which measurement is performed; extracting N curve profiles including a folding vertex of the folding portion from an inside of the target inspection portion; and calculating a radius of curvature around the folding vertex from the N curve profiles.
Type: Application
Filed: April 25, 2024
Publication date: October 31, 2024
Inventors: Seung Hyeon CHEON, Sung Yeop KIM, Jun Hee JUNG, Seung Won CHOI
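One simple way to estimate a radius of curvature from a curve profile is to fit a circle through three profile points (the vertex and two flanking points) via the circumradius formula R = abc / (4 · area). This is a generic sketch, not the patent's specific computation; averaging over the N profiles would refine the estimate.

```python
import math

def circumradius(p1, p2, p3):
    """Radius of the circle passing through three 2D points."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    # Twice the signed triangle area via the cross product, then halve.
    area = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
               - (p3[0] - p1[0]) * (p2[1] - p1[1])) / 2.0
    if area == 0:
        return float("inf")  # collinear points: no curvature
    return a * b * c / (4.0 * area)
```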
-
Publication number: 20240362815
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images.
Type: Application
Filed: April 20, 2023
Publication date: October 31, 2024
Inventors: Jeremy Joachim, Archie Bagnall
-
Publication number: 20240362816
Abstract: An electronic device, a method and a computer program product for identifying objects in an image. The method includes capturing a first image within a field of view of a first camera of an electronic device and receiving object data from a second electronic device. The method further includes determining, based on the first object data, if the first image contains a first tagged object within the field of view. In response to determining that the first image contains the first tagged object within the field of view, the method further includes mapping, based on the first object data, the first tagged object to a first pixel location within the first image and generating first meta-data associated with the first image. The method further includes storing the first image with the first meta-data to a memory of the electronic device.
Type: Application
Filed: April 30, 2023
Publication date: October 31, 2024
Inventors: SANJEEV KUMAR POLURU VENKATA, JEEVITHA JAYANTH, SINDHU CHAMATHAKUNDIL
-
Publication number: 20240362817
Abstract: A position estimation device that estimates a position of a moving object includes: an acquisition unit that acquires a captured image including the moving object from an imaging device; a position calculation unit that calculates a local coordinate point that indicates the position of the moving object in a local coordinate system using the captured image; a type identification unit that identifies a type of the moving object included in the captured image; and a position transformation unit that transforms the local coordinate point into a moving object coordinate point that indicates the position of the moving object in a global coordinate system using an imaging parameter calculated based on the position of the imaging device in the global coordinate system and a moving object parameter determined according to the type of the moving object.
Type: Application
Filed: March 20, 2024
Publication date: October 31, 2024
Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Kento IWAHORI, Daiki YOKOYAMA
-
Publication number: 20240362818
Abstract: A method of determining a pose of a target object in a query image may include: obtaining a query image; obtaining a plurality of reference images corresponding to the query image; and determining a pose of a target object based on a first semantic feature corresponding to the query image and a second semantic feature corresponding to each of the plurality of reference images.
Type: Application
Filed: April 26, 2024
Publication date: October 31, 2024
Applicant: Samsung Electronics Co., Ltd.
Inventors: Jingrui SONG, Weiming LI, Qiang WANG, Soonyong CHO, Xiaoxuan YU, SUNG YOUNG HUN
-
Publication number: 20240362819
Abstract: An image processing apparatus includes at least one processor, and a memory coupled to the at least one processor, the memory storing instructions that, when executed by the at least one processor, cause the at least one processor to detect a region of a human body from an image, generate, from the image, a first clipped image including the region of the human body and a second clipped image different from the first clipped image and including the region of the human body, detect joint points of the human body from the first clipped image to generate joint point information on the human body, convert the joint point information so as to have spatial information coincident with spatial information on the second clipped image, and estimate a posture of the human body based on the converted joint point information and the second clipped image.
Type: Application
Filed: April 23, 2024
Publication date: October 31, 2024
Inventor: SHINJI YAMAMOTO
-
Publication number: 20240362820
Abstract: An image processing method may be applied to a movable platform, and the movable platform may comprise a first vision sensor and a second vision sensor. The method may include obtaining a first localized image of the first vision sensor within an overlapping visual range, obtaining a second localized image of the second vision sensor within the overlapping visual range; acquiring an image captured by the first vision sensor at a first moment and an image captured at a second moment, the first vision sensor being positioned in space at the first moment differently than at the second moment; and determining a relative positional relationship between an object in the space where the movable platform is located and the movable platform based on the first localized image, the second localized image, the image captured at the first moment and the image captured at the second moment.
Type: Application
Filed: July 9, 2024
Publication date: October 31, 2024
Applicant: SZ DJI TECHNOLOGY CO., LTD.
Inventors: Jian YANG, You ZHOU, Zhenfei YANG
-
Publication number: 20240362821
Abstract: In implementations of systems for generating image metadata using a compact color space, a computing device implements a color system to receive input data describing pixels of a digital image and corresponding RGB values of the pixels. The color system assigns a color of a compact color space to each of the pixels based on the corresponding RGB values of the pixels. The compact color space includes a subset of colors included in an RGB color space. The color system computes a histogram of colors of the compact color space and determines a particular color of the compact color space based on the histogram. The color system generates color metadata for the digital image describing a natural language name of the particular color of the compact color space.
Type: Application
Filed: April 27, 2023
Publication date: October 31, 2024
Applicant: Adobe Inc.
Inventors: Nimish Srivastav, Shankar Venkitachalam, Satya Deep Maheshwari, Mihir Naware, Deepak Pai
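The pipeline in this abstract (quantize each pixel to a small named palette, histogram the assignments, pick the modal color) is easy to sketch. The palette below is a tiny illustrative stand-in; the patent does not specify its compact color space.

```python
# A tiny "compact color space": a handful of named anchor colors.
# These anchors and names are hypothetical examples only.
PALETTE = {
    "red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255),
    "black": (0, 0, 0), "white": (255, 255, 255),
}

def nearest_color(rgb):
    """Assign a pixel the nearest palette color by squared RGB distance."""
    return min(PALETTE, key=lambda name: sum(
        (a - b) ** 2 for a, b in zip(rgb, PALETTE[name])))

def dominant_color(pixels):
    """Histogram the palette assignments and return the modal color name,
    which would become the natural-language color metadata."""
    counts = {}
    for rgb in pixels:
        name = nearest_color(rgb)
        counts[name] = counts.get(name, 0) + 1
    return max(counts, key=counts.get)
```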
-
Publication number: 20240362822
Abstract: There is provided a signal processing device and a signal processing method capable of accurately acquiring distance information in a case where a transparent subject is present. The signal processing device includes an acquisition unit that acquires histogram data of a flight time of irradiation light to a subject, a transparent subject determination unit that determines whether or not the subject is a transparent subject on the basis of peak information indicated by the histogram data and three-dimensional coordinates of the subject calculated on the basis of the histogram data, and an output unit that outputs the three-dimensional coordinates of the subject in which color information or three-dimensional coordinates of the subject is corrected on the basis of a transparent subject determination result of the transparent subject determination unit.
Type: Application
Filed: February 22, 2022
Publication date: October 31, 2024
Applicant: Sony Group Corporation
Inventors: Motonobu FUJIOKA, Yusuke MORIUCHI, Kenichiro NAKAMURA, Takayuki SASAKI, Hajime MIHARA
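A transparent surface typically returns more than one peak in a time-of-flight histogram (the front surface plus whatever lies behind it). The sketch below shows one crude peak-based heuristic in that spirit; it is an illustration of the idea, not the patent's determination logic, and the noise threshold is hypothetical.

```python
def histogram_peaks(hist, min_count):
    """Return indices of local maxima in a time-of-flight histogram
    whose counts reach min_count (an illustrative noise threshold)."""
    peaks = []
    for i in range(1, len(hist) - 1):
        if hist[i] >= min_count and hist[i] > hist[i - 1] and hist[i] >= hist[i + 1]:
            peaks.append(i)
    return peaks

def maybe_transparent(hist, min_count=5):
    """Crude check: two or more strong return peaks hint at a
    transparent subject in front of another surface."""
    return len(histogram_peaks(hist, min_count)) >= 2
```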
-
Publication number: 20240362823
Abstract: An encoding method and a decoding method are provided. The encoding method includes the following. A tree structure for geometry information of a point cloud is obtained, where the tree structure has at least two node-layers, and each of the at least two node-layers includes at least one node. Planar-encoding-mode eligibility corresponding to a first node-layer of the tree structure is determined. Whether a first node in the first node-layer is encoded using a planar encoding mode is determined according to the planar-encoding-mode eligibility.
Type: Application
Filed: July 10, 2024
Publication date: October 31, 2024
Inventors: Fuzheng YANG, Junyan HUO, Ming LI
-
Publication number: 20240362824
Abstract: The method of encoding/decoding a point cloud sensed by any type of sensor following a sensing path obtains coarse representations of sensed points and encodes the sensing path and the coarse representations. The sensing path and coarse representations of points are decoded, and points of the point cloud are reconstructed from the decoded sensing path and the decoded coarse representations. The coarse representations of sensed points of the point cloud are coarse points defined in a two-dimensional angular coordinate space, and a coarse point is obtained by shifting a sensing point in the two-dimensional angular coordinate space with shifting values that depend on the sensor index associated with the sensor that sensed the point P of the point cloud.
Type: Application
Filed: June 17, 2022
Publication date: October 31, 2024
Inventors: Sebastien LASSERRE, Jonathan TAQUET
-
Publication number: 20240362825
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a transformer-based encoder-decoder neural network architecture for generating alpha mattes for digital images. Specifically, the disclosed system utilizes a transformer encoder to generate patch-based encodings from a digital image and a trimap segmentation by generating patch encodings for image patches and comparing the patch encodings to areas of the digital image. Additionally, the disclosed system generates modified patch-based encodings utilizing a plurality of neural network layers. The disclosed system also generates an alpha matte for the digital image from the patch-based encodings utilizing a decoder that includes a plurality of upsampling layers connected to a plurality of neural network layers via skip connections.
Type: Application
Filed: July 2, 2024
Publication date: October 31, 2024
Inventors: Brian Price, Yutong Dai, He Zhang
-
Publication number: 20240362826
Abstract: A server device provides a social media platform. The server device includes one or more processors configured to execute instructions stored in associated memory to send instructions to a client device to cause the client device to display a graphical user interface (GUI) of the social media platform, receive at least one image of a face of a user of the client device and a selection of at least one predetermined style via the GUI, using the at least one image and the at least one predetermined style as input, generate a plurality of artificial intelligence (AI) profile pictures via an AI model, and send the plurality of AI profile pictures to the client device.
Type: Application
Filed: July 19, 2023
Publication date: October 31, 2024
Inventors: Zichun Wang, Xiaotong Ma, Kin Chung Wong, Jonathan Guzi, Jing Liu, Hao Qiu, Siqi Tan, Siyuan Chen
-
Publication number: 20240362827
Abstract: A method of assisting driving a vehicle. The method is implemented by a system including an HMD and a positioning module mounted on the vehicle, wherein the HMD includes a screen and a pair of video cameras located on opposite sides of the screen along a main length direction of the screen.
Type: Application
Filed: March 18, 2022
Publication date: October 31, 2024
Applicant: POLITECNICO DI MILANO
Inventors: Matteo CORNO, Luca FRANCESCHETTI, Sergio Matteo SAVARESI, Marco CENTURIONI
-
Publication number: 20240362828
Abstract: The present disclosure relates to a video generation method and apparatus, a device, and a storage medium.
Type: Application
Filed: April 24, 2022
Publication date: October 31, 2024
Inventors: Miao HUA, Bingchuan LI
-
Publication number: 20240362829
Abstract: The present disclosure relates to systems, methods, and computer-readable media for utilizing a presentation enhancement system to provide enhancements for hybrid meetings (e.g., meetings where shared content is provided to multiple displays). For example, the presentation enhancement system generates dynamic digital content in response to detecting physical interactions of a presenting user with shared digital content on a display device. Further, the presentation enhancement system augments the shared digital content provided to a remote display device with the dynamic digital content. In addition to providing augmented shared digital content to remote display devices, the presentation enhancement system can perform several additional actions on the shared digital content based on the detected physical interactions.
Type: Application
Filed: July 8, 2024
Publication date: October 31, 2024
Inventor: Mac Donald Scarpino JOHNSTON
-
Publication number: 20240362830
Abstract: A computer-implemented method includes receiving, by a computing device, a particular textual description of a scene. The method also includes applying a neural network for text-to-image generation to generate an output image rendition of the scene, the neural network having been trained to cause two image renditions associated with a same textual description to attract each other and two image renditions associated with different textual descriptions to repel each other based on mutual information between a plurality of corresponding pairs, wherein the plurality of corresponding pairs comprise an image-to-image pair and a text-to-image pair. The method further includes predicting the output image rendition of the scene.
Type: Application
Filed: July 11, 2024
Publication date: October 31, 2024
Inventors: Han Zhang, Jing Yu Koh, Jason Michael Baldridge, Yinfei Yang, Honglak Lee
-
Publication number: 20240362831
Abstract: Provided is an information-processing device including: a CPU; and a memory storing instructions for causing the information-processing device, when executed by the CPU, to: output an intermediate heatmap for input of an input image by using at least one of a plurality of machine learning models; and generate a heatmap based on an attribute of the input image, which is provided independently of the input image, and the intermediate heatmaps.
Type: Application
Filed: September 30, 2021
Publication date: October 31, 2024
Inventors: Hiya ROY, Mitsuru NAKAZAWA, Bjorn STENGER
-
Publication number: 20240362832
Abstract: A refuse vehicle can include a camera and one or more processing circuits. The one or more processing circuits can detect a movement of the refuse vehicle, receive image data that includes a waste receptacle, determine a position of the waste receptacle relative to the refuse vehicle based on the image data, and generate a user interface that includes a visual indication of the position of the waste receptacle relative to the refuse vehicle.
Type: Application
Filed: April 25, 2024
Publication date: October 31, 2024
Applicant: Oshkosh Corporation
Inventors: Leo Van Kampen, Vince Schad, Eric Codega, Jill King, Brian Brost
-
Publication number: 20240362833
Abstract: A computer-implemented method for providing a three-dimensional (3D) results data set includes: acquiring projection maps of an object under examination which are captured from various projection directions by a medical X-ray device; providing an initial projection matrix based on a static model of the X-ray device; providing a further projection matrix by applying a trained function to input data, wherein the input data is based on the initial projection matrix and the projection maps, wherein at least one parameter of the trained function is adapted based on an image quality metric and/or a consistency metric, and wherein the further projection matrix is provided as output data of the trained function; and providing the 3D results data set through reconstruction from the projection maps by the further projection matrix.
Type: Application
Filed: April 15, 2024
Publication date: October 31, 2024
Inventors: Oliver Hornung, Michael Manhart, Markus Kowarschik, Manuela Meier
-
Publication number: 20240362834
Abstract: A center phase (image reconstruction phase) of reconstruction can be appropriately set even in a case in which motion correction is applied to reconstruct an image, to ensure a good image quality, and to support optimum phase search work by a doctor, an examination technician, or the like (hereinafter, referred to as an operator). An arithmetic operation unit that performs image reconstruction in an X-ray CT apparatus includes a motion correction reconstruction unit that detects motion information of a subject and performs motion correction reconstruction, and an image quality score calculation unit that calculates an image quality score for evaluating an image quality in one or a plurality of motion phases with respect to an image reconstructed by the motion correction reconstruction unit. An interface that can perform a phase search while comparing motion correction using the displayed image quality score is provided.
Type: Application
Filed: April 23, 2024
Publication date: October 31, 2024
Inventors: Yusuke Tetsumura, Ryota Kohara
-
Publication number: 20240362835
Abstract: Systems and methods for reconstruction for a medical imaging system using a quasi-Newton method. An unrolled iterative reconstruction process is used to reconstruct an image from the scan data. The unrolled iterative reconstruction process includes a plurality of cascades that include at least a data-consistency step and a regularization step. The data-consistency step is modified based at least in part on information of already calculated gradients of one or more previous cascades of the plurality of cascades using a quasi-Newton computation.
Type: Application
Filed: July 25, 2023
Publication date: October 31, 2024
Inventors: Simon Arberet, Marcel Dominik Nickel
-
Publication number: 20240362836Abstract: A method for gap filling geographic information service data includes determining a bounding region at or near an initial alignment. The method also includes determining the initial alignment within the bounding region. A corridor buffer is generated at or near the initial alignment, and within the bounding region. Cost layer data is processed. Incompleteness of a number of polygon-bounded areas is determined based on the cost layer data. Partial completeness of the polygon-bounded areas and completeness of the polygon-bounded areas are also determined based on the cost layer data. A synthetic completeness of the number of polygon-bounded areas is generated based on the incompleteness, the partial completeness, and the completeness of the polygon-bounded areas. A rasterized cost map including the completeness and the synthetic completeness of the polygon-bounded areas is stored by the processor(s) in the memory(ies).Type: ApplicationFiled: April 24, 2024Publication date: October 31, 2024Inventors: Mohammed Ismaeel BABUR, Yury SOKOLOV, Dihan YANG
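The patent does not define numeric criteria for the three completeness states; as an illustration only, the sketch below classifies polygon-bounded areas by an assumed data-coverage fraction and gap-fills the non-complete ones with a synthetic value derived from the complete ones. Thresholds and the fill rule are assumptions.

```python
import numpy as np

def classify_completeness(coverage, full=0.95, none=0.05):
    """Label each polygon-bounded area by its data-coverage fraction.
    Threshold values are assumptions; the patent does not specify them."""
    labels = []
    for c in coverage:
        if c >= full:
            labels.append("complete")
        elif c <= none:
            labels.append("incomplete")
        else:
            labels.append("partial")
    return labels

def synthesize(coverage, labels):
    """Gap-fill: assign incomplete/partial areas a synthetic completeness,
    here (by assumption) the mean over the complete areas."""
    complete_vals = [c for c, l in zip(coverage, labels) if l == "complete"]
    fill = float(np.mean(complete_vals)) if complete_vals else 0.0
    return [c if l == "complete" else fill for c, l in zip(coverage, labels)]

cov = [1.0, 0.5, 0.0, 0.98]
labels = classify_completeness(cov)
synthetic = synthesize(cov, labels)
```

The resulting per-polygon values would then be rasterized into the cost map alongside the measured completeness.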
-
Publication number: 20240362837Abstract: A method for optimizing a search space associated with alignment curves includes determining, at a processor, a bounding region and an initial alignment. The initial alignment is within the bounding region. A corridor buffer that is associated with the initial alignment is generated at the processor. A rasterized cost map is generated at the processor based on the corridor buffer.Type: ApplicationFiled: April 24, 2024Publication date: October 31, 2024Inventors: Mohammed Ismaeel BABUR, Yury SOKOLOV, Dihan YANG
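No implementation is given in the abstract; a minimal sketch of the corridor-buffer idea, assuming a straight initial alignment on a small grid and an assumed prohibitive cost outside the buffer, is:

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the segment a-b."""
    ab, ap = b - a, p - a
    t = np.clip(ap @ ab / (ab @ ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def rasterized_cost_map(alignment, shape, buffer_width, high_cost=100.0):
    """Cells within buffer_width of the initial alignment get cost equal to
    their distance from it; cells outside the corridor buffer get a
    prohibitive cost, shrinking the search space for alignment curves."""
    h, w = shape
    cost = np.full(shape, high_cost)
    segments = list(zip(alignment[:-1], alignment[1:]))
    for r in range(h):
        for c in range(w):
            d = min(point_segment_distance(np.array([c, r], float), a, b)
                    for a, b in segments)
            if d <= buffer_width:
                cost[r, c] = d
    return cost

# Assumed example: straight initial alignment across a 10x10 bounding region.
align = [np.array([0.0, 5.0]), np.array([9.0, 5.0])]
cost = rasterized_cost_map(align, (10, 10), buffer_width=2.0)
```

A path-search over this map would then only explore cells inside the corridor, which is the search-space optimization the method targets.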
-
Publication number: 20240362838Abstract: A method for managing drawing data including raster data includes: vectorizing, by a computer, the drawing data to generate vector data; generating, by the computer, dimension line data associated with first and second nodes included in the vector data; performing, by the computer, character recognition on a corresponding region in the drawing data corresponding to a close region close to a dimension line represented by the dimension line data; storing, by the computer, a character obtained by the character recognition as a dimension value in association with the dimension line data; and calculating, by the computer, a number of pixels per 1 mm on a drawing represented by the drawing data using the dimension value.Type: ApplicationFiled: July 5, 2024Publication date: October 31, 2024Inventor: Yushiro KATO
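The final scale computation in this method reduces to simple arithmetic; a minimal sketch, with assumed node coordinates and dimension value, is:

```python
import math

def pixels_per_mm(node_a, node_b, dimension_value_mm):
    """Scale of the raster drawing: pixel length of the dimension line
    (between its two vector-data nodes) divided by the dimension value
    recognized next to that line, in millimetres."""
    dx = node_b[0] - node_a[0]
    dy = node_b[1] - node_a[1]
    pixel_length = math.hypot(dx, dy)
    return pixel_length / dimension_value_mm

# Assumed example: a dimension line 600 px long whose recognized label is "150".
scale = pixels_per_mm((100, 200), (700, 200), 150.0)
```

With `scale` known, any other pixel distance on the drawing can be converted back to millimetres.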
-
Publication number: 20240362839Abstract: Described herein is a computer implemented method. The method includes receiving, via an input device, first user input drawing an input shape and generating, based on the first user input, original drawing data that includes an ordered set of points that define the input shape. The original drawing data is processed to generate an input vector, which also includes an ordered set of points. The input shape is then classified as a first template shape by processing the input vector using a machine learning model. A new shape is then generated based on the first template shape and the original drawing data.Type: ApplicationFiled: July 11, 2024Publication date: October 31, 2024Applicant: Canva Pty LtdInventors: Kevin Andrew WU WON, Kerry Jayne HALUPKA, Rowan James KATEKAR
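The preprocessing step (ordered points to fixed-length input vector) can be sketched without the trained model; below, a nearest-template lookup stands in for the machine learning classifier, which is an assumption, as are the resampling count and the normalization.

```python
import numpy as np

def resample(points, n=32):
    """Resample an ordered stroke to n points evenly spaced by arc length,
    producing the fixed-length input vector."""
    pts = np.asarray(points, float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    u = np.linspace(0.0, t[-1], n)
    return np.column_stack([np.interp(u, t, pts[:, 0]),
                            np.interp(u, t, pts[:, 1])])

def normalize(v):
    """Translate to the centroid and scale to unit size (rough invariances)."""
    v = v - v.mean(axis=0)
    scale = np.abs(v).max()
    return v / (scale if scale > 0 else 1.0)

def classify(stroke, templates):
    """Nearest-template stand-in for the ML classifier: pick the template
    shape with minimum point-wise distance to the input vector."""
    x = normalize(resample(stroke))
    dists = {name: float(np.linalg.norm(x - normalize(resample(t))))
             for name, t in templates.items()}
    return min(dists, key=dists.get)

templates = {
    "line": [(0, 0), (10, 0)],
    "rectangle": [(0, 0), (10, 0), (10, 6), (0, 6), (0, 0)],
}
label = classify([(1, 1), (9, 1.2)], templates)  # a slightly wobbly stroke
```

The new shape would then be produced by fitting the matched template's ideal geometry to the extent of the original drawing data.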
-
Publication number: 20240362840Abstract: The Gantt chart generation device 1 includes: an acquisition unit to acquire plan information showing processes for carrying out a desired plan, resources, and a schedule for processing the processes, and change information showing quantities changing in time series in accordance with the processing; a Gantt chart generation unit to generate a Gantt chart showing, for each of the resources, the processes to be processed by the resources, and the schedule based on the plan information; a change image generation unit to generate change images showing, for each of the resources, the quantities changing in time series in accordance with the processing based on the change information; a superimposition unit to superimpose the change images on corresponding areas to the resources and the times on the Gantt chart; and a display control unit to display, on a display device, the Gantt chart on which the change images are superimposed.Type: ApplicationFiled: July 3, 2024Publication date: October 31, 2024Applicant: Mitsubishi Electric CorporationInventor: Ryo MATSUMURA
-
Publication number: 20240362841Abstract: A method for gap filling of geographic information service (“GIS”) data includes receiving a rasterized cost map including a first alignment curve between two locations. The method also includes determining pixelization factoring data comprising a pixelization factoring value and pixelization factoring metadata. The method further includes generating a pixelized cost map based on the pixelization factoring data and the rasterized cost map. The method still further includes generating a second alignment curve between the two locations in accordance with the pixelized cost map.Type: ApplicationFiled: April 24, 2024Publication date: October 31, 2024Inventors: Mohammed Ismaeel BABUR, Yury SOKOLOV, Dihan YANG
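The abstract does not state how the pixelization factoring value is applied; one plausible reading, sketched here under the assumption that it is an integer downsampling factor with block-mean aggregation, is:

```python
import numpy as np

def pixelize(cost_map, factor, agg=np.mean):
    """Coarsen a rasterized cost map by an integer pixelization factor,
    aggregating each factor x factor block (mean aggregation is an
    assumption; the metadata could select a different reduction)."""
    h, w = cost_map.shape
    assert h % factor == 0 and w % factor == 0, "factor must tile the map"
    blocks = cost_map.reshape(h // factor, factor, w // factor, factor)
    return agg(blocks, axis=(1, 3))

cost = np.arange(16, dtype=float).reshape(4, 4)
coarse = pixelize(cost, 2)
```

Searching the second alignment curve on `coarse` instead of `cost` trades spatial resolution for a much smaller search space.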
-
Publication number: 20240362842Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for utilizing a diffusion prior neural network for text-guided digital image editing. For example, in one or more embodiments the disclosed systems utilize a text-image encoder to generate a base image embedding from the base digital image and an edit text embedding from edit text. Moreover, the disclosed systems utilize a diffusion prior neural network to generate a text-image embedding. In particular, the disclosed systems inject the base image embedding at a conceptual editing step of the diffusion prior neural network and condition a set of steps of the diffusion prior neural network after the conceptual editing step utilizing the edit text embedding. Furthermore, the disclosed systems utilize a diffusion neural network to create a modified digital image from the text-edited image embedding and the base image embedding.Type: ApplicationFiled: April 27, 2023Publication date: October 31, 2024Inventors: Hareesh Ravi, Sachin Kelkar, Midhun Harikumar, Ajinkya Gorakhnath Kale
-
Publication number: 20240362843Abstract: A method for providing an animated art experience to a user includes a user device receiving an image of an art piece selected by the user. The user device obtains information about the art piece. The user device presents a three-dimensional (3D) animated image that corresponds with the selected art image. Upon receiving an action by the user caused by a rotation or tilt of the user device, the user device provides a depth perspective view in correlation with the action and associated viewer angle of the art image such that further portions of the art image become visible. A background and a foreground of the image appear to move naturally as actions and associated viewer angles change.Type: ApplicationFiled: July 9, 2024Publication date: October 31, 2024Inventors: Thomas E. Holdman, James Gaskin, Brandon Crapo, Ho Yun Ki, Alan Knight
-
Publication number: 20240362844Abstract: This application relates to a facial expression processing method performed by a computer device, the method including: determining a skeletal structure of a three-dimensional facial model of a target style; skinning the skeletal structure to generate a virtual object face; binding an action unit in a style expression template matching the target style to one or more corresponding bones in the skeletal structure; associating at least one expression control with at least one action unit in the style expression template; in response to an adjustment operation on the at least one expression control, driving bones bound to the at least one action unit to move based on information of the at least one action unit associated with the expression control; and controlling the virtual object face to generate an expression conforming to the target style in accordance with the movement of the bones triggered by the adjustment operation.Type: ApplicationFiled: July 8, 2024Publication date: October 31, 2024Inventor: Kai LIU
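The binding between controls, action units, and bones can be sketched as a simple data model; the action-unit names, displacement vectors, and linear scaling below are all assumptions for illustration, not the patent's actual rig.

```python
import numpy as np

# Assumed data model: each action unit maps its bound bones to a displacement
# at full activation; a control value in [0, 1] scales those displacements.
ACTION_UNITS = {
    "smile": {"mouth_corner_l": np.array([0.3, 0.5, 0.0]),
              "mouth_corner_r": np.array([-0.3, 0.5, 0.0])},
    "brow_raise": {"brow_l": np.array([0.0, 0.8, 0.0])},
}

def drive_bones(rest_pose, control_values):
    """Move the bones bound to each action unit according to the expression
    control values, leaving unbound bones at their rest positions."""
    pose = {bone: p.copy() for bone, p in rest_pose.items()}
    for unit, value in control_values.items():
        for bone, delta in ACTION_UNITS[unit].items():
            pose[bone] += value * delta
    return pose

rest = {"mouth_corner_l": np.zeros(3), "mouth_corner_r": np.zeros(3),
        "brow_l": np.zeros(3)}
posed = drive_bones(rest, {"smile": 0.5, "brow_raise": 1.0})
```

The skinned virtual object face would then deform with the moved bones to produce the style-conforming expression.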
-
Publication number: 20240362845Abstract: This application provides a method and an apparatus for rendering an interaction picture, a device, a computer-readable storage medium, and a computer program product. The method includes: obtaining interaction data of at least two real characters performing interaction with each other in a target scene and motion-capture data of a first character in the at least two real characters, where the motion-capture data is configured for driving a virtual character corresponding to the first character to perform an interactive action consistent with that of the first character; and performing picture rendering based on the interaction data and the motion-capture data, to obtain a picture in which the virtual character performs interaction with a second character in the target scene, where the second character is a character other than the first character in the at least two real characters.Type: ApplicationFiled: July 11, 2024Publication date: October 31, 2024Inventor: Rui LI
-
Publication number: 20240362846Abstract: A pose data file may represent, for each frame of a reference frame sequence, a plurality of two-dimensional skeleton projections on a virtual spherical surface, each of which, for a particular frame, corresponds to a two-dimensional reference pose image of a three-dimensional skeleton of a first human from a viewing angle. A real-time two-dimensional skeleton detector module detects a two-dimensional skeleton of a second human in each received test frame of a test frame sequence. A pose matching module selects a particular two-dimensional skeleton projection of the first human with the minimum mathematical distance from the two-dimensional skeleton of the second human in the current test frame to match the current pose of the second human in the current test frame with a corresponding reference pose image of the pose data file. The particular two-dimensional skeleton projection represents the corresponding reference pose image at the viewing angle.Type: ApplicationFiled: July 10, 2024Publication date: October 31, 2024Inventors: Arash Azhand, Punjal Agarwal
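The minimum-distance selection over projections can be sketched directly; the joint count, the centering on joint 0 for translation invariance, and the mean-joint-distance metric below are assumptions, since the abstract does not fix a specific distance.

```python
import numpy as np

def skeleton_distance(s1, s2):
    """Mean Euclidean distance between corresponding 2D joints, after
    centering both skeletons on joint 0 (assumed root) so that the match
    is insensitive to where the person stands in the frame."""
    a = s1 - s1[0]
    b = s2 - s2[0]
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

def match_pose(test_skeleton, reference_projections):
    """Select the 2D skeleton projection (i.e. viewing angle) of the first
    human with minimum distance to the skeleton detected in the current
    test frame of the second human."""
    dists = [skeleton_distance(test_skeleton, p) for p in reference_projections]
    return int(np.argmin(dists))

# Toy 3-joint skeletons: two reference projections, one matching the test pose.
ref = [np.array([[0, 0], [1, 0], [2, 0]], float),   # horizontal pose
       np.array([[0, 0], [0, 1], [0, 2]], float)]   # vertical pose
test = np.array([[5, 5], [5, 6], [5, 7]], float)    # vertical pose, translated
best = match_pose(test, ref)
```

The index returned identifies both the matched reference pose image and the viewing angle it was projected from.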
-
Publication number: 20240362847Abstract: According to an aspect, there is provided a computer-implemented method of operating a visual data delivery system.Type: ApplicationFiled: July 15, 2022Publication date: October 31, 2024Inventors: Frank Michael Weber, Alexandra Groth, Harald Greiner, Jonathan Thomas Sutton, Balasundar Raju, Shyam Bharat, Peter Bingley
-
Publication number: 20240362848Abstract: Provided is a 3D rendering accelerator based on a DNN whose weights are trained using a plurality of 2D photos obtained by imaging the same object from several directions, the accelerator then performing 3D rendering using the trained DNN. The 3D rendering accelerator includes a VPC configured to create an image plane for a 3D rendering target from a position and a direction of an observer, divide the image plane into a plurality of tile units, and then perform brain imitation visual recognition on the divided tile-unit images to reduce the DNN inference range; an HNE including a plurality of NEs having different operational efficiencies and configured to accelerate DNN inference by dividing and allocating tasks; and a DNNA core configured to generate selection information for allocating each task to one of the plurality of NEs based on a sparsity ratio.Type: ApplicationFiled: April 8, 2024Publication date: October 31, 2024Applicant: Korea Advanced Institute of Science and TechnologyInventors: Hoi Jun YOO, Dong hyeon HAN
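The sparsity-ratio dispatch can be illustrated in a few lines; the threshold value and the two-engine model below are assumptions, since the abstract only says allocation is "based on a sparsity ratio".

```python
import numpy as np

def sparsity_ratio(tile_activations):
    """Fraction of zero values in a tile's DNN workload."""
    t = np.asarray(tile_activations)
    return float(np.mean(t == 0))

def allocate_engine(tile_activations, threshold=0.5):
    """Generate selection information for a tile: route it to a
    sparse-optimized or a dense neural engine (NE) by its sparsity ratio.
    The 0.5 threshold is an assumption for illustration."""
    if sparsity_ratio(tile_activations) > threshold:
        return "sparse_NE"
    return "dense_NE"

# Toy tile-unit workloads from a divided image plane.
dense_tile = np.ones((4, 4))
sparse_tile = np.zeros((4, 4)); sparse_tile[0, 0] = 1.0
engine_dense = allocate_engine(dense_tile)
engine_sparse = allocate_engine(sparse_tile)
```

Routing mostly-zero tiles to an engine that skips zero operands is what lets NEs with different operational efficiencies jointly accelerate the inference.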