Patents Assigned to NAVER LABS CORPORATION
-
Patent number: 12175692
Abstract: A localization method includes scanning a surrounding space using a laser output from a reference region; processing spatial information about the surrounding space based on a reflection signal of the laser; extracting feature vectors reflecting the spatial information using a deep learning network that takes space vectors containing the spatial information as input data; and comparing the feature vectors with preset reference map data to estimate location information about the reference region.
Type: Grant
Filed: July 25, 2022
Date of Patent: December 24, 2024
Assignee: NAVER LABS CORPORATION
Inventors: Min Young Chang, Su Yong Yeon, Soo Hyun Ryu, Dong Hwan Lee
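The final matching step described above — comparing extracted feature vectors against preset reference map data to estimate a location — can be sketched as a nearest-neighbor search. This is an illustrative simplification, not the patented method: the deep-learning feature extractor is replaced by precomputed vectors, and the function names and cosine metric are assumptions for the sketch.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def estimate_location(query_feature, reference_map):
    # reference_map: list of (location, feature_vector) pairs.
    # Return the location whose stored feature best matches the query.
    best_loc, best_sim = None, -1.0
    for location, feature in reference_map:
        sim = cosine_similarity(query_feature, feature)
        if sim > best_sim:
            best_loc, best_sim = location, sim
    return best_loc

reference_map = [
    ((0.0, 0.0), [1.0, 0.0, 0.0]),
    ((5.0, 2.0), [0.0, 1.0, 0.0]),
    ((9.0, 4.0), [0.7, 0.7, 0.0]),
]
print(estimate_location([0.1, 0.9, 0.1], reference_map))  # → (5.0, 2.0)
```

A real system would compare against thousands of map entries with an approximate-nearest-neighbor index rather than a linear scan.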
-
Publication number: 20240419977
Abstract: Computer-implemented methods are included for training an autonomous machine to perform a target operation in a target environment. The methods include receiving a natural language description of the target operation and a natural language description of the target environment. The methods further include generating a prompt, such as a reward and/or goal position signature, by combining the natural language description of a target task or goal with the natural language description of the target environment. The methods then generate a reward or goal position function by prompting a large language model with the generated prompt. The methods further include computing a state description using a model of the target environment, and training a policy for the autonomous machine to perform the target task or goal using the generated function and state description.
Type: Application
Filed: April 19, 2024
Publication date: December 19, 2024
Applicants: Naver Corporation, Naver Labs Corporation
Inventors: Julien Perez, Denys Proux, Claude Roux, Michaël Niemaz
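The prompt-then-generate pipeline above can be sketched as follows. This is a hedged illustration, not the patented implementation: the prompt wording, function names, and the `fake_llm` stand-in (which returns a fixed reward definition instead of calling a real large language model) are all assumptions.

```python
def build_reward_prompt(task_description, environment_description):
    # Combine the two natural-language descriptions into a single
    # reward-signature prompt for a large language model.
    return (
        "Write a Python reward function for the following task.\n"
        f"Task: {task_description}\n"
        f"Environment: {environment_description}\n"
        "Signature: def reward(state) -> float"
    )

def generate_reward_function(task, env, llm):
    # 'llm' is any callable mapping a prompt string to generated code.
    prompt = build_reward_prompt(task, env)
    source = llm(prompt)
    namespace = {}
    exec(source, namespace)  # materialize the generated function
    return namespace["reward"]

# Stand-in for a real LLM: returns a fixed reward definition.
fake_llm = lambda prompt: (
    "def reward(state):\n    return -abs(state['x'] - state['goal_x'])"
)
reward = generate_reward_function("reach the goal", "1-D track", fake_llm)
print(reward({"x": 3.0, "goal_x": 5.0}))  # → -2.0
```

The generated `reward` callable would then be plugged into an ordinary reinforcement-learning training loop as the reward signal.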
-
Publication number: 20240412726
Abstract: A method and system for text-based pose editing to generate a new pose from an initial pose and user-generated text includes a user input device for inputting the initial pose and the user-generated text; a variational auto-encoder configured to receive the initial pose; a text conditioning pipeline configured to receive the user-generated text; a fusing module configured to produce parameters for a prior Gaussian distribution Np; a pose decoder configured to sample the Gaussian distribution Np and generate, therefrom, the new pose; and an output device to communicate the generated new pose to a user. The variational auto-encoder and the text conditioning pipeline are trained using a PoseFix dataset, wherein the PoseFix dataset includes triplets having a source pose, a target pose, and a text modifier.
Type: Application
Filed: December 9, 2023
Publication date: December 12, 2024
Applicants: Naver Corporation, Naver Labs Corporation
Inventors: Ginger Delmas, Philippe Weinzaepfel, Francesc Moreno-noguer, Gregory Rogez
-
Publication number: 20240412041
Abstract: A method and system for automatically building a three-dimensional pose dataset for use in text-to-pose retrieval or text-to-pose generation for a class of poses includes (a) inputting three-dimensional keypoint coordinates of class-centric poses; (b) extracting, from the inputted three-dimensional keypoint coordinates of class-centric poses, posecodes, the posecodes representing a relation between a specific set of joints; (c) selecting extracted posecodes to obtain a discriminative description; (d) aggregating selected posecodes that share semantic information; (e) converting the aggregated posecodes by electronically obtaining individual descriptions, plugging each posecode's information into one template sentence picked at random from a set of possible templates for the given posecode category; (f) concatenating the individual descriptions in random order, using random pre-defined transitions; and (g) mapping the concatenated individual descriptions to the class-centric poses to create the three-dimensional pose dataset.
Type: Application
Filed: October 3, 2023
Publication date: December 12, 2024
Applicants: Naver Corporation, Naver Labs Corporation
Inventors: Ginger Delmas, Philippe Weinzaepfel, Thomas Lucas, Francesc Moreno-noguer, Gregory Rogez
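Steps (e) and (f) above — filling random templates per posecode and concatenating the results with random transitions — can be sketched as below. The template strings, categories, and transition set are invented placeholders for illustration, not the patented template bank.

```python
import random

# Hypothetical templates per posecode category (not the patented set).
TEMPLATES = {
    "angle": ["the {joints} are bent", "there is a bend at the {joints}"],
    "distance": ["the {joints} are close together", "the {joints} nearly touch"],
}
TRANSITIONS = [" ", ", and ", ". Also, "]

def describe_pose(posecodes, rng):
    # Each posecode: (category, joints) after selection and aggregation.
    sentences = []
    for category, joints in posecodes:
        template = rng.choice(TEMPLATES[category])  # random template pick
        sentences.append(template.format(joints=joints))
    rng.shuffle(sentences)                          # random order
    out = sentences[0]
    for s in sentences[1:]:
        out += rng.choice(TRANSITIONS) + s          # random transition
    return out

rng = random.Random(0)
print(describe_pose([("angle", "knees"), ("distance", "hands")], rng))
```

Seeding the random generator makes the description deterministic for testing while still exercising the template and ordering randomness the abstract describes.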
-
Publication number: 20240399572
Abstract: A training system for a robot includes: a task solver module including primitive modules and a policy and configured to determine how to actuate the robot to solve input tasks; and a training module configured to: pre-train some of the primitive modules for different actions, respectively, of the robot and the policy of the task solver module using asymmetric self-play and a set of training tasks; and, after the pre-training, train the task solver module using others of the primitive modules and tasks that are not included in the set of training tasks.
Type: Application
Filed: June 5, 2023
Publication date: December 5, 2024
Applicants: NAVER CORPORATION, NAVER LABS CORPORATION
Inventors: Paul JANSONNIE, Bingbing WU, Julien PEREZ
-
Publication number: 20240404104
Abstract: A computer-implemented method and system using an object-agnostic model for predicting a pose of an object in an image receives a query image having a target object therein; receives a set of reference images of the target object from different viewpoints; encodes, using a vision transformer, the received query image and the received set of reference images to generate a set of token features for the received query image and a set of token features for the received set of reference images; extracts, using a transformer decoder, information from the set of token features for the encoded reference images with respect to the set of token features for the received query image; processes, using a prediction head, the combined set of token features to generate a 2D-3D mapping and a confidence map of the query image; and processes the 2D-3D mapping and confidence map to determine the pose of the target object in the query image.
Type: Application
Filed: April 25, 2024
Publication date: December 5, 2024
Applicant: Naver Labs Corporation
Inventors: Jérome Revaud, Romain Brégier, Yohann Cabon, Philippe Weinzaepfel, JongMin Lee
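The final stage above processes a 2D-3D mapping together with a confidence map. One common way to use such a confidence map (an assumption here, not necessarily the patented step) is to keep only high-confidence 2D-3D correspondences before solving for the pose:

```python
def filter_correspondences(mapping, confidence, threshold=0.5):
    # mapping: list of ((u, v), (x, y, z)) 2D-3D pairs predicted for
    # query-image pixels; confidence: parallel list of scores in [0, 1].
    # Keep only pairs confident enough to feed a pose solver (e.g. PnP).
    return [pair for pair, c in zip(mapping, confidence) if c >= threshold]

pairs = [((10, 20), (0.1, 0.2, 0.3)), ((30, 40), (0.4, 0.5, 0.6))]
print(filter_correspondences(pairs, [0.9, 0.2]))  # keeps only the first pair
```

The surviving correspondences would then go to a standard pose solver such as PnP with RANSAC; the threshold value is a hypothetical parameter.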
-
Publication number: 20240393791
Abstract: A navigating robot includes: a feature module configured to detect objects in images captured by a camera of the navigating robot while the navigating robot is in a real-world space; a mapping module configured to generate a map including locations of objects captured in the images and at least one attribute of the objects; and a navigation module trained to find and navigate to N different objects in the real-world space in a predetermined order by: when a location of the next one of the N different objects in the predetermined order is stored in the map, navigating toward that object in the real-world space; and when the location of the next object in the predetermined order is not stored in the map, navigating to a portion of the map not yet captured in any images.
Type: Application
Filed: May 26, 2023
Publication date: November 28, 2024
Applicants: NAVER CORPORATION, NAVER LABS CORPORATION
Inventors: Assem SADEK, Guillaume BONO, Christian WOLF, Boris CHIDLOVSKII, Atilla BASKURT
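The navigation module's two-branch decision rule — go to the object if its location is already mapped, otherwise explore unmapped space — can be sketched as below. The data structures and function name are assumptions for illustration; the actual module is a trained policy, not this hand-written rule.

```python
def next_goal(target_object, object_map, unexplored_cells):
    # object_map: {object_name: (x, y)} locations already observed.
    # unexplored_cells: map cells not yet captured in any image.
    if target_object in object_map:
        return ("navigate", object_map[target_object])
    return ("explore", unexplored_cells[0])  # head to unmapped space

object_map = {"chair": (2, 3)}
print(next_goal("chair", object_map, [(9, 9)]))  # → ('navigate', (2, 3))
print(next_goal("plant", object_map, [(9, 9)]))  # → ('explore', (9, 9))
```

Calling this rule once per target, in the predetermined order, reproduces the multi-object search behavior the abstract describes.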
-
Publication number: 20240346785
Abstract: Provided is a method for providing augmented content, wherein whether a user terminal is located in a preset unit space is determined on the basis of the location of the user terminal, and, if the user terminal is determined to be located in the unit space, an image captured by a camera is displayed through an augmented reality (AR) view, augmented with content associated with the unit space.
Type: Application
Filed: June 26, 2024
Publication date: October 17, 2024
Applicant: NAVER LABS CORPORATION
Inventors: Jeanie JUNG, Sangwook KIM, Kihyun YU, Yeowon YOON
-
Patent number: 12117845
Abstract: A method of operating a cloud server to control a robot providing a service in connection with a service application includes receiving an instruction to provide the service from the service application; generating, based on the received instruction, a plurality of sub-instructions by specifying the received instruction; and transmitting each sub-instruction, from among the plurality of sub-instructions, to the robot, wherein the transmitted sub-instructions are instructions for controlling the robot.
Type: Grant
Filed: October 29, 2021
Date of Patent: October 15, 2024
Assignee: NAVER LABS CORPORATION
Inventors: Kay Park, Younghwan Yoon, Seung In Cha, Wooyoung Choi
-
Patent number: 12105508
Abstract: A robot control method includes receiving, from a robot located in a space, location information of the robot; specifying, based on the received location information, at least one external camera from among a plurality of external cameras located in the space, the at least one external camera being located at a first location in the space, the first location corresponding to the received location information; receiving a robot image and at least one space image, the robot image being an image obtained by a robot camera included in the robot and the at least one space image including at least one image obtained by the specified at least one external camera; and outputting the received robot image and at least one space image to a display unit.
Type: Grant
Filed: July 29, 2021
Date of Patent: October 1, 2024
Assignee: NAVER LABS CORPORATION
Inventors: Kahyeon Kim, Seijin Cha
-
Publication number: 20240316764
Abstract: In a mechanical system with an object having a set of two or more contacts, methods are disclosed for determining object motion by computing the tangential generalized reaction impulse and force. The methods include receiving input values that include an initial generalized position and an initial generalized velocity of the object at time t, and computing a generalized force. The methods further include computing, respectively for the reaction impulse and force: a generalized reaction impulse and force, and a tangential generalized reaction impulse and force. The methods use the tangential generalized reaction impulse and force to compute, respectively, an end-of-interval generalized position and velocity, and a generalized acceleration, which are output for use by an application.
Type: Application
Filed: February 27, 2024
Publication date: September 26, 2024
Applicants: Naver Corporation, Naver Labs Corporation
Inventor: Christopher DANCE
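The last step above — using an impulse and a force to produce an end-of-interval generalized position and velocity — can be illustrated for a single scalar degree of freedom with a semi-implicit Euler step. This is textbook impulse-based integration under stated assumptions (constant force over the interval, unit-free scalars), not the patented multi-contact computation.

```python
def integrate_step(q, v, force, impulse, mass, dt):
    # End-of-interval velocity from the applied force and reaction
    # impulse (momentum change = force*dt + impulse), then the
    # end-of-interval position via semi-implicit Euler.
    v_end = v + (force * dt + impulse) / mass
    q_end = q + v_end * dt
    return q_end, v_end

# A unit mass moving at 1.0 receives a reaction impulse of -2.0
# over a 0.5 s interval with no other force applied.
print(integrate_step(0.0, 1.0, 0.0, -2.0, 1.0, 0.5))  # → (-0.5, -1.0)
```

In the patented setting the quantities are generalized (vector-valued) and the impulse comes from solving the multi-contact reaction problem; the update structure is the same.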
-
Publication number: 20240310858
Abstract: A method for controlling robots and facilities is performed by one or more processors and includes controlling a first robot to move to a waiting area of a target space, acquiring information that the first robot is located in the waiting area of the target space, determining whether there is a robot in the target space, controlling, in response to determining that there is no robot in the target space, a door of the target space to be opened, controlling, in response to determining that the door is open, the first robot to leave the waiting area and enter the target space, and controlling the first robot to provide a serving service in the target space.
Type: Application
Filed: May 22, 2024
Publication date: September 19, 2024
Applicant: NAVER LABS CORPORATION
Inventors: Maria NOH, Yesook IM, Myeongsoo SHIN, Seoktae KIM
-
Publication number: 20240302846
Abstract: A method for managing navigating robots in emergency situations is performed by one or more processors and includes receiving a map of a target building, in which a no-stop area for emergency situations is defined in the map, receiving location information from the navigating robot, receiving an emergency situation occurrence signal, and, in response to determining that the navigating robot is located in the no-stop area, controlling the navigating robot to move outside the no-stop area.
Type: Application
Filed: May 21, 2024
Publication date: September 12, 2024
Applicant: NAVER LABS CORPORATION
Inventors: Kahyeon KIM, Seoktae KIM
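The core check above — is the robot inside a no-stop area, and if so command it to move out — can be sketched with axis-aligned rectangular areas. The rectangle representation, function name, and fixed exit point are simplifying assumptions; a real map would use arbitrary polygons and a planned exit route.

```python
def emergency_action(robot_pos, no_stop_areas, exit_point):
    # no_stop_areas: list of ((x0, y0), (x1, y1)) rectangles from the map.
    # If the robot is inside any no-stop area during an emergency,
    # command it to move to a point outside that area; otherwise hold.
    for (x0, y0), (x1, y1) in no_stop_areas:
        if x0 <= robot_pos[0] <= x1 and y0 <= robot_pos[1] <= y1:
            return ("move", exit_point)
    return ("hold", robot_pos)

areas = [((0, 0), (2, 2))]
print(emergency_action((1, 1), areas, (5, 5)))  # → ('move', (5, 5))
print(emergency_action((3, 3), areas, (5, 5)))  # → ('hold', (3, 3))
```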
-
Patent number: 12072701
Abstract: A robot control method includes receiving an image of a space where a robot drives; receiving driving information related to driving of the robot, from the robot; controlling a display unit to output a graphic object corresponding to the driving information, together with the image; and determining a visual characteristic of the graphic object based on the driving information, wherein the controlling includes controlling the display unit to output the graphic object in an overlapping manner with the image.
Type: Grant
Filed: August 27, 2021
Date of Patent: August 27, 2024
Assignee: NAVER LABS CORPORATION
Inventors: Kahyeon Kim, Seijin Cha
-
Publication number: 20240257504
Abstract: A semantic image segmentation (SIS) system includes: a semantic segmentation module trained to segment objects belonging to predetermined classes in input images using training images; and a learning module configured to selectively update at least one parameter of each of a localizer module, an encoder module, and a decoder module of the semantic segmentation module to identify objects having a new class that is not one of the predetermined classes: based on an image-level class for a learning image including an object having the new class; and without a pixel-level annotation for the learning image.
Type: Application
Filed: January 30, 2023
Publication date: August 1, 2024
Applicants: NAVER CORPORATION, NAVER LABS CORPORATION
Inventors: Subhankar ROY, Riccardo VOLPI, Diane LARLUS, Gabriela CSURKA KHEDARI
-
Patent number: 12046003
Abstract: A visual localization method includes generating a first feature point map using first map data calculated on the basis of a first viewpoint; generating a second feature point map using second map data calculated on the basis of a second viewpoint different from the first viewpoint; constructing map data for localization in which the first and second feature point maps are integrated with each other, by compensating for the position difference between a point of the first feature point map and a point of the second feature point map; and performing visual localization using the map data for localization.
Type: Grant
Filed: November 12, 2021
Date of Patent: July 23, 2024
Assignee: NAVER LABS CORPORATION
Inventors: Deok Hwa Kim, Dong Hwan Lee, Woo Young Kim, Tae Jae Lee
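The integration step above — compensating for the position difference between corresponding points before merging the two feature point maps — can be sketched in 2D with a mean translation offset. This is a deliberate simplification: a real alignment would estimate a full rigid transform (rotation plus translation, e.g. via ICP or a Procrustes fit), and all names here are illustrative.

```python
def mean_offset(points_a, points_b):
    # Average displacement from points_a to points_b over
    # corresponding point pairs of the two maps.
    n = len(points_a)
    dx = sum(b[0] - a[0] for a, b in zip(points_a, points_b)) / n
    dy = sum(b[1] - a[1] for a, b in zip(points_a, points_b)) / n
    return dx, dy

def merge_maps(map_a, map_b, correspondences_a, correspondences_b):
    # Shift map_b by the estimated offset so its points land in
    # map_a's frame, then combine the two maps.
    dx, dy = mean_offset(correspondences_b, correspondences_a)
    shifted_b = [(x + dx, y + dy) for x, y in map_b]
    return map_a + shifted_b

# map_b is offset by (+1, +1) relative to map_a, as witnessed by
# the corresponding point pair (0, 0) <-> (1, 1).
print(merge_maps([(0.0, 0.0)], [(2.0, 2.0)],
                 [(0.0, 0.0)], [(1.0, 1.0)]))  # → [(0.0, 0.0), (1.0, 1.0)]
```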
-
Publication number: 20240242357
Abstract: A semantic image segmentation (SIS) system includes: a neural network module trained to generate semantic image segmentation maps based on input images, the semantic image segmentation maps grouping pixels of the input images under respective class labels; a minimum entropy module configured to, at a first time, determine first minimum entropies of pixels, respectively, in the semantic image segmentation maps generated for a received image and the N images received before the received image, where N is an integer greater than or equal to 1; and an adaptation module configured to selectively adjust parameters of the neural network module based on optimization of a loss function that minimizes the first minimum entropies.
Type: Application
Filed: January 16, 2023
Publication date: July 18, 2024
Applicants: NAVER CORPORATION, NAVER LABS CORPORATION
Inventors: Riccardo VOLPI, Gabriela CSURKA KHEDARI, Diane LARLUS
-
Patent number: 12025708
Abstract: The present invention relates to a method and system for generating a three-dimensional (3D) map. The 3D map generating method includes: collecting spatial data and image data on a specific space using a lidar sensor and a camera sensor that are each provided on a collecting device; estimating a movement trajectory of the lidar sensor using the spatial data; and generating 3D structure data on the specific space based on structure-from-motion (SFM), using the image data and the movement trajectory as input data.
Type: Grant
Filed: March 2, 2022
Date of Patent: July 2, 2024
Assignee: NAVER LABS CORPORATION
Inventors: Yong Han Lee, Su Yong Yeon, Soo Hyun Ryu, Deok Hwa Kim, Dong Hwan Lee
-
Patent number: 12017531
Abstract: A travel information notification method includes generating an augmented reality object composed of a plurality of lines that indicate a virtual trajectory corresponding to at least a portion of a predicted travel path of a vehicle; providing the generated augmented reality object on a head-up display of the vehicle so that the plurality of lines display the virtual trajectory in association with the road on which the vehicle is traveling; and providing travel information about the vehicle by controlling at least one among the spacing, colors, or shapes of the plurality of lines, displayed as the augmented reality object on the head-up display, according to travel conditions of the vehicle.
Type: Grant
Filed: April 21, 2022
Date of Patent: June 25, 2024
Assignee: NAVER LABS CORPORATION
Inventor: Hakseung Choi
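The control step above maps travel conditions to the spacing and color of the trajectory lines. A minimal sketch of such a mapping follows; the specific spacing formula, speed-limit rule, and color choices are invented for illustration and are not taken from the patent.

```python
def trajectory_line_style(speed_kmh, speed_limit_kmh):
    # Hypothetical mapping from travel conditions to line appearance:
    # spacing widens with speed; color warns when over the limit.
    spacing = 1.0 + speed_kmh / 50.0
    color = "red" if speed_kmh > speed_limit_kmh else "green"
    return spacing, color

print(trajectory_line_style(50.0, 60.0))  # → (2.0, 'green')
print(trajectory_line_style(70.0, 60.0)[1])  # → red
```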
-
Patent number: 12001205
Abstract: A robot remote control method and system capable of remotely controlling navigation of a robot. The robot remote control method includes outputting both a map image and an ambient image to a display, the map image including location information corresponding to the ambient image, the ambient image being of the surroundings of the robot and received from a camera at the robot; generating a control command for controlling the robot in response to an input to the display during the outputting; and causing the robot to drive according to the control command by transmitting the control command to the robot.
Type: Grant
Filed: July 21, 2021
Date of Patent: June 4, 2024
Assignee: NAVER LABS Corporation
Inventor: Kahyeon Kim