Patents Assigned to NAVER LABS CORPORATION
  • Patent number: 11620755
    Abstract: A trajectory tracking method for a mobile electronic device may include tracking the trajectory of the electronic device by using, as camera pose information, results of pose estimation based on odometry and results of pose estimation based on visual localization (VL).
    Type: Grant
    Filed: November 12, 2020
    Date of Patent: April 4, 2023
    Assignee: NAVER LABS CORPORATION
    Inventors: Donghwan Lee, Deokhwa Kim
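The abstract above (patent 11620755) describes fusing relative odometry with absolute visual-localization (VL) fixes. Below is a minimal Python sketch of that fusion idea, not taken from the patent: it propagates a 2D pose with odometry increments and nudges it toward a VL fix when one arrives. Function names and the blend factor are illustrative assumptions.

```python
# Minimal sketch: propagate a 2D pose with odometry, correct with absolute VL fixes.
import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float
    y: float
    theta: float  # heading in radians

def apply_odometry(pose: Pose2D, dx: float, dy: float, dtheta: float) -> Pose2D:
    """Compose a relative odometry increment (in the device frame) onto the current pose."""
    c, s = math.cos(pose.theta), math.sin(pose.theta)
    return Pose2D(
        x=pose.x + c * dx - s * dy,
        y=pose.y + s * dx + c * dy,
        theta=pose.theta + dtheta,
    )

def correct_with_vl(pose: Pose2D, vl_pose: Pose2D, alpha: float = 0.5) -> Pose2D:
    """Blend the odometry-propagated pose toward an absolute VL fix (alpha = trust in VL)."""
    dtheta = math.atan2(math.sin(vl_pose.theta - pose.theta),
                        math.cos(vl_pose.theta - pose.theta))
    return Pose2D(
        x=pose.x + alpha * (vl_pose.x - pose.x),
        y=pose.y + alpha * (vl_pose.y - pose.y),
        theta=pose.theta + alpha * dtheta,
    )

# Example: propagate with odometry each step, correct whenever a VL result arrives.
trajectory = [Pose2D(0.0, 0.0, 0.0)]
for dx, dy, dth, vl_fix in [(1.0, 0.0, 0.1, None),
                            (1.0, 0.0, 0.0, Pose2D(2.1, 0.2, 0.12))]:
    pose = apply_odometry(trajectory[-1], dx, dy, dth)
    if vl_fix is not None:
        pose = correct_with_vl(pose, vl_fix)
    trajectory.append(pose)
```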
  • Patent number: 11610373
    Abstract: The method of generating three-dimensional model data of an object includes estimating pose data of the object based on first view data of the camera and first relative pose data of the object, and estimating second relative pose data of the object based on the pose data of the object and second view data of the camera.
    Type: Grant
    Filed: November 11, 2020
    Date of Patent: March 21, 2023
    Assignee: NAVER LABS CORPORATION
    Inventors: Yeong Ho Jeong, Dong Cheol Hur, Sang Wook Kim
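Patent 11610373's abstract chains camera view data and relative object poses. A minimal sketch of that chaining with 4x4 homogeneous transforms follows, under the assumption that "view data" can be treated as a camera pose in the world frame; variable names and values are illustrative, not from the patent.

```python
# Minimal sketch: object pose in the world from view 1, then re-expressed relative to view 2.
import numpy as np

def make_pose(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation vector."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed inputs (illustrative values): camera poses in the world frame and the
# object's pose relative to the first camera view.
T_world_cam1 = make_pose(np.eye(3), np.array([0.0, 0.0, 0.0]))
T_world_cam2 = make_pose(np.eye(3), np.array([0.5, 0.0, 0.0]))
T_cam1_obj = make_pose(np.eye(3), np.array([0.0, 0.0, 2.0]))

# Pose data of the object (object in the world frame).
T_world_obj = T_world_cam1 @ T_cam1_obj

# Second relative pose data: the object expressed in the second camera frame.
T_cam2_obj = np.linalg.inv(T_world_cam2) @ T_world_obj
```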
  • Patent number: 11600017
    Abstract: A pose estimation training system includes: a first model configured to generate a first 6 degrees of freedom (DoF) pose of a first camera that captured a first image from a first domain; a second model configured to generate a second 6 DoF pose of a second camera that captured a second image from a second domain, where the second domain is different from the first domain; a discriminator module configured to, based on first and second outputs from encoder modules of the first and second models, generate a discriminator output indicative of whether the first and second images are from the same domain; and a training control module configured to, based on the discriminator output, selectively adjust at least one weight value shared by the first model and the second model.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: March 7, 2023
    Assignees: NAVER CORPORATION, NAVER LABS CORPORATION
    Inventor: Boris Chidlovskii
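Patent 11600017 describes adversarial training across two domains with shared pose-model weights and a domain discriminator. Below is a minimal PyTorch sketch of one such training step, assuming a single shared encoder, a 6-DoF regression head, and an MSE pose loss; the architecture, losses, and weights are assumptions, not the patented system.

```python
# Minimal sketch: supervised pose regression on domain A plus an adversarial term that
# pushes domain-B encoder features to be indistinguishable from domain A.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128), nn.ReLU())  # shared weights
pose_head = nn.Linear(128, 6)                     # 6-DoF pose (translation + rotation params)
discriminator = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

opt_model = torch.optim.Adam(list(encoder.parameters()) + list(pose_head.parameters()), lr=1e-4)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(img_a, pose_a, img_b):
    """img_a/pose_a: pose-labeled images from domain A; img_b: unlabeled images from domain B."""
    feat_a, feat_b = encoder(img_a), encoder(img_b)

    # 1) Train the discriminator to tell the two domains apart from encoder features.
    d_loss = bce(discriminator(feat_a.detach()), torch.ones(len(img_a), 1)) + \
             bce(discriminator(feat_b.detach()), torch.zeros(len(img_b), 1))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) Train the pose model: supervised pose loss on domain A plus an adversarial
    #    term that makes domain-B features look like domain A to the discriminator.
    pose_loss = nn.functional.mse_loss(pose_head(feat_a), pose_a)
    adv_loss = bce(discriminator(feat_b), torch.ones(len(img_b), 1))
    loss = pose_loss + 0.1 * adv_loss
    opt_model.zero_grad(); loss.backward(); opt_model.step()
    return loss.item()
```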
  • Patent number: 11571973
    Abstract: A method for controlling an electric hand truck includes determining, based on a user input on the electric hand truck, whether user manipulation is present; and, if there is no user manipulation, braking an electric motor that drives the wheels of the electric hand truck in a softlock manner in which, instead of power being applied to the electric motor, electrodes of the electric motor are short-circuited.
    Type: Grant
    Filed: February 14, 2020
    Date of Patent: February 7, 2023
    Assignee: NAVER LABS CORPORATION
    Inventors: Jinwon Chung, Donghun Yu, Joonho Seo, Sangok Seok
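Patent 11571973's braking logic reduces to: drive the motor while the user is manipulating the hand truck, otherwise cut power and short the motor electrodes so the motor brakes itself. Below is a minimal sketch of that decision logic; the Motor class is a hypothetical stand-in for a real motor driver, not the patented controller.

```python
# Minimal sketch of the soft-lock decision: powered drive vs. shorted-electrode braking.
from typing import Optional

class Motor:
    def __init__(self) -> None:
        self.powered = False
        self.electrodes_shorted = False

    def drive(self, command: float) -> None:
        """Normal powered operation while the user is pushing or steering."""
        self.powered, self.electrodes_shorted = True, False

    def soft_lock(self) -> None:
        """No power applied; shorted electrodes produce a braking torque."""
        self.powered, self.electrodes_shorted = False, True

def control_step(motor: Motor, user_command: Optional[float]) -> None:
    """user_command is None when no user manipulation is detected."""
    if user_command is None:
        motor.soft_lock()
    else:
        motor.drive(user_command)
```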
  • Patent number: 11575268
    Abstract: A charging pad according to an exemplary embodiment of the present invention may include an input terminal connected to a high-priority charging pad; an output terminal connected to a low-priority charging pad; a charging unit configured to charge a robot positioned on the charging pad in accordance with a preset operating state; and a control unit configured to switch an operating state of the charging unit from an operation stop state to an operation standby state when a status signal for the high-priority charging pad is received from the input terminal. The control unit is configured to output the status signal for the charging pad through the output terminal when occupation of the charging unit by the robot is detected in the operation standby state.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: February 7, 2023
    Assignee: NAVER LABS CORPORATION
    Inventors: Kay Park, Minsu Kim, Jaehun Han, Joonho Seo
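Patent 11575268 describes charging pads chained by priority: each pad moves from a stop state to a standby state when it receives a status signal from the higher-priority pad, and signals the lower-priority pad once a robot occupies it. Below is a minimal state-machine sketch of that behavior; class and method names are illustrative.

```python
# Minimal sketch: daisy-chained charging pads handing standby down the priority chain.
from enum import Enum, auto

class PadState(Enum):
    STOPPED = auto()   # operation stop state
    STANDBY = auto()   # operation standby state
    CHARGING = auto()

class ChargingPad:
    def __init__(self, downstream=None):
        self.state = PadState.STOPPED
        self.downstream = downstream  # lower-priority pad connected to the output terminal

    def on_upstream_status(self) -> None:
        """Status signal received from the higher-priority pad via the input terminal."""
        if self.state is PadState.STOPPED:
            self.state = PadState.STANDBY

    def on_robot_detected(self) -> None:
        """A robot occupies the pad while in standby: start charging and signal downstream."""
        if self.state is PadState.STANDBY:
            self.state = PadState.CHARGING
            if self.downstream is not None:
                self.downstream.on_upstream_status()

# Example: two pads chained by priority.
low = ChargingPad()
high = ChargingPad(downstream=low)
high.on_upstream_status()   # e.g. a signal from an even higher-priority pad or controller
high.on_robot_detected()    # the high pad starts charging and wakes the low-priority pad
```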
  • Publication number: 20230015984
    Abstract: A system for generating whole body poses includes: a body regression module configured to generate a first pose of a body of an animal in an input image by regressing from a stored body anchor pose; a face regression module configured to generate a second pose of a face of the animal in the input image by regressing from a stored face anchor pose; an extremity regression module configured to generate a third pose of an extremity of the animal in the input image by regressing from a stored extremity anchor pose; and a pose module configured to generate a whole body pose of the animal in the input image based on the first pose, the second pose, and the third pose.
    Type: Application
    Filed: September 13, 2022
    Publication date: January 19, 2023
    Applicants: NAVER CORPORATION, NAVER LABS CORPORATION
    Inventors: Philippe WEINZAEPFEL, Romain BREGIER, Hadrien COMBALUZIER, Vincent LEROY, Gregory ROGEZ
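Publication 20230015984 (and the related grant 11494932 below) regresses part poses from stored anchor poses and assembles them into a whole-body pose. Below is a minimal PyTorch sketch of the anchor-plus-offset idea; the joint counts, feature size, and zero-valued anchors are placeholder assumptions, not the actual architecture.

```python
# Minimal sketch: each part head regresses an offset from a stored anchor pose; the
# whole-body pose is assembled from the body, face, and extremity part poses.
import torch
import torch.nn as nn

class PartRegressor(nn.Module):
    def __init__(self, feat_dim: int, anchor: torch.Tensor):
        super().__init__()
        self.register_buffer("anchor", anchor)           # stored anchor pose (n_joints, 3)
        self.head = nn.Linear(feat_dim, anchor.numel())  # regresses an offset from the anchor

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        offset = self.head(features).view(-1, *self.anchor.shape)
        return self.anchor + offset                       # pose = anchor + regressed offset

feat_dim = 256
body = PartRegressor(feat_dim, torch.zeros(24, 3))        # body joints
face = PartRegressor(feat_dim, torch.zeros(10, 3))        # facial keypoints
hand = PartRegressor(feat_dim, torch.zeros(21, 3))        # extremity (e.g. one hand)

features = torch.randn(1, feat_dim)                       # image features from some backbone
whole_body = torch.cat([body(features), face(features), hand(features)], dim=1)
```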
  • Publication number: 20220402122
    Abstract: A robot system includes a selection module configured to select a stored demonstration for a robot from a database of stored demonstrations for different tasks of the robot; an encoder module of an attention model, the encoder module configured to determine a similarity value reflecting a similarity between: a user input demonstration for the robot; and the stored demonstration for the robot; and an indicator module configured to indicate whether the stored demonstration is the same as the user input demonstration and belongs to the same task based on the similarity value.
    Type: Application
    Filed: June 18, 2021
    Publication date: December 22, 2022
    Applicants: NAVER LABS CORPORATION, NAVER CORPORATION
    Inventors: Julien PEREZ, Theo CACHET
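Publication 20220402122 compares a user-input demonstration with stored demonstrations using an attention model. Below is a minimal PyTorch sketch of that comparison with a transformer encoder and a cosine-similarity threshold; the encoder size, pooling, and threshold value are assumptions.

```python
# Minimal sketch: embed two demonstrations with an attention encoder and compare them.
import torch
import torch.nn as nn

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2
)

def embed(demo: torch.Tensor) -> torch.Tensor:
    """demo: (1, timesteps, 64) sequence of state-action features; returns a pooled embedding."""
    return encoder(demo).mean(dim=1)

def same_task(user_demo: torch.Tensor, stored_demo: torch.Tensor, threshold: float = 0.8) -> bool:
    """Indicate whether the stored demonstration appears to belong to the same task."""
    similarity = torch.cosine_similarity(embed(user_demo), embed(stored_demo)).item()
    return similarity >= threshold

# Example with random placeholder demonstrations.
print(same_task(torch.randn(1, 20, 64), torch.randn(1, 20, 64)))
```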
  • Publication number: 20220402125
    Abstract: A method for determining a grasping hand model suitable for grasping an object includes: receiving an image including at least one object; obtaining an object model estimating a pose and shape of the object from the image of the object; selecting a grasp class from a set of grasp classes by means of a neural network trained with a cross-entropy loss, thus obtaining a set of parameters defining a coarse grasping hand model; refining the coarse grasping hand model by minimizing loss functions over the parameters of the hand model to obtain an operable grasping hand model, while minimizing the distance between the fingers of the hand model and the surface of the object and preventing interpenetration; and obtaining a mesh of the hand represented by the refined set of parameters.
    Type: Application
    Filed: June 6, 2022
    Publication date: December 22, 2022
    Applicant: Naver Labs Corporation
    Inventors: Francesc Moreno Noguer, Guillem Alenyà Ribas, Enric Corona Puyane, Albert Pumarola Peris, Grégory Rogez
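Publication 20220402125 selects a grasp class and then refines the coarse hand-model parameters against distance and interpenetration losses. Below is a minimal PyTorch sketch of that two-stage pattern with dummy stand-in losses; nothing here reproduces the actual hand parameterization or loss terms.

```python
# Minimal sketch: classify a grasp class to get coarse parameters, then refine them by
# gradient descent on distance and interpenetration losses (dummy stand-ins here).
import torch
import torch.nn as nn

n_classes, param_dim = 8, 16
classifier = nn.Linear(512, n_classes)               # would be trained with a cross-entropy loss
class_to_params = torch.randn(n_classes, param_dim)  # coarse hand parameters per grasp class

def refine(coarse: torch.Tensor, distance_loss, penetration_loss, steps: int = 50):
    """Gradient-descent refinement of the coarse hand-model parameters."""
    params = coarse.clone().requires_grad_(True)
    opt = torch.optim.Adam([params], lr=1e-2)
    for _ in range(steps):
        loss = distance_loss(params) + penetration_loss(params)
        opt.zero_grad(); loss.backward(); opt.step()
    return params.detach()

# Example with placeholder image features and dummy losses.
image_features = torch.randn(1, 512)
grasp_class = classifier(image_features).argmax(dim=1)
coarse_params = class_to_params[grasp_class].squeeze(0)
refined = refine(coarse_params,
                 distance_loss=lambda p: (p ** 2).mean(),          # stand-in for finger-to-surface distance
                 penetration_loss=lambda p: torch.relu(-p).sum())  # stand-in for interpenetration penalty
```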
  • Publication number: 20220395975
    Abstract: A computer-implemented method for performing few-shot imitation is disclosed. The method comprises: obtaining at least one set of training data, wherein each set of training data is associated with a task and comprises (i) one of samples of rewards and a reward function, (ii) one of samples of state transitions and a transition distribution, and (iii) a set of first demonstrations; training a policy network embodied in an agent using reinforcement learning by inputting at least one set of first demonstrations of the at least one set of training data into the policy network, and by maximizing a risk measure or an average return over the at least one set of first demonstrations of the at least one set of training data based on respective one or more reward functions or respective samples of rewards; obtaining a set of second demonstrations associated with a new task; and inputting the set of second demonstrations and an observation of a state into the trained policy network for performing the new task.
    Type: Application
    Filed: April 8, 2022
    Publication date: December 15, 2022
    Applicants: NAVER CORPORATION, NAVER LABS CORPORATION
    Inventors: Theo CACHET, Christopher DANCE, Julien PEREZ
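Publication 20220395975 trains a policy conditioned on a set of demonstrations, which is then applied to a new task given new demonstrations. Below is a minimal PyTorch sketch of such a demonstration-conditioned policy; the RL training loop (maximizing average return or a risk measure) is omitted, and the shapes and layer choices are assumptions.

```python
# Minimal sketch: a policy that takes a set of demonstrations plus the current state
# observation and outputs an action.
import torch
import torch.nn as nn

class DemoConditionedPolicy(nn.Module):
    def __init__(self, state_dim=8, demo_feat_dim=32, action_dim=4):
        super().__init__()
        self.demo_encoder = nn.GRU(state_dim + action_dim, demo_feat_dim, batch_first=True)
        self.policy = nn.Sequential(nn.Linear(state_dim + demo_feat_dim, 64), nn.ReLU(),
                                    nn.Linear(64, action_dim))

    def forward(self, demos: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # demos: (n_demos, timesteps, state_dim + action_dim); state: (1, state_dim)
        _, h = self.demo_encoder(demos)
        demo_embedding = h[-1].mean(dim=0, keepdim=True)   # pool over the demonstration set
        return self.policy(torch.cat([state, demo_embedding], dim=1))

# At test time, the set of second demonstrations for a new task conditions the trained policy.
policy = DemoConditionedPolicy()
action = policy(torch.randn(3, 20, 12), torch.randn(1, 8))
```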
  • Patent number: 11514645
    Abstract: Electronic devices, and operating methods thereof, that provide visual localization based on outdoor three-dimensional (3D) map information may be provided. Such electronic devices may be configured to acquire two-dimensional (2D) image information about an outdoor environment, generate 3D map information about the outdoor environment based on the 2D image information, and determine a position of a point in the 3D map information corresponding to a query image.
    Type: Grant
    Filed: February 3, 2021
    Date of Patent: November 29, 2022
    Assignees: NAVER CORPORATION, NAVER LABS CORPORATION
    Inventors: Deokhwa Kim, Donghwan Lee
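Patent 11514645 determines the position in a 3D map corresponding to a query image. Below is a minimal sketch of a query-time step using OpenCV's PnP with RANSAC over assumed 2D-3D correspondences; how the map is built and how features are matched are out of scope, and all values are placeholders.

```python
# Minimal sketch: camera pose of a query image from 2D-3D correspondences against a 3D map.
import numpy as np
import cv2

# Assumed inputs: matched 3D map points and their 2D detections in the query image.
map_points_3d = np.random.rand(20, 3).astype(np.float32)
query_points_2d = np.random.rand(20, 2).astype(np.float32)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])              # pinhole camera intrinsics
dist_coeffs = np.zeros((4, 1))               # assume no lens distortion

ok, rvec, tvec, inliers = cv2.solvePnPRansac(map_points_3d, query_points_2d, K, dist_coeffs)
if ok:
    R, _ = cv2.Rodrigues(rvec)               # rotation of the query camera
    camera_position = (-R.T @ tvec).ravel()  # camera center expressed in map coordinates
```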
  • Patent number: 11494932
    Abstract: A system for generating whole body poses includes: a body regression module configured to generate a first pose of a body of an animal in an input image by regressing from a stored body anchor pose; a face regression module configured to generate a second pose of a face of the animal in the input image by regressing from a stored face anchor pose; an extremity regression module configured to generate a third pose of an extremity of the animal in the input image by regressing from a stored extremity anchor pose; and a pose module configured to generate a whole body pose of the animal in the input image based on the first pose, the second pose, and the third pose.
    Type: Grant
    Filed: June 2, 2020
    Date of Patent: November 8, 2022
    Assignees: NAVER CORPORATION, NAVER LABS CORPORATION
    Inventors: Philippe Weinzaepfel, Romain Bregier, Hadrien Combaluzier, Vincent Leroy, Gregory Rogez
  • Patent number: 11482009
    Abstract: A method for generating depth information of a street view image using a two-dimensional (2D) image includes calculating distance information of an object on a 2D map using the 2D map corresponding to a street view image; extracting semantic information on the object from the street view image; and generating depth information of the street view image based on the distance information and the semantic information.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: October 25, 2022
    Assignee: NAVER LABS CORPORATION
    Inventors: Donghwan Lee, Deokhwa Kim
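Patent 11482009 combines map-derived object distances with per-pixel semantic labels to produce depth for a street-view image. Below is a minimal sketch of that combination step; the object ids, distances, and unknown-depth handling are illustrative assumptions.

```python
# Minimal sketch: assign map-derived distances to pixels whose semantic label matches a
# mapped object; everything else stays at 0 (unknown depth).
import numpy as np

def depth_from_map(semantic: np.ndarray, distances_by_object: dict[int, float]) -> np.ndarray:
    """semantic: (H, W) per-pixel object ids extracted from the street-view image.
    distances_by_object: object id -> distance on the 2D map from the capture position."""
    depth = np.zeros(semantic.shape, dtype=np.float32)
    for object_id, distance in distances_by_object.items():
        depth[semantic == object_id] = distance
    return depth

# Example: two mapped buildings at 12 m and 30 m from the camera position on the 2D map.
semantic_map = np.random.randint(0, 3, size=(480, 640))
depth = depth_from_map(semantic_map, {1: 12.0, 2: 30.0})
```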
  • Publication number: 20220305649
    Abstract: A system includes: a first module configured to, based on a set of target robot joint angles, generate a first estimated end effector pose and a first estimated latent variable that is a first intermediate variable between the set of target robot joint angles and the first estimated end effector pose; a second module configured to determine a set of estimated robot joint angles based on the first estimated latent variable and a target end effector pose; a third module configured to determine joint probabilities for the robot based on the first estimated latent variable and the target end effector pose; and a fourth module configured to, based on the set of estimated robot joint angles, determine a second estimated end effector pose and a second estimated latent variable that is a second intermediate variable between the set of estimated robot joint angles and the second estimated end effector pose.
    Type: Application
    Filed: September 28, 2021
    Publication date: September 29, 2022
    Applicants: NAVER CORPORATION, NAVER LABS CORPORATION
    Inventors: Julien PEREZ, Seungsu KIM
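Publication 20220305649 pairs a forward module (joint angles to an estimated end-effector pose plus a latent variable) with an inverse module (latent plus target pose to estimated joint angles). Below is a minimal PyTorch sketch of that pairing; the layer sizes are assumptions, and the joint-probability module is omitted.

```python
# Minimal sketch: forward module produces a latent and an end-effector pose; the inverse
# module maps the latent plus a target pose back to joint angles; a cycle re-checks the pose.
import torch
import torch.nn as nn

n_joints, latent_dim, pose_dim = 7, 16, 6

class ForwardModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.to_latent = nn.Sequential(nn.Linear(n_joints, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.to_pose = nn.Linear(latent_dim, pose_dim)

    def forward(self, joint_angles):
        latent = self.to_latent(joint_angles)
        return self.to_pose(latent), latent          # estimated end-effector pose + latent

class InverseModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim + pose_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_joints))

    def forward(self, latent, target_pose):
        return self.net(torch.cat([latent, target_pose], dim=-1))  # estimated joint angles

forward_model, inverse_model = ForwardModule(), InverseModule()
target_angles, target_pose = torch.randn(1, n_joints), torch.randn(1, pose_dim)
est_pose, latent = forward_model(target_angles)      # first module
est_angles = inverse_model(latent, target_pose)      # second module
est_pose_2, latent_2 = forward_model(est_angles)     # fourth module (consistency check)
```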
  • Patent number: 11454978
    Abstract: A training system for training a trained model for use by a navigating robot to perform visual navigation includes memory including N base virtual training environments, each of the N base virtual training environments including a field of view at a location within an indoor space, where N is an integer greater than 1. A randomization module is configured to generate N varied virtual training environments based on the N base virtual training environments, respectively, by varying at least one characteristic of the respective N base virtual training environments. A training module is configured to train the trained model for use by the navigating robot to perform visual navigation based on a training set including: the N base virtual training environments; and the N varied virtual training environments.
    Type: Grant
    Filed: November 7, 2019
    Date of Patent: September 27, 2022
    Assignees: NAVER CORPORATION, NAVER LABS CORPORATION
    Inventors: Tomi Silander, Michel Aractingi, Christopher Dance, Julien Perez
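Patent 11454978 generates N varied virtual training environments by perturbing characteristics of N base environments and trains on both sets. Below is a minimal sketch of that randomization step; the environment fields and perturbations are illustrative assumptions.

```python
# Minimal sketch: produce a varied copy of each base environment by perturbing a few
# characteristics, then train on base + varied environments together.
import random
from dataclasses import dataclass, replace

@dataclass
class VirtualEnvironment:
    lighting: float          # relative illumination level
    wall_texture: str
    furniture_seed: int

def randomize(env: VirtualEnvironment) -> VirtualEnvironment:
    """Return a varied copy of a base environment with at least one characteristic changed."""
    return replace(
        env,
        lighting=env.lighting * random.uniform(0.5, 1.5),
        wall_texture=random.choice(["brick", "plaster", "wood"]),
        furniture_seed=random.randrange(10_000),
    )

base_envs = [VirtualEnvironment(1.0, "plaster", seed) for seed in range(4)]  # N base environments
varied_envs = [randomize(env) for env in base_envs]                          # N varied environments
training_set = base_envs + varied_envs    # the trained model uses both sets
```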
  • Patent number: 11373096
    Abstract: A method of training a predictor to predict a location of a computing device in an indoor environment includes: receiving training data including strengths of signals received from wireless access points at positions in an indoor environment, where the training data includes: a subset of labeled data including signal strength values and location labels; and a subset of unlabeled data including signal strength values and not including labels indicative of locations; training a variational autoencoder to minimize a reconstruction loss of the signal strength values of the training data, where the variational autoencoder includes encoder neural networks and decoder neural networks; and training a classification neural network to minimize a prediction loss on the labeled data, where the classification neural network generates a predicted location based on a latent variable produced by the encoder neural networks, and where the encoder neural networks and the classification neural network form the predictor.
    Type: Grant
    Filed: June 18, 2020
    Date of Patent: June 28, 2022
    Assignees: NAVER CORPORATION, NAVER LABS CORPORATION
    Inventors: Boris Chidlovskii, Leonid Antsfeld
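Patent 11373096 trains a variational autoencoder on labeled and unlabeled Wi-Fi signal-strength vectors and a classifier on the encoder's latent variable for the labeled subset. Below is a minimal PyTorch sketch of that semi-supervised setup; layer sizes and the unweighted loss sum are assumptions.

```python
# Minimal sketch: VAE reconstruction + KL on all data, classification loss on labeled data only.
import torch
import torch.nn as nn

n_aps, latent_dim, n_locations = 50, 8, 20          # access points, latent size, location classes

encoder = nn.Sequential(nn.Linear(n_aps, 64), nn.ReLU(), nn.Linear(64, 2 * latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_aps))
classifier = nn.Linear(latent_dim, n_locations)

def encode(x):
    mu, logvar = encoder(x).chunk(2, dim=-1)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
    return z, mu, logvar

def loss_fn(x, label=None):
    z, mu, logvar = encode(x)
    recon = nn.functional.mse_loss(decoder(z), x)                         # reconstruction loss
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())         # VAE regularizer
    loss = recon + kl
    if label is not None:                                                 # labeled subset only
        loss = loss + nn.functional.cross_entropy(classifier(z), label)   # prediction loss
    return loss

# Example: one unlabeled and one labeled batch of signal-strength vectors.
loss_unlabeled = loss_fn(torch.randn(32, n_aps))
loss_labeled = loss_fn(torch.randn(32, n_aps), torch.randint(0, n_locations, (32,)))
```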
  • Patent number: 11373329
    Abstract: Provided is a method, performed by a computing device communicating with a server, of generating 3-dimensional (3D) model data. The method includes: capturing, by a camera, a first image of a target object at a first time point and storing first pose data of the camera at the first time point; generating a second image by capturing, by the camera, the target object at a second time point and generating second pose data of the camera at the second time point; calculating a distance between the camera at the second time point and the target object, based on the first pose data and second pose data of the camera; generating pose data of the target object, based on the distance and the second pose data of the camera; and estimating second relative pose data of the target object, based on the second pose data of the camera and the pose data of the target object.
    Type: Grant
    Filed: November 11, 2020
    Date of Patent: June 28, 2022
    Assignee: NAVER LABS CORPORATION
    Inventors: Yeong Ho Jeong, Dong Cheol Hur, Sang Wook Kim
  • Publication number: 20220198225
    Abstract: A method of determining an action of a device for a given situation, implemented by a computer system, includes: for a learning model that learns a distribution of rewards according to the action of the device for the situation using a risk-measure parameter associated with control of the device, selectively setting a value of the risk-measure parameter in accordance with an environment in which the device is controlled; and determining the action of the device for the given situation when controlling the device in the environment, based on the set value of the risk-measure parameter.
    Type: Application
    Filed: November 4, 2021
    Publication date: June 23, 2022
    Applicants: NAVER CORPORATION, NAVER LABS CORPORATION
    Inventors: Jinyoung CHOI, Christopher Roger DANCE, Jung-eun KIM, Seulbin HWANG, Kay PARK
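Publication 20220198225 sets a risk-measure parameter according to the environment and then chooses actions from the learned reward distribution using that parameter. Below is a minimal sketch using CVaR over sampled returns as the risk measure; CVaR is one possible choice of risk measure, and the sample-based representation of the learned distribution is an assumption.

```python
# Minimal sketch: choose an action by a risk measure (CVaR) over a learned reward
# distribution, with the risk level set according to the environment.
import numpy as np

def cvar(samples: np.ndarray, risk_level: float) -> float:
    """Average of the worst `risk_level` fraction of outcomes (risk_level=1.0 -> plain mean)."""
    sorted_samples = np.sort(samples)
    k = max(1, int(round(risk_level * len(sorted_samples))))
    return float(sorted_samples[:k].mean())

def select_action(reward_samples_per_action: np.ndarray, risk_level: float) -> int:
    """reward_samples_per_action: (n_actions, n_samples) drawn from the learned distribution."""
    scores = [cvar(samples, risk_level) for samples in reward_samples_per_action]
    return int(np.argmax(scores))

# Example: a crowded environment gets a cautious setting, an empty one a neutral setting.
samples = np.random.randn(4, 32)          # 4 candidate actions, 32 distribution samples each
cautious_action = select_action(samples, risk_level=0.25)   # risk-averse behavior
neutral_action = select_action(samples, risk_level=1.0)     # risk-neutral (mean) behavior
```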
  • Patent number: D958154
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: July 19, 2022
    Assignee: Naver Labs Corporation
    Inventor: Hakseung Choi
  • Patent number: D960175
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: August 9, 2022
    Assignee: Naver Labs Corporation
    Inventor: Hakseung Choi
  • Patent number: D974436
    Type: Grant
    Filed: June 16, 2021
    Date of Patent: January 3, 2023
    Assignee: Naver Labs Corporation
    Inventors: Min-Kyo Im, Kyumin Ha