Patents Assigned to NAVER Corporation
-
Patent number: 12214495
Abstract: The present invention relates to methods for learning a robot task and robot systems using the same. A robot system may include a robot configured to perform a task and detect force information related to the task; a haptic controller configured to be manipulatable for teaching the robot and to output haptic feedback based on the force information while teaching of the task to the robot is performed; a sensor configured to sense first information related to a task environment of the robot and second information related to a driving state of the robot while the teaching is performed by the haptic controller outputting the haptic feedback; and a computer configured to learn a motion of the robot related to the task, using the first information and the second information, such that the robot autonomously performs the task.
Type: Grant
Filed: July 22, 2021
Date of Patent: February 4, 2025
Assignee: NAVER Corporation
Inventors: Keunjun Choi, Hyoshin Myung, Mihyun Ko, Jinwon Pyo, Jaesung Oh, Taeyoon Lee, Changwoo Park, Hoji Lee, SungPyo Lee
-
Patent number: 12216667
Abstract: A method for ranking a set of objects includes: receiving the set of objects to rank, a relevance score for each object, and a set of objective functions; based on the relevance scores for the objects, defining a decision space having n decision variables using a polytope, where n is the number of objects to rank and the vertices of the polytope represent permutations of the exposures provided to the objects in the set by corresponding rankings; determining a Pareto set for the set of objective functions; based on a Pareto-optimal point in the Pareto set, determining a distribution over rankings for the objects in the set using the decision space, where a proportion is associated with each ranking in the distribution; selecting a sequence of rankings for the objects in the set based on the distribution, in accordance with the proportions; and outputting the selected sequence of rankings of the objects.
Type: Grant
Filed: December 19, 2022
Date of Patent: February 4, 2025
Assignee: NAVER CORPORATION
Inventors: Till Kletti, Jean-Michel Renders
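The final steps of the abstract turn a distribution over rankings into a concrete sequence delivered in accordance with the proportions. A minimal sketch of that step follows; the function name and the largest-remainder apportionment scheme are illustrative assumptions, not the patented method:

```python
def ranking_sequence(distribution, length):
    """Turn a distribution over rankings into a delivery sequence whose
    empirical frequencies match the given proportions as closely as possible.
    `distribution` maps each ranking (a tuple of object ids) to its proportion."""
    # Largest-remainder apportionment of `length` slots among the rankings.
    quotas = {r: p * length for r, p in distribution.items()}
    counts = {r: int(q) for r, q in quotas.items()}
    leftover = length - sum(counts.values())
    # Give the remaining slots to the rankings with the largest remainders.
    for r in sorted(quotas, key=lambda r: quotas[r] - counts[r], reverse=True)[:leftover]:
        counts[r] += 1
    sequence = []
    for ranking, count in counts.items():
        sequence.extend([ranking] * count)
    return sequence
```

Over a long enough sequence, each object's cumulative exposure then approximates the exposure prescribed by the chosen Pareto-optimal point.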
-
Patent number: 12217484
Abstract: A method of jointly training a transferable feature extractor network, an ordinal regressor network, and an order classifier network in an ordinal regression unsupervised domain adaptation network includes: providing a source of labeled source images and unlabeled target images; outputting image representations from the transferable feature extractor network by performing a minimax optimization procedure on the labeled source images and unlabeled target images; training a domain discriminator network, using the image representations from the transferable feature extractor network, to distinguish between source images and target images; training the ordinal regressor network using a full set of source images from the transferable feature extractor network; and training the order classifier network using a full set of source images from the transferable feature extractor network.
Type: Grant
Filed: May 5, 2022
Date of Patent: February 4, 2025
Assignee: Naver Corporation
Inventors: Boris Chidlovskii, Assem Sadek
-
Patent number: 12219203
Abstract: A method for implementing a seamless switching mode between channels in a multi-stream live transmission environment includes receiving, in a single stream, a composite image in which the images of multiple channels are synthesized into a single image; composing a view mode having a layout including the images of the multiple channels using the composite image; and changing the layout of the view mode using the composite image.
Type: Grant
Filed: December 21, 2021
Date of Patent: February 4, 2025
Assignee: NAVER CORPORATION
Inventors: Joon-kee Chang, SungHo Kim, Hyesung No, Yun Ho Jung, Jinhoon Kim, Yeong Jin Jeong, Jeongki Kim, In Cheol Kang, Jonghyeok Lee, JaeChul Ahn, SungTaek Cho
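Because every channel already arrives inside the one composite frame, switching layouts reduces to remapping crop rectangles rather than opening a new stream. A minimal sketch, assuming the composite is a uniform grid of equally sized tiles (the grid layout and function names are illustrative, not taken from the patent):

```python
def channel_region(channel_index, grid_cols, tile_w, tile_h):
    """Crop rectangle (x, y, w, h) of one channel inside a composite frame
    laid out as a uniform grid of equally sized tiles."""
    row, col = divmod(channel_index, grid_cols)
    return (col * tile_w, row * tile_h, tile_w, tile_h)

def compose_view(layout, grid_cols, tile_w, tile_h):
    """Map each slot of a view-mode layout (a list of channel indices) to its
    source crop in the composite. Changing the view mode only changes this
    mapping; the single incoming stream is untouched, so switching is seamless."""
    return [channel_region(c, grid_cols, tile_w, tile_h) for c in layout]
```

For example, with a 2-column composite of 640x360 tiles, `compose_view([0, 3], 2, 640, 360)` yields the crops for a two-channel layout showing channels 0 and 3.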
-
Publication number: 20250037296
Abstract: A training system includes: a pose module configured to receive an image captured using a camera and determine a 6 degree of freedom (DoF) pose of the camera; and a training module configured to input training images to the pose module from a training dataset and train a segmentation module of the pose module by alternating between: updating a target distribution, with the parameters of the segmentation module fixed, by minimizing a first loss determined based on a label distribution computed from prototype distributions that the pose module determines for input training images; updating the parameters of the segmentation module, with the target distribution fixed, by minimizing a second loss that is different from the first loss; and updating the parameters of the segmentation module based on a ranking loss using a global representation.
Type: Application
Filed: December 15, 2023
Publication date: January 30, 2025
Applicants: NAVER CORPORATION, NAVER LABS CORPORATION
Inventors: Maxime PIETRANTONI, Gabriela Csurka Khedari, Martin Humenberger, Torsten Sattler
-
Publication number: 20250028338
Abstract: A method of monitoring robot operation according to some example embodiments includes receiving robot information from each of the robots through communication with the robots located in a building where the robots provide services, and displaying, on a display unit, a monitoring screen configured to monitor the operational situation of the robots located in the building. The monitoring screen includes a building graphic object representing the building and a state graphic object positioned around the building graphic object, the state graphic object representing state information on a robot located on each of a plurality of floors included in the building, with the state information on the robot and the visual appearance of the state graphic object determined based on the robot information received from each of the robots.
Type: Application
Filed: October 7, 2024
Publication date: January 23, 2025
Applicant: NAVER CORPORATION
Inventors: Hak Seung CHOI, Ka Hyeon KIM
-
Patent number: 12197883
Abstract: Provided is a method for augmented reality-based image translation performed by one or more processors, which includes storing a plurality of frames representing a video captured by a camera, extracting a first frame that satisfies a predetermined criterion from the stored plurality of frames, translating a first language sentence (or group of words) included in the first frame into a second language sentence (or group of words), determining a translation region including the second language sentence (or group of words) included in the first frame, and rendering the translation region in a second frame.
Type: Grant
Filed: October 13, 2022
Date of Patent: January 14, 2025
Assignee: NAVER CORPORATION
Inventors: Chankyu Choi, Seungjae Kim, Juhyeok Mun, Jeong Tae Lee, Youngbin Ro
-
METHOD, COMPUTER SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE RECORD MEDIUM FOR LEARNING ROBOT SKILL
Publication number: 20250010466
Abstract: Example embodiments are directed to a robotic painting method performed using a computer system. The computer system comprises at least one processor configured to execute computer-readable instructions included in a memory. The robotic painting method includes collecting, by the at least one processor, stroke-level action data for a painting action; and learning, by the at least one processor, a stroke-level robotic painting skill using the stroke-level action data.
Type: Application
Filed: September 13, 2024
Publication date: January 9, 2025
Applicant: NAVER CORPORATION
Inventors: Taeyoon LEE, Choongin LEE, Changwoo PARK, Keunjun CHOI, Donghyeon SEONG, Gyeongyeon CHOI, Younghyo PARK, Seunghun JEON
-
Publication number: 20240419977
Abstract: Computer-implemented methods are included for training an autonomous machine to perform a target operation in a target environment. The methods include receiving a natural language description of the target operation and a natural language description of the target environment. The methods further include generating a prompt, such as a reward and/or goal position signature, by combining the natural language description of a target task or goal and the natural language description of the target environment. The methods then generate a reward or goal position function by prompting a large language model with the generated prompt. The methods further include computing a state description using a model of the target environment, and training a policy for the autonomous machine to perform the target task or goal using the generated function and state description.
Type: Application
Filed: April 19, 2024
Publication date: December 19, 2024
Applicants: Naver Corporation, Naver Labs Corporation
Inventors: Julien Perez, Denys Proux, Claude Roux, Michaël Niemaz
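The prompt-generation step combines the two natural-language descriptions with a function signature before querying the language model. A minimal sketch of such an assembly; the function name, prompt wording, and signature are illustrative assumptions, not text from the application:

```python
def build_reward_prompt(task_desc, env_desc, signature):
    """Combine natural-language descriptions of the target task and the
    target environment into one prompt asking a large language model for a
    reward (or goal-position) function with the given signature."""
    return (
        f"Environment: {env_desc}\n"
        f"Task: {task_desc}\n"
        f"Write a Python reward function with the signature:\n{signature}\n"
        "Return only the function definition."
    )
```

The string returned by this function is what would be sent to the language model; the model's reply (a reward or goal-position function) is then used to train the policy.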
-
Publication number: 20240408756
Abstract: A learning system for a navigating robot includes: a navigation module including a first policy configured to determine actions for moving the navigating robot and navigating from a starting location to an ending location based on images from a camera of the navigating robot, and a second policy configured to, based on a representation of the environment generated by the navigating robot, determine actions for moving the navigating robot from waypoint locations between the starting location and the ending location to a plurality of subgoal locations without any images from the camera; and a representation module configured to selectively learn the representation during movement via the first policy, based on the representation at previous times, images from the camera, and actions determined by the first policy at previous times, and to provide the representation to the second policy.
Type: Application
Filed: April 12, 2024
Publication date: December 12, 2024
Applicant: Naver Corporation
Inventors: Guillaume BONO, Leonid Antsfeld, Gianluca Monaci, Assem Sadek, Christian Wolf
-
Publication number: 20240412041
Abstract: A method and system for automatically building a three-dimensional pose dataset for use in text-to-pose retrieval or text-to-pose generation for a class of poses includes (a) inputting three-dimensional keypoint coordinates of class-centric poses; (b) extracting, from the inputted three-dimensional keypoint coordinates, posecodes, each posecode representing a relation between a specific set of joints; (c) selecting extracted posecodes to obtain a discriminative description; (d) aggregating selected posecodes that share semantic information; (e) converting the aggregated posecodes by obtaining individual descriptions, plugging each posecode's information into one template sentence picked at random from a set of possible templates for the given posecode category; (f) concatenating the individual descriptions in random order, using random pre-defined transitions; and (g) mapping the concatenated individual descriptions to the class-centric poses to create the three-dimensional pose dataset.
Type: Application
Filed: October 3, 2023
Publication date: December 12, 2024
Applicants: Naver Corporation, Naver Labs Corporation
Inventors: Ginger Delmas, Philippe Weinzaepfel, Thomas Lucas, Francesc Moreno-noguer, Gregory Rogez
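Steps (e) and (f) of the abstract, template filling followed by randomized concatenation, can be sketched as below. The function name, data shapes, and template contents are illustrative assumptions; the actual posecode categories and templates are defined in the application:

```python
import random

def describe_posecodes(posecodes, templates, transitions, rng=None):
    """Steps (e)-(f): plug each posecode's information into one template
    sentence picked at random from its category, then concatenate the
    individual descriptions in random order with random transitions.
    `posecodes` is a list of (category, info-dict) pairs; `templates` maps
    each category to its candidate template sentences."""
    rng = rng or random.Random()
    pieces = [rng.choice(templates[category]).format(**info)
              for category, info in posecodes]
    rng.shuffle(pieces)
    text = pieces[0]
    for piece in pieces[1:]:
        text += rng.choice(transitions) + piece
    return text
```

Seeding the `rng` makes a description reproducible, while different seeds yield the linguistic variety the dataset-building pipeline relies on.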
-
Publication number: 20240412726
Abstract: A method and system for text-based pose editing to generate a new pose from an initial pose and user-generated text includes a user input device for inputting the initial pose and the user-generated text; a variational auto-encoder configured to receive the initial pose; a text conditioning pipeline configured to receive the user-generated text; a fusing module configured to produce parameters for a prior Gaussian distribution Np; a pose decoder configured to sample the Gaussian distribution Np and generate, therefrom, the new pose; and an output device to communicate the generated new pose to a user. The variational auto-encoder and the text conditioning pipeline are trained using a PoseFix dataset, wherein the PoseFix dataset includes triplets having a source pose, a target pose, and a text modifier.
Type: Application
Filed: December 9, 2023
Publication date: December 12, 2024
Applicants: Naver Corporation, Naver Labs Corporation
Inventors: Ginger Delmas, Philippe Weinzaepfel, Francesc Moreno-noguer, Gregory Rogez
-
Patent number: 12165401
Abstract: A computer-implemented method of recognition of actions performed by individuals includes: by one or more processors, obtaining images including at least a portion of an individual; by the one or more processors, based on the images, generating implicit representations of poses of the individual in the images; and by the one or more processors, determining an action performed by the individual and captured in the images by classifying the implicit representations of the poses of the individual.
Type: Grant
Filed: March 13, 2023
Date of Patent: December 10, 2024
Assignee: NAVER CORPORATION
Inventors: Philippe Weinzaepfel, Gregory Rogez
-
Patent number: 12164559
Abstract: An image retrieval system includes: a neural network (NN) module configured to generate local features based on an input image; an iterative attention module configured to, via T iterations, generate an ordered set of super features in the input image based on the local features, where T is an integer greater than 1; and a selection module configured to select a second image from a plurality of images in an image database based on the second image having a second ordered set of super features that most closely matches the ordered set of super features in the input image, where the super features in the set of super features do not include redundant local features of the input image.
Type: Grant
Filed: January 21, 2022
Date of Patent: December 10, 2024
Assignee: NAVER CORPORATION
Inventors: Philippe Weinzaepfel, Thomas Lucas, Diane Larlus, Ioannis Kalantidis
-
Publication number: 20240399572
Abstract: A training system for a robot includes: a task solver module including primitive modules and a policy and configured to determine how to actuate the robot to solve input tasks; and a training module configured to: pre-train ones of the primitive modules for different actions, respectively, of the robot and the policy of the task solver module using asymmetric self play and a set of training tasks; and after the pre-training, train the task solver module using others of the primitive modules and tasks that are not included in the set of training tasks.
Type: Application
Filed: June 5, 2023
Publication date: December 5, 2024
Applicants: NAVER CORPORATION, NAVER LABS CORPORATION
Inventors: Paul JANSONNIE, Bingbing WU, Julien PEREZ
-
Patent number: 12158593
Abstract: A vehicular three-dimensional head-up display includes a display device functioning as a light source and a combiner that simultaneously reflects light from the light source toward the driver's seat and transmits light from outside the vehicle, and may include an optical configuration in which the image created by the light from the light source is displayed as a virtual image laid in three-dimensional perspective so as to correspond to the ground in front of the vehicle.
Type: Grant
Filed: March 12, 2021
Date of Patent: December 3, 2024
Assignee: NAVER CORPORATION
Inventors: Eunyoung Jeong, Jae Won Cha
-
Patent number: 12159459
Abstract: A method for extracting a fingerprint of a video includes calculating 2D discrete cosine transform (DCT) coefficients from each of a plurality of frames of the video; extracting, from the 2D DCT coefficients, a coefficient having a basis satisfying at least one of up-down symmetry or left-right symmetry; and calculating a fingerprint of the video based on the extracted coefficient.
Type: Grant
Filed: August 17, 2022
Date of Patent: December 3, 2024
Assignee: NAVER CORPORATION
Inventors: Seong Hyeon Shin, Jisu Jeon, Dae Hwang Kim, Jongeun Park, Seung-bin Kim
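A DCT-II basis vector of even frequency is symmetric about the center of its axis, so one way to read the symmetry condition is to keep coefficients (u, v) with even u (up-down symmetric basis) or even v (left-right symmetric basis). The sketch below makes that assumption; the unscaled DCT, the even-index test, and the per-frame averaging are illustrative choices, not the claimed method:

```python
import math

def dct2_coeff(frame, u, v):
    """2D DCT-II coefficient (u, v) of a frame (a list of pixel rows),
    computed directly from the definition. Normalization factors are
    omitted, which is harmless for a fingerprint as long as the same
    convention is used for every video."""
    n, m = len(frame), len(frame[0])
    return sum(
        frame[x][y]
        * math.cos(math.pi * u * (2 * x + 1) / (2 * n))
        * math.cos(math.pi * v * (2 * y + 1) / (2 * m))
        for x in range(n) for y in range(m)
    )

def fingerprint(frames, coords):
    """Keep only the coefficients whose basis is up-down symmetric (even u)
    or left-right symmetric (even v), and average each one over the frames
    to form the video fingerprint vector."""
    symmetric = [(u, v) for u, v in coords if u % 2 == 0 or v % 2 == 0]
    return [sum(dct2_coeff(f, u, v) for f in frames) / len(frames)
            for u, v in symmetric]
```

Restricting to symmetric bases makes the resulting fingerprint insensitive to vertical or horizontal flips of the frames, a common transformation in re-uploaded video.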
-
Publication number: 20240393791
Abstract: A navigating robot includes: a feature module configured to detect objects in images captured by a camera of the navigating robot while the navigating robot is in a real-world space; a mapping module configured to generate a map including the locations of objects captured in the images and at least one attribute of the objects; and a navigation module trained to find and navigate to N different objects in the real-world space in a predetermined order by: when the location of the next one of the N different objects in the predetermined order is stored in the map, navigating toward that object in the real-world space; and when the location of the next object in the predetermined order is not stored in the map, navigating to a portion of the map not yet captured in any images.
Type: Application
Filed: May 26, 2023
Publication date: November 28, 2024
Applicants: NAVER CORPORATION, NAVER LABS CORPORATION
Inventors: Assem SADEK, Guillaume BONO, Christian WOLF, Boris CHIDLOVSKII, Atilla BASKURT
-
Patent number: 12151374
Abstract: A system includes: a first module configured to, based on a set of target robot joint angles, generate a first estimated end effector pose and a first estimated latent variable that is a first intermediate variable between the set of target robot joint angles and the first estimated end effector pose; a second module configured to determine a set of estimated robot joint angles based on the first estimated latent variable and a target end effector pose; a third module configured to determine joint probabilities for the robot based on the first estimated latent variable and the target end effector pose; and a fourth module configured to, based on the set of estimated robot joint angles, determine a second estimated end effector pose and a second estimated latent variable that is a second intermediate variable between the set of estimated robot joint angles and the second estimated end effector pose.
Type: Grant
Filed: September 28, 2021
Date of Patent: November 26, 2024
Assignee: NAVER CORPORATION
Inventors: Julien Perez, Seungsu Kim
-
Patent number: 12154227
Abstract: A system includes: a feature module configured to generate a feature map based on a single image of a human taken from a point of view (POV), using features of the human visible in the image and non-visible features of the human; a pixel features module configured to generate pixel features based on the feature map and a target POV; a feature mesh module configured to generate a feature mesh for the human based on the feature map; a geometry module configured to generate voxel features based on the feature mesh and generate a density value based on the voxel and pixel features; a texture module configured to generate RGB colors for pixels based on the density value and the pixel features; and a rendering module configured to generate a three-dimensional rendering of the human from the target POV based on the RGB colors and the density value.
Type: Grant
Filed: December 16, 2022
Date of Patent: November 26, 2024
Assignees: NAVER CORPORATION, SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
Inventors: Hongsuk Choi, Gyeongsik Moon, Vincent Leroy, KyoungMu Lee, Grégory Rogez