Robotics Patents (Class 382/153)
-
Patent number: 11205296
Abstract: Rendering multi-dimensional data in a polyhedron, such as a cube, in a 3D environment allows for the ease of visualization of, and interaction with, the underlying data. The approaches herein allow for 3D manipulation of records within the underlying data by filtering the records across elements of a particular dimension. These records may be filtered by physically grabbing a ‘drawer’—a slice of data within the polyhedron—and removing the drawer to a separate space. That drawer then represents all of the underlying records filtered by that slice, and can be further manipulated by additional filtering, or by merging multiple drawers together.
Type: Grant
Filed: December 20, 2019
Date of Patent: December 21, 2021
Assignee: SAP SE
Inventors: Christian Grail, Joachim Fiess, Tatjana Borovikov, Judith Schneider, Manfred Johann Pauli, Gisbert Loff, Hanswerner Dreissigacker, Klaus Herter, Hans-Juergen Richstein, Ian Robert Taylor
-
Patent number: 11191412
Abstract: Disclosed is control of a floor treatment machine and treating floor surfaces, including a planning mode, allowing at least two zones to be defined on a floor surface to be treated, a first node to be defined in each zone and a connection path from at least one zone to a first node of another zone to be defined. At least one region of the zone edge can be input as a virtual obstacle, a direct connection between the first node of the defined zone and the first node of another zone being interrupted by the input region on the basis of real and virtual obstacles. After the complete treatment of a zone, the virtual obstacle on the zone edge is canceled and a switch to another zone is performed. Thus, zones distributed in any way can be treated in succession without the intervention of an operating person.
Type: Grant
Filed: September 14, 2017
Date of Patent: December 7, 2021
Assignee: Cleanfix Reinigungssysteme AG
Inventors: Pierre Lamon, Roland Flück
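The zone-sequencing idea in this abstract — a connection between zones acts as a virtual obstacle until the current zone is fully treated, then the obstacle is lifted and the machine may switch zones — can be sketched roughly as follows. The `ZonePlanner` class, its method names, and the two-zone example are illustrative assumptions, not the patent's implementation.

```python
# Illustrative sketch: virtual obstacles block zone-to-zone connections
# until a zone is completely treated, after which they are canceled.
class ZonePlanner:
    def __init__(self, zones, connections):
        self.zones = set(zones)                    # zone names
        self.connections = set(connections)        # (zone_a, zone_b) pairs
        self.treated = set()
        self.virtual_obstacles = set(connections)  # initially all blocked

    def finish_zone(self, zone):
        """Mark a zone as fully treated and cancel its virtual obstacles."""
        self.treated.add(zone)
        self.virtual_obstacles = {
            (a, b) for (a, b) in self.virtual_obstacles
            if zone not in (a, b)
        }

    def can_switch(self, src, dst):
        """A switch is allowed only once the connection is unblocked."""
        blocked = ((src, dst) in self.virtual_obstacles
                   or (dst, src) in self.virtual_obstacles)
        return not blocked

planner = ZonePlanner(["A", "B"], [("A", "B")])
blocked_before = planner.can_switch("A", "B")   # still treating zone A
planner.finish_zone("A")
allowed_after = planner.can_switch("A", "B")    # obstacle lifted
```

The obstacle set here plays the role of the input zone-edge regions: while present, it interrupts the direct connection between first nodes; finishing a zone cancels them, mirroring the abstract's treat-then-switch sequencing.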
-
Patent number: 11195297
Abstract: A method and a system for visual localization based on dual dome cameras use two synchronized panoramic video streams output by the dual dome cameras to solve the problem of fewer feature points and tracking failure, thereby achieving stable visual SLAM tracking. The depth information of the scene is restored via the two panoramic video streams based on the principle of triangulation measurement. The positions and postures of the dual dome cameras are calculated based on a principle of binocular vision based SLAM, so that accurate map information is obtained finally by evaluating the positions and postures of the dual dome cameras corresponding to the key frames and the depth information in the key frames. The disclosure makes up for inaccurate and incomplete depth information of the scenes in passive scene recoveries, and is suitable for vehicle and robot positioning, obstacle detection and free space estimation.
Type: Grant
Filed: May 22, 2020
Date of Patent: December 7, 2021
Assignees: CHINA-GERMANY (ZHUHAI) ARTIFICIAL INTELLIGENCE INSTITUTE CO., LTD, ZHUHAI 4DAGE TECHNOLOGY CO., LTD.
Inventor: Yan Cui
-
Patent number: 11175148
Abstract: Described herein are systems and methods that involve abnormality detection and a carefully designed state machine that assesses whether mapping, such as simultaneous localization and mapping (SLAM) processing, should be skipped for the current image frames, whether relocalization may be performed, or whether SLAM processing may be performed. Thus, embodiments allow mapping processing to timely and smoothly switch between different tracking states, and thereby prevent a bad tracking status from occurring.
Type: Grant
Filed: August 13, 2018
Date of Patent: November 16, 2021
Assignee: Baidu USA LLC
Inventors: Yingze Bao, Mingyu Chen
-
Patent number: 11169599
Abstract: It is preferable for a user to experience a sensation corresponding to one actually experienced by the user or another user. There is provided an information processing apparatus including a data acquisition unit configured to acquire relevance data of a plurality of pieces of sensory information sensed in advance, a sensory information determination unit configured to determine second sensory information relevant to first sensory information on the basis of the relevance data, and a presentation control unit configured to control presentation of presentation data associated with the second sensory information to a user.
Type: Grant
Filed: August 3, 2018
Date of Patent: November 9, 2021
Assignee: SONY CORPORATION
Inventor: Yufeng Jin
-
Patent number: 11164769
Abstract: A substrate transport apparatus includes a transport chamber, a drive section, a robot arm, an imaging system with a camera mounted through a mounting interface of the drive section in a predetermined location with respect to the transport chamber and disposed to image part of the arm, and a controller connected to the imaging system and configured to image, with the camera, the arm moving to or in the predetermined location, the controller effecting capture of a first image of the arm on registry of the arm proximate to or in the predetermined location, wherein the controller is configured to calculate a positional variance of the arm from comparison of the first image with a calibration image of the arm, and determine a motion compensation factor changing an extended position of the arm. Each camera effecting capture of the first image is disposed inside the perimeter of the mounting interface.
Type: Grant
Filed: July 29, 2020
Date of Patent: November 2, 2021
Assignee: Brooks Automation, Inc.
Inventor: Jairo Terra Moura
-
Patent number: 11164038
Abstract: Systems and methods are provided for generating sets of candidates comprising images and places within a threshold geographic proximity based on geographic information associated with each of the plurality of images and geographic information associated with each place. For each set of candidates, the systems and methods generate a similarity score based on a similarity between text extracted from each image and a place name, and the geographic information associated with each image and each place. For each place with an associated image as a potential match, the systems and methods generate a name similarity score based on matching the extracted text of the image to the place name, and store an image as place data associated with a place based on determining that the name similarity score for the extracted text associated with the image is higher than a second predetermined threshold.
Type: Grant
Filed: August 9, 2019
Date of Patent: November 2, 2021
Assignee: Uber Technologies, Inc.
Inventors: Jeremy Hintz, Lionel Gueguen, Kapil Gupta, Benjamin James Kadlec, Susmit Biswas
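The name-scoring step this abstract describes — match text extracted from an image against candidate place names and keep the pairing only if the similarity clears a threshold — can be sketched as below. The use of `difflib.SequenceMatcher`, the 0.8 threshold, and the example place names are assumptions for illustration, not details from the patent.

```python
# Illustrative sketch: score OCR text against nearby place names and
# accept the best candidate only above a similarity threshold.
from difflib import SequenceMatcher

def name_similarity(extracted_text, place_name):
    # Case-insensitive string similarity in [0.0, 1.0].
    return SequenceMatcher(None, extracted_text.lower(),
                           place_name.lower()).ratio()

def match_image_to_place(extracted_text, candidate_places, threshold=0.8):
    best = max(candidate_places,
               key=lambda p: name_similarity(extracted_text, p))
    score = name_similarity(extracted_text, best)
    return (best, score) if score > threshold else (None, score)

places = ["Blue Bottle Coffee", "Big Bob's Tires"]  # within proximity
match, score = match_image_to_place("BLUE BOTTLE COFFEE", places)
```

In a fuller pipeline the candidate list would already be restricted by the geographic-proximity filter the abstract mentions; only the thresholded name match is sketched here.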
-
Patent number: 11164049
Abstract: Automated method and device suitable for ensuring the dynamic perceptual invariance of an event with a view to extracting therefrom unified semantic representations are provided. The event is perceived by a linguistic data translator that delivers a signal (HD) referenced (x,y), which signal is transformed into a signal (MAP1) referenced (i,j) through a unit (Dec) that carries out a Gaussian filtering operation that is parameterized by w and decimated by a coefficient k, and transformed into a signal (MAP2) referenced (X,Y) representative of the invariant event through a unit (ROI).
Type: Grant
Filed: April 27, 2018
Date of Patent: November 2, 2021
Assignee: ANOTHER BRAIN
Inventor: Patrick Pirim
-
Patent number: 11151688
Abstract: An image processing method for a screen inner hole of a display device is provided, including steps as follows: determining coordinates at a center and a radius r of the screen inner hole, and drawing a circle with the radius r to obtain a pixel range of an inner hole area, and calculating a pixel variance of the pixel range; analyzing an image configuration in the inner hole area and a peripheral area around the inner hole area, and determining whether key information is in the inner hole area; locating and determining a range of the inner hole area and a range of the peripheral area; and separately calculating, in the inner hole area and in the peripheral area, a pixel mean and a pixel variance.
Type: Grant
Filed: April 19, 2019
Date of Patent: October 19, 2021
Assignee: WUHAN CHINA STAR OPTOELECTRONICS TECHNOLOGY CO., LTD.
Inventor: Chuan Shuai
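The per-region statistics step above — collect the pixels inside a circle of radius r around the hole center and compute their mean and variance — is simple enough to sketch directly. The tiny 4×4 "image" and the hole geometry are made up for illustration; the patent's actual pixel ranges and key-information analysis are not reproduced here.

```python
# Illustrative sketch: gather pixels inside a circular inner-hole region
# and compute their mean and (population) variance.
def hole_pixels(image, cx, cy, r):
    return [image[y][x]
            for y in range(len(image))
            for x in range(len(image[0]))
            if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2]

def mean_and_variance(pixels):
    m = sum(pixels) / len(pixels)
    v = sum((p - m) ** 2 for p in pixels) / len(pixels)
    return m, v

image = [[10, 10, 10, 10],
         [10, 50, 50, 10],
         [10, 50, 50, 10],
         [10, 10, 10, 10]]
pixels = hole_pixels(image, 1, 1, 1)   # circle of radius 1 at (1, 1)
m, v = mean_and_variance(pixels)
```

The same two helpers could be applied to the peripheral area (pixels just outside the circle) to get the separate inner-hole and peripheral statistics the method calls for.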
-
Patent number: 11145083
Abstract: A method for image-based localization includes, at a camera device, capturing a plurality of images of a real-world environment. A first set of image features are detected in a first image of the plurality of images. Before additional sets of image features are detected in other images of the plurality, the first set of image features is transmitted to a remote device configured to estimate a pose of the camera device based on image features detected in the plurality of images. As the additional sets of image features are detected in the other images of the plurality, the additional sets of image features are transmitted to the remote device. An estimated pose of the camera device is received from the remote device.
Type: Grant
Filed: May 21, 2019
Date of Patent: October 12, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Johannes Lutz Schonberger, Marc Andre Leon Pollefeys
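The pipelining idea in this abstract — transmit each image's feature set as soon as it is detected rather than waiting for all images — can be sketched as a small loop. The stand-in detector and the callback-style transport are assumptions for illustration only.

```python
# Illustrative sketch: interleave feature detection and transmission,
# frame by frame, instead of batching all detections first.
def detect_features(image):
    # Stand-in detector: a real system would run e.g. ORB/SIFT here.
    return [f"{image}-f{i}" for i in range(2)]

def stream_features(images, transmit):
    for image in images:
        # The first image's features go out before any later image is
        # even processed; later sets follow as they become available.
        transmit(detect_features(image))

sent = []
stream_features(["img0", "img1", "img2"], sent.append)
```

The remote pose estimator (not sketched) would begin working on `sent[0]` while detection of the remaining frames is still in progress, which is the latency advantage the method targets.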
-
Patent number: 11124034
Abstract: A height adjustment module includes a mounting part extending on a plane; a plurality of support arms disposed around the mounting part on the plane on which the mounting part has been extended, and coupled to the mounting part to be rotatable upward or downward; a plurality of travelling parts each being coupled to a first end portion of the support arm, respectively, and including a wheel contacting the ground, respectively; a lift extending over the plane, and connected to second end portions of the plurality of support arms so that vertical movement is interlocked with each other; and a support link coupled to the mounting part to be vertically slidable, and coupled to the lift to be vertically fixed to move the mounting part relative to the lift as the support link is slid in the mounting part.
Type: Grant
Filed: September 16, 2019
Date of Patent: September 21, 2021
Assignees: Hyundai Motor Company, Kia Motors Corporation
Inventors: Dong Han Koo, Byeong Cheol Lee, Seok Won Lee, Ji A Lee
-
Patent number: 11127164
Abstract: A controller for executing a task based on probabilistic image-based landmark localization, uses a neural network, which is trained to process images of objects of a type having a structured set of landmarks to produce a parametric probability distribution defined by values of parameters for a location of each landmark in each processed image. The controller submits the set of input images to the neural network to produce the values of the parameters that define the parametric probability distribution over the location of each landmark in the structured set of landmarks of each input image. Further, the controller determines, for each input image, a global landmark uncertainty for the image based on the parametric probability distributions of landmarks in the input image and executes the task based on the parametric probability distributions of landmarks in each input image and the global landmark uncertainty of each input image.
Type: Grant
Filed: October 4, 2019
Date of Patent: September 21, 2021
Assignee: Mitsubishi Electric Research Laboratories, Inc.
Inventors: Tim Marks, Abhinav Kumar, Wenxuan Mou, Chen Feng, Xiaoming Liu
-
Patent number: 11122314
Abstract: Signals of an immersive multimedia item are jointly considered for optimizing the quality of experience for the immersive multimedia item. During encoding, portions of available bitrate are allocated to the signals (e.g., a video signal and an audio signal) according to the overall contribution of those signals to the immersive experience for the immersive multimedia item. For example, in the spatial dimension, multimedia signals are processed to determine spatial regions of the immersive multimedia item to render using greater bitrate allocations, such as based on locations of audio content of interest, video content of interest, or both. In another example, in the temporal dimension, multimedia signals are processed in time intervals to adjust allocations of bitrate between the signals based on the relative importance of such signals during those time intervals. Other techniques for bitrate optimizations for immersive multimedia streaming are also described herein.
Type: Grant
Filed: December 12, 2017
Date of Patent: September 14, 2021
Assignee: GOOGLE LLC
Inventors: Neil Birkbeck, Balineedu Adsumilli, Damien Kelly
-
Patent number: 11106932
Abstract: The disclosure describes a method for extracting the boundary of a thin-walled part with small curvature based on a three-dimensional point cloud. The method includes: collecting point cloud data of a part and reducing the density of the point cloud data; performing Euclidean clustering to divide the data into point cloud pieces; obtaining a triangular mesh surface for each point cloud piece by triangulation; extracting the boundary vertices of each triangular mesh surface to obtain its contour, and selecting the contour of the part among all contours; searching with each point on the contour as a center to form a three-dimensional boundary point cloud band; projecting the three-dimensional boundary point cloud band to a plane, orderly extracting two-dimensional boundary points within the plane, and arranging corresponding points in the three-dimensional boundary point cloud band according to the order of the ordered boundary points within the plane to obtain ordered boundary points in the three-dimensional boundary point cloud band.
Type: Grant
Filed: June 13, 2020
Date of Patent: August 31, 2021
Assignee: HUAZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
Inventors: Wenlong Li, Cheng Jiang, Gang Wang, Zelong Peng, Han Ding
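The final ordering step of this abstract — project the 3D boundary band to a plane, order the 2D points, then reorder the 3D points to match — can be sketched with a simple angular sort. Projecting to the XY plane and ordering by polar angle around the centroid are assumptions chosen for a small closed boundary; the patent's projection plane and ordering rule may differ.

```python
# Illustrative sketch: order 3D boundary points via their 2D projection.
import math

def order_boundary(points3d):
    # Project to the XY plane (an assumed choice of projection plane).
    pts2d = [(x, y) for x, y, _ in points3d]
    cx = sum(x for x, _ in pts2d) / len(pts2d)
    cy = sum(y for _, y in pts2d) / len(pts2d)
    # Order 2D points by polar angle about the centroid, then apply the
    # same order to the corresponding 3D points.
    order = sorted(range(len(points3d)),
                   key=lambda i: math.atan2(pts2d[i][1] - cy,
                                            pts2d[i][0] - cx))
    return [points3d[i] for i in order]

cloud = [(1, 0, 0.2), (0, 1, 0.1), (-1, 0, 0.3), (0, -1, 0.0)]
ordered = order_boundary(cloud)
```

Carrying the index permutation from the 2D projection back to the 3D band is the key move: the 2D plane makes ordering tractable, while the returned points keep their original 3D coordinates.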
-
Patent number: 11094082
Abstract: A plurality of verification position/orientation candidates for a target object is set. A common structure model including a geometric feature of a part, among geometric features of a reference model representing a three-dimensional shape of the target object, that is common among the candidates is generated. An image including the target object is obtained. A position/orientation of the target object is estimated by verifying the common structure model and the reference model arranged at the plurality of verification position/orientation candidates, against the image.
Type: Grant
Filed: August 8, 2019
Date of Patent: August 17, 2021
Assignee: CANON KABUSHIKI KAISHA
Inventors: Fukashi Yamazaki, Daisuke Kotake
-
Patent number: 11091264
Abstract: An automated commissioning and floorplan configuration (CAFC) device can include a CAFC system having a transceiver and a CAFC engine, where the transceiver communicates with at least one device disposed in a volume of space, where the CAFC engine, based on communication between the transceiver and the at least one device, commissions the at least one device.
Type: Grant
Filed: April 29, 2020
Date of Patent: August 17, 2021
Assignee: SIGNIFY HOLDING B.V.
Inventors: Jonathan Andrew Whitten, Michael Alan Lunn
-
Patent number: 11072067
Abstract: Robots and robotic systems and methods can employ artificial neural networks (ANNs) to significantly improve performance. The ANNs can operate alternatingly in forward and backward directions in interleaved fashion. The ANNs can employ visible units and hidden units. Various objective functions can be optimized. Robots and robotic systems and methods can execute applications including a plurality of agents in a distributed system, for instance with a number of hosts executing respective agents, at least some of the agents in communications with one another. The hosts can execute agents in response to occurrence of defined events or trigger expressions, and can operate with a maximum latency guarantee and/or data quality guarantee.
Type: Grant
Filed: November 16, 2016
Date of Patent: July 27, 2021
Assignee: KINDRED SYSTEMS INC.
Inventor: James Sterling Bergstra
-
Patent number: 11070713
Abstract: Techniques are described for controlling the process of capturing three-dimensional (3D) video content. For example, a controller can provide centralized control over the various components that participate in the capture and processing of the 3D video content. For example, the controller can establish connections with a number of components (e.g., running on other computing devices). The controller can receive state update messages from the components (e.g., comprising state change information, network address information, etc.). The controller can also broadcast messages to the components. For example, the controller can broadcast system state messages to the components where the system state messages comprise current state information of the components. The controller can also broadcast other types of messages, such as start messages that instruct the components to enter a start state.
Type: Grant
Filed: February 11, 2020
Date of Patent: July 20, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventor: Spencer G Fowers
-
Patent number: 11058857
Abstract: A method for automatically producing precise tattoo markings on any anatomical body portion by providing a controlled articulated arm carrying a tattoo machine implement. The method also provides a multi-axis positioning platform for supporting and positioning the person receiving a tattoo in a prime, optimal, and comfortable position. Also, the method provides choosing, with a selector, a tattoo of choice from any data source of images, as well as applying, rectifying, and mapping, with a physical or virtual design projection and visualization media, the chosen tattoo to the person. The method completes the tattoo using the articulated arm with the tattoo machine implement, producing a precise, accurate, and aesthetically pleasing tattoo automatically.
Type: Grant
Filed: November 20, 2017
Date of Patent: July 13, 2021
Assignee: Set Point Solutions, LLC
Inventor: Joseph Harrington Matanane Brown
-
Patent number: 11055562
Abstract: In an example, a system for registering a three-dimensional (3D) pose of a workpiece relative to a robotic device is disclosed. The system comprises the robotic device, where the robotic device comprises one or more mounted lasers. The system also comprises one or more sensors configured to detect laser returns from laser rays projected from the one or more mounted lasers and reflected by the workpiece. The system also comprises a processor configured to receive a tessellation of the workpiece, wherein the tessellation comprises a 3D representation of the workpiece made up of cells, convert the laser returns into a 3D point cloud in a robot frame, based on the 3D point cloud, filter visible cells of the tessellation of the workpiece to form a tessellation included set, and solve for the 3D pose of the workpiece relative to the robotic device based on the tessellation included set.
Type: Grant
Filed: January 2, 2020
Date of Patent: July 6, 2021
Assignee: The Boeing Company
Inventors: Phillip Haeusler, Alexandre Desbiez
-
Patent number: 11040441
Abstract: A robot in a location interacts with a user. The robot includes a camera, an image recognition processor, a microphone and a loudspeaker, a voice assistant, and a wireless transceiver. The robot moves around and creates a model of the location, and recognizes changes. It recognizes objects of interest, beings, and situations. The robot monitors the user and recognizes body language and gesture commands, as well as voice commands. The robot communicates with the user, the TV, and other devices. It may move around to monitor for regular and non-regular situations. It anticipates user commands based on a situation. It determines if a situation is desired, and mitigates the situation if undesired. It can seek immediate help for the user in an emergency. It can capture, record, categorize and document events as they happen. It can categorize and document objects in the location.
Type: Grant
Filed: September 20, 2018
Date of Patent: June 22, 2021
Assignee: Sony Group Corporation
Inventors: David Young, Lindsay Miller, Lobrenzo Wingo, Marvin DeMerchant
-
Patent number: 11032166
Abstract: Systems, methods and articles of manufacture that handle secondary robot commands in robot swarms may operate by receiving, at a receiving device in a swarm of devices, a packet included in a signal broadcast within an environment from a transmitting device in the swarm of devices; parsing the packet for a command associated with a primary effect and a secondary effect; in response to determining that the receiving device is paired with the transmitting device, implementing, by the receiving device, the primary effect; and in response to determining that the receiving device is not paired with the transmitting device, implementing, by the receiving device, the secondary effect.
Type: Grant
Filed: July 27, 2018
Date of Patent: June 8, 2021
Assignee: Disney Enterprises, Inc.
Inventors: Nathan D. Nocon, Michael P. Goslin, Janice K. Rosenthal, Corey D. Drake
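The dispatch rule in this abstract — a broadcast packet carries both a primary and a secondary effect, and a receiver picks one based on whether it is paired with the sender — reduces to a small conditional. The packet fields, effect names, and pairing table below are illustrative assumptions.

```python
# Illustrative sketch: choose primary vs. secondary effect by pairing.
def handle_packet(packet, receiver_id, paired_with):
    # paired_with maps receiver id -> set of transmitter ids it is paired to.
    if packet["sender"] in paired_with.get(receiver_id, set()):
        return packet["primary_effect"]     # paired receiver
    return packet["secondary_effect"]       # everyone else in the swarm

packet = {"sender": "robot-A",
          "primary_effect": "wave",
          "secondary_effect": "blink"}
paired = {"robot-B": {"robot-A"}}

effect_b = handle_packet(packet, "robot-B", paired)  # paired with sender
effect_c = handle_packet(packet, "robot-C", paired)  # not paired
```

One broadcast thus produces coordinated but differentiated behavior: the paired device performs the featured action while the rest of the swarm reacts in a supporting role.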
-
Patent number: 11022980
Abstract: Provided are a communication relationship establishing method and device, a computer-readable storage medium, an electronic device, and a cleaning device.
Type: Grant
Filed: January 23, 2018
Date of Patent: June 1, 2021
Assignee: Shenzhen 3irobotix Co., Ltd.
Inventors: Yong Yang, Zexiao Wu, Yuhui Song
-
Patent number: 11014243
Abstract: A system and method of instructing a device is disclosed. The system includes a signal source for providing at least one visual signal where the at least one visual signal is substantially indicative of at least one activity to be performed by the device. A visual signal capturing element captures the at least one visual signal and communicates the at least one visual signal to the device where the device interprets the at least one visual signal and performs the activity autonomously and without requiring any additional signals or other information from the signal source.
Type: Grant
Filed: June 14, 2018
Date of Patent: May 25, 2021
Assignee: VECNA ROBOTICS, INC.
Inventor: Neal Checka
-
Patent number: 11001444
Abstract: An automated storage and retrieval system including at least one autonomous rover for transferring payload within the system and including a communicator, a multilevel storage structure, each level allowing traversal of the at least one autonomous rover, at least one registration station disposed at predetermined locations on each level and being configured to communicate with the communicator to at least receive rover identification information, and a controller in communication with the at least one registration station and configured to receive the at least rover identification information and at least one of register the at least one autonomous rover as being on a level corresponding to a respective one of the at least one registration station or deregister the at least one autonomous rover from the system, where the controller effects induction of the at least one autonomous rover into a predetermined rover space on the level.
Type: Grant
Filed: March 5, 2019
Date of Patent: May 11, 2021
Assignee: Symbotic LLC
Inventors: Forrest Buzan, Edward A. MacDonald, Taylor A. Apgar, Thomas A. Schaefer, Melanie Ziegler, Russell G. Barbour
-
Patent number: 10997729
Abstract: In one embodiment, a method, apparatus, and system may predict behavior of environmental objects using machine learning at an autonomous driving vehicle (ADV). A data processing architecture comprising at least a first neural network and a second neural network is generated, the first and the second neural networks having been trained with a training data set. Behavior of one or more objects in the ADV's environment is predicted using the data processing architecture comprising the trained neural networks. Driving signals are generated based at least in part on the predicted behavior of the one or more objects in the ADV's environment to control operations of the ADV.
Type: Grant
Filed: November 30, 2018
Date of Patent: May 4, 2021
Assignee: BAIDU USA LLC
Inventors: Liangliang Zhang, Hongyi Sun, Dong Li, Jiangtao Hu, Jinghao Miao
-
Patent number: 10997744
Abstract: The present invention relates to a localization method and system for providing augmented reality in mobile devices, and includes sub-sampling image data acquired from a camera in the mobile device and extracting an image patch including lines and points from the low-resolution image data; matching feature pairs of point features between the image patch and a previous image patch according to movement of the camera, and producing a subpixel line for the image patch; and estimating a location of the camera in the mobile device based on the difference between the produced line and a line estimated by inertia.
Type: Grant
Filed: April 3, 2019
Date of Patent: May 4, 2021
Assignee: Korea Advanced Institute of Science and Technology
Inventors: Hyeon Myeong, Kwang Yik Jung, Pillip Youn, Yeeun Kim, HyunJun Lim, Seungwon Song
-
Patent number: 10984547
Abstract: Various embodiments provide systems, methods, devices, and instructions for performing simultaneous localization and mapping (SLAM) that involve initializing a SLAM process using images from as few as two different poses of a camera within a physical environment. Some embodiments may achieve this by disregarding errors in matching corresponding features depicted in image frames captured by an image sensor of a mobile computing device, and by updating the SLAM process in a way that causes the minimization process to converge to global minima rather than fall into a local minimum.
Type: Grant
Filed: August 5, 2019
Date of Patent: April 20, 2021
Assignee: Snap Inc.
Inventors: David Ben Ezra, Eyal Zak, Ozi Egri
-
Patent number: 10977775
Abstract: A depth decoding system and a method for rectifying a ground-truth image are introduced. The depth decoding system includes a projector, a camera, a processor and a decoder. The projector is configured to project a structural light pattern to a first reference plane and a second reference plane. The camera is configured to capture a first ground-truth image from the first reference plane and capture a second ground-truth image from the second reference plane. The processor is configured to perform a rectification operation on the first ground-truth image and the second ground-truth image to generate a rectified ground-truth image. The decoder is configured to generate a depth result according to the rectified ground-truth image.
Type: Grant
Filed: July 7, 2019
Date of Patent: April 13, 2021
Assignee: HIMAX TECHNOLOGIES LIMITED
Inventors: Chin-Jung Tsai, Yu-Hsuan Chu, Cheng-Hung Chi, Ming-Shu Hsiao, Nai-Ting Chang, Yi-Nung Liu
-
Patent number: 10974391
Abstract: Apparatus and methods for carpet drift estimation are disclosed. In certain implementations, a robotic device includes an actuator system to move the body across a surface. A first set of sensors can sense an actuation characteristic of the actuator system. For example, the first set of sensors can include odometry sensors for sensing wheel rotations of the actuator system. A second set of sensors can sense a motion characteristic of the body. The first set of sensors may be a different type of sensor than the second set of sensors. A controller can estimate carpet drift based at least on the actuation characteristic sensed by the first set of sensors and the motion characteristic sensed by the second set of sensors.
Type: Grant
Filed: April 10, 2018
Date of Patent: April 13, 2021
Assignee: iRobot Corporation
Inventors: Dhiraj Goel, Ethan Eade, Philip Fong, Mario E. Munich
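The two-sensor comparison this abstract describes — displacement predicted from wheel odometry versus displacement measured by an independent motion sensor — can be sketched with a toy estimator. Attributing the averaged per-step difference to carpet drift is a deliberately naive assumption for illustration; the patent's controller is not reproduced here.

```python
# Illustrative sketch: estimate drift as the mean disagreement between
# odometry-predicted and independently measured per-step displacements.
def estimate_carpet_drift(odometry_steps, measured_steps):
    diffs = [(mx - ox, my - oy)
             for (ox, oy), (mx, my) in zip(odometry_steps, measured_steps)]
    n = len(diffs)
    return (sum(dx for dx, _ in diffs) / n,
            sum(dy for _, dy in diffs) / n)

# Wheel encoders say the robot moved 1.0 m straight each step...
odometry = [(1.0, 0.0), (1.0, 0.0), (1.0, 0.0)]
# ...but the second sensor set observes a consistent push off-course.
measured = [(1.1, 0.05), (1.1, 0.05), (1.1, 0.05)]
drift = estimate_carpet_drift(odometry, measured)
```

A systematic, repeatable difference (as opposed to zero-mean noise) is the signature of carpet grain pushing the robot, and is what a controller would feed back into its pose estimate.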
-
Patent number: 10970877
Abstract: Provided are an image processing apparatus, an image processing method, and a program that permit camera calibration with high accuracy by using a known object in images captured by a plurality of imaging sections. An estimation section estimates a 3D position of a road sign included in each of the images captured by a plurality of cameras, with respect to each of the cameras. A recognition section recognizes a positional relationship between the plurality of cameras on the basis of the 3D position of the road sign with respect to each of the cameras estimated by the estimation section. The positional relationship between the plurality of cameras recognized by the recognition section is used to correct the images captured by the plurality of cameras.
Type: Grant
Filed: September 16, 2016
Date of Patent: April 6, 2021
Assignee: Sony Corporation
Inventors: Masashi Eshima, Akihiko Kaino, Takaaki Kato, Shingo Tsurumi
-
Patent number: 10962487
Abstract: A flaw detecting apparatus and a method for a plane mirror based on line scanning and ring band stitching are provided. The flaw detecting apparatus comprises: a line scanning detector, an annular illumination source, a rotary table rotatable about a Z axis, a translation table translatable along an X axis and a processor. By translating and rotating the plane mirror to be detected, an entire surface of the plane mirror to be detected can be detected by the line scanning detector, and the flaw of the entire plane mirror to be detected is obtained by a ring band stitching method. The method of line scanning and ring band stitching reduces the imaging distortion, the intermediate data amount, the difficulty in the distortion correction and difficulty in stitching, and improves the detection speed and the detection quality.
Type: Grant
Filed: November 12, 2019
Date of Patent: March 30, 2021
Assignee: The Institute of Optics and Electronics, The Chinese Academy of Sciences
Inventors: Fuchao Xu, Haiyang Quan, Taotao Fu, Xiaochuan Hu, Xi Hou, Sheng Li
-
Patent number: 10940591
Abstract: This method is for calibrating a coordinate system of an image capture device and a coordinate system of a robot arm in a robot system that includes a display device, the image capture device, and the robot arm to which one of the display device and the image capture device is fixed, the robot arm having a drive shaft. The method includes: acquiring first captured image data based on first image data; acquiring second captured image data based on second image data different from the first image data; and calibrating the coordinate system of the image capture device and the coordinate system of the robot arm, using the first captured image data and the second captured image data.
Type: Grant
Filed: July 18, 2018
Date of Patent: March 9, 2021
Assignee: OMRON Corporation
Inventor: Norikazu Tonogai
-
Patent number: 10931933
Abstract: An operation method of a calibration guidance system includes a feature extraction unit executing a feature extraction operation on a first image group including a first object captured by a multi-camera system to generate a first feature point group corresponding to a predetermined position within an image capture range of the multi-camera system; and a guidance unit determining whether to generate a direction indication to guide the first object to another predetermined position within an image capture range according to a first comparison result between a block corresponding to feature points of the first feature point group of the predetermined position and a predetermined block when a number of the feature points of the first feature point group is greater than a predetermined number.
Type: Grant
Filed: December 29, 2015
Date of Patent: February 23, 2021
Assignee: eYs3D Microelectronics, Co.
Inventor: Chi-Feng Lee
-
Patent number: 10908606
Abstract: A system for autonomously navigating a vehicle along a road segment may include at least one processor. The at least one processor may be programmed to receive from at least one sensor information relating to one or more aspects of the road segment. The processor may also be programmed to determine a local feature of the road segment based on the received information. Further the processor may be programmed to compare the local feature to a predetermined signature feature for the road segment. The processor may be programmed to determine a current location of the vehicle along a predetermined road model trajectory associated with the road segment based on the comparison of the received information and the predetermined signature feature. The processor may also be programmed to determine an autonomous steering action for the vehicle based on a direction of the predetermined road model trajectory at the determined location.
Type: Grant
Filed: September 22, 2016
Date of Patent: February 2, 2021
Assignee: Mobileye Vision Technologies Ltd.
Inventors: Gideon Stein, Ofer Springer, Andras Ferencz
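The localization step of this abstract — compare a locally observed feature against signature features recorded at known positions along the road model trajectory — can be sketched as a nearest-match lookup. Reducing a road feature to a single scalar and taking the closest signature are simplifying assumptions for illustration; the patent's feature representation is richer.

```python
# Illustrative sketch: locate the vehicle along the trajectory by finding
# the stored signature feature that best matches the local observation.
def localize(local_feature, signature_features):
    """signature_features: list of (position_along_trajectory_m, value)."""
    best = min(signature_features,
               key=lambda ps: abs(ps[1] - local_feature))
    return best[0]

# Signatures recorded at 0 m, 50 m, and 100 m along the road model.
signatures = [(0.0, 1.0), (50.0, 4.0), (100.0, 9.0)]
position = localize(4.2, signatures)
```

Once the position along the predetermined trajectory is known, the steering action follows from the trajectory's direction at that position, which is the final step the abstract describes.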
-
Patent number: 10906176
Abstract: A teaching apparatus configured to include a display device and perform a teaching operation for a robot includes a template storage section configured to store a plurality of templates corresponding to a plurality of programs of the robot, a program explanatory content storage section configured to store plural pieces of explanatory content for explaining the respective plurality of programs, a template display section configured to display the plurality of templates stored in the template storage section on the display device, a template selection section configured to select one template from the plurality of templates displayed on the template display section, and a program explanatory content display section configured to read out the explanatory content of the program corresponding to the one template selected by the template selection section from the program explanatory content storage section and configured to display the explanatory content on the display device.
Type: Grant
Filed: November 6, 2018
Date of Patent: February 2, 2021
Assignee: Fanuc Corporation
Inventors: Yuusuke Kurihara, Tomoyuki Yamamoto
-
Patent number: 10885661
Abstract: Disclosed are systems and methods for determining a location of a customer within a store. The systems and methods may include receiving at least one image of an item located in the store. The item may be held by the customer. The systems and methods may also include creating a feature vector. The feature vector may store features of the at least one image of the item. The location of the customer may be determined using features stored in the feature vector.
Type: Grant
Filed: December 15, 2018
Date of Patent: January 5, 2021
Assignee: NCR Corporation
Inventors: Pavani Lanka, Samak Radha
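The inference step here — recognizing the held item from its image feature vector and looking up where that item is shelved — can be sketched as a nearest-neighbor match against a stored catalog. A minimal sketch only; the catalog, 3-element feature vectors, and aisle labels are invented for illustration.

```python
import numpy as np

# Hypothetical catalog: one stored feature vector per item, with the aisle
# where that item is shelved.
item_features = np.array([
    [0.9, 0.1, 0.0],   # cereal
    [0.0, 0.8, 0.2],   # soda
    [0.1, 0.1, 0.9],   # detergent
])
item_locations = ["aisle 3", "aisle 7", "aisle 12"]

def locate_customer(observed_feature):
    """Return the shelf location of the item whose stored feature vector is
    closest (Euclidean distance) to the feature vector of the imaged item."""
    dists = np.linalg.norm(item_features - observed_feature, axis=1)
    return item_locations[int(np.argmin(dists))]

where = locate_customer(np.array([0.85, 0.15, 0.05]))  # close to the cereal vector
```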
-
Patent number: 10875187
Abstract: A robotic arm mounted camera system allows an end-user to begin using the camera for object recognition without involving a robotics specialist. Automated object model calibration is performed under conditions of variable robotic arm pose dependent feature recognition of an object. The user can then teach the system to perform tasks on the object using the calibrated model. The camera's body can have parallel top and bottom sides and adapted to be fastened to a robotic arm end and to an end effector with its image sensor and optics extending sideways in the body, and it can include an illumination source for lighting a field of view.
Type: Grant
Filed: May 22, 2018
Date of Patent: December 29, 2020
Assignee: ROBOTIQ INC.
Inventors: Vincent Paquin, Marc-Antoine Lacasse, Yan Drolet-Mihelic, Jean-Philippe Mercier
-
Patent number: 10875186
Abstract: A robot system for performing drive control of a robot arm with respect to a target object according to information obtained by a camera, including a robot having a working section, a camera mounted in the vicinity of the working section, and a control device for controlling the driving of the robot while confirming the target object based on image data of the camera, is provided. The control device performs image-capture control, which executes image-capturing of the target object with the camera a plurality of times when moving the working section with respect to the target object according to a predetermined trajectory, and focus control, in which predetermined images within a plurality of images captured by image-capture control are in focus.
Type: Grant
Filed: September 3, 2015
Date of Patent: December 29, 2020
Assignee: FUJI CORPORATION
Inventors: Yasuhiro Yamashita, Nobuo Oishi, Takayoshi Sakai
-
Patent number: 10863668
Abstract: This invention is a configurable ground utility robot (GURU) having at least the following parts: an all-terrain mobile apparatus; a payload accepting apparatus; an onboard processor; at least one sensor that communicates with said onboard processor; at least one energy beam payload device connectable to the payload accepting apparatus, capable of creating an energy beam having enough power to elevate an internal temperature of a subject when the energy beam is focused on the subject and where the energy beam payload device communicates with the onboard processor and where the ground utility robot also has a computer program that at least performs the following functions: receives and interprets data from the at least one sensor; controls the mobile apparatus; focuses the at least one energy beam on the subject; and controls the beam strength and time duration.
Type: Grant
Filed: June 29, 2018
Date of Patent: December 15, 2020
Assignee: DCENTRALIZED SYSTEMS, INC.
Inventors: Georgios Chrysanthakopoulos, Adlai Felser
-
Patent number: 10853646
Abstract: Methods, apparatus, systems, and computer-readable media are provided for generating spatial affordances for an object, in an environment of a robot, and utilizing the generated spatial affordances in one or more robotics applications directed to the object. Various implementations relate to applying vision data as input to a trained machine learning model, processing the vision data using the trained machine learning model to generate output defining one or more spatial affordances for an object captured by the vision data, and controlling one or more actuators of a robot based on the generated output. Various implementations additionally or alternatively relate to training such a machine learning model.
Type: Grant
Filed: June 26, 2019
Date of Patent: December 1, 2020
Assignee: X DEVELOPMENT LLC
Inventors: Adrian Li, Nicolas Hudson, Aaron Edsinger
-
Patent number: 10832078
Abstract: An imaging system for localization and mapping of a scene including static and dynamic objects. A sensor acquires a sequence of frames while in motion or stationary. A memory stores a static map of static objects and an object map of each dynamic object in the scene. The static map includes a set of landmarks, and the object map includes a set of landmarks and a set of segments. A localizer registers keypoints of the frame with landmarks in the static map using frame-based registration and registers some segments in the frame with segments in the object map using segment-based registration. A mapper updates each object map with keypoints forming each segment and keypoints registered with the corresponding object map according to the segment-based registration, and updates the static map with the remaining keypoints in the frame using the keypoints registered with the static map.
Type: Grant
Filed: August 11, 2017
Date of Patent: November 10, 2020
Assignee: Mitsubishi Electric Research Laboratories, Inc.
Inventors: Esra Cansizoglu, Sergio S Caccamo, Yuichi Taguchi
-
Patent number: 10823576
Abstract: Systems and methods for robotic mapping are disclosed. In some exemplary implementations, a robot can travel in an environment. From travelling in the environment, the robot can create a graph comprising a plurality of nodes, wherein each node corresponds to a scan taken by a sensor of the robot at a location in the environment. In some exemplary implementations, the robot can generate a map of the environment from the graph. In some cases, to facilitate map generation, the robot can constrain the graph to start and end at a substantially similar location. The robot can also perform scan matching on extended scan groups, determined from identifying overlap between scans, to further determine the location of features in a map.
Type: Grant
Filed: March 18, 2019
Date of Patent: November 3, 2020
Assignee: Brain Corporation
Inventors: Jaldert Rombouts, Borja Ibarz Gabardos, Jean-Baptiste Passot, Andrew Smith
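The graph structure this abstract describes — one node per scan, with the graph constrained to start and end at a substantially similar location — can be illustrated with a toy pose graph. The drift-distribution loop closure below is a deliberately simple stand-in for real graph optimization, and the node layout and pose values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ScanNode:
    pose: tuple                                # (x, y, heading) where the scan was taken
    scan: list = field(default_factory=list)   # sensor readings captured at that pose

class PoseGraph:
    def __init__(self):
        self.nodes = []

    def add_scan(self, pose, scan):
        self.nodes.append(ScanNode(pose, scan))

    def close_loop(self):
        """Constrain the graph to start and end at the same place by spreading
        the accumulated start-to-end drift linearly across all nodes."""
        n = len(self.nodes)
        first, last = self.nodes[0].pose, self.nodes[-1].pose
        drift = [l - f for f, l in zip(first, last)]
        for i, node in enumerate(self.nodes):
            correction = [c * i / (n - 1) for c in drift]
            node.pose = tuple(p - c for p, c in zip(node.pose, correction))

graph = PoseGraph()
# A square loop whose last pose has drifted from the starting pose.
for pose in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1.2, 0)]:
    graph.add_scan(pose, scan=[])
graph.close_loop()
```

After `close_loop()`, the first and last nodes coincide, matching the "start and end at a substantially similar location" constraint.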
-
Patent number: 10819883
Abstract: According to one or more embodiments, a system of generating a two-dimensional (2D) image of an environment includes a 2D scanner system that includes a measurement device that is mounted to a first body equipment of an operator and one or more processors that are mounted to a second body equipment of the operator. The measurement device includes a light source, an image sensor, and a controller to determine a distance value to one or more object points. The processors generate a 2D submap of the environment in response to an activation signal from the operator and based at least in part on the distance value, each submap generated from a respective point in the environment. Further, the processors generate a 2D image of the environment using multiple 2D submaps.
Type: Grant
Filed: March 18, 2019
Date of Patent: October 27, 2020
Assignee: FARO TECHNOLOGIES, INC.
Inventors: Oliver Zweigle, Ahmad Ramadneh, Muhammad Umair Tahir, Aleksej Frank, João Santos, Roland Raith
-
Patent number: 10807249
Abstract: A robot including an arm, a force sensor attached to a distal end portion of the arm, a support member attached to the force sensor, a tool supported by the support member, a plurality of protruding portions for detecting posture, which protrude from the support member, and a controller which determines a situation where all of the protruding portions are in contact with a work object on which the tool performs a predetermined work based on detected values of the force sensor.
Type: Grant
Filed: February 8, 2019
Date of Patent: October 20, 2020
Assignee: FANUC CORPORATION
Inventor: Yoshihito Wakebe
-
Patent number: 10776652
Abstract: Described herein are systems and methods that use motion-related data combined with image data to improve the speed and the accuracy of detecting visual features by predicting the locations of features using the motion-related data. In embodiments, given a set of features in a previous image frame and given a next image frame, localization of the same set of features in the next image frame is attempted. In embodiments, motion-related data is used to compute the relative pose transformation between the two image frames, and the image location of the features may then be transformed to obtain their location prediction in the next frame. Such a process greatly reduces the search space of the features in the next image frame, and thereby accelerates and improves feature detection.
Type: Grant
Filed: August 13, 2018
Date of Patent: September 15, 2020
Assignee: Baidu USA LLC
Inventors: Yingze Bao, Mingyu Chen
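The prediction step this abstract describes — transforming feature locations from the previous frame using a motion-derived pose change — can be sketched by warping pixel coordinates through a homography. This is an illustrative simplification (a full treatment would derive the transform from IMU data and scene geometry); the homography values and feature coordinates below are hypothetical.

```python
import numpy as np

def predict_feature_locations(features, homography):
    """Warp (u, v) pixel coordinates from the previous frame into the next
    frame using a homography derived from motion-related data, giving the
    predicted search locations for those features."""
    pts = np.hstack([features, np.ones((len(features), 1))])  # homogeneous coords
    warped = pts @ homography.T
    return warped[:, :2] / warped[:, 2:3]                     # back to pixels

# Hypothetical inter-frame motion: a pure 10-pixel shift right and 5 pixels down.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0,  5.0],
              [0.0, 0.0,  1.0]])
previous_features = np.array([[100.0, 200.0], [320.0, 240.0]])
predicted = predict_feature_locations(previous_features, H)
```

The detector then only searches small windows around `predicted` instead of the whole next frame, which is the speed-up the abstract claims.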
-
Patent number: 10748318
Abstract: A system and method of generating a two-dimensional (2D) image of an environment is provided. The system includes a 2D scanner having a controller that determines a distance value to at least one of the object points. One or more processors are operably coupled to the 2D scanner, the one or more processors being responsive to non-transitory executable instructions for generating a plurality of 2D submaps of the environment based at least in part on the distance value, each submap generated from a different point in the environment. A map editor is provided that is configured to: select a subset of submaps from the plurality of 2D submaps; and generate the 2D image of the environment using the subset of 2D submaps. The method provides for realignment of 2D submaps to improve the quality of a global 2D map.
Type: Grant
Filed: September 5, 2019
Date of Patent: August 18, 2020
Assignee: FARO TECHNOLOGIES, INC.
Inventors: João Santos, Ahmad Ramadneh, Aleksej Frank, Oliver Zweigle
-
Patent number: 10719727
Abstract: A method for determining at least one property related to at least part of a real environment comprises receiving a first image of a first part of a real environment captured by a first camera, wherein the first camera is a thermal camera and the first image is a thermal image and the first part of the real environment is a first environment part, providing at least one description related to at least one class of real objects, wherein the at least one description includes at least one thermal property related to the at least one class of real objects, receiving a second image of the first environment part and of a second part of the real environment captured by a second camera, wherein the second part of the real environment is a second environment part, providing an image alignment between the first image and the second image, determining, for at least one second image region contained in the second image, at least one second probability according to the image alignment, pixel information of the first image
Type: Grant
Filed: October 1, 2014
Date of Patent: July 21, 2020
Assignee: Apple Inc.
Inventors: Darko Stanimirovic, Daniel Kurz
-
Patent number: 10710244
Abstract: A method and a device for operating a robot are provided. According to an example of the method, information of a first gesture is acquired from a group of gestures of an operator, each gesture from the group of gestures corresponding to an operation instruction from a group of operation instructions. A first operation instruction from the group of operation instructions is obtained based on the acquired information of the first gesture, the first operation instruction corresponding to the first gesture. The first operation instruction is executed.
Type: Grant
Filed: June 20, 2017
Date of Patent: July 14, 2020
Assignee: Beijing Airlango Technology Co., Ltd.
Inventors: Yinian Mao, Xinmin Liu
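The gesture-to-instruction correspondence described above amounts to a lookup from a recognized gesture to its operation instruction. A minimal sketch; the gesture names and instruction strings below are invented for illustration, not taken from the patent.

```python
# Hypothetical table: each gesture in the group maps to one operation instruction.
GESTURE_INSTRUCTIONS = {
    "palm_open": "hover",
    "fist": "land",
    "swipe_left": "turn_left",
}

def execute_gesture(gesture):
    """Obtain and return the operation instruction corresponding to a
    recognized gesture; reject gestures outside the known group."""
    instruction = GESTURE_INSTRUCTIONS.get(gesture)
    if instruction is None:
        raise ValueError(f"unrecognized gesture: {gesture}")
    return instruction

result = execute_gesture("fist")
```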
-
Patent number: 10682764
Abstract: A robotic system includes a robot having an associated workspace; a vision sensor constructed to obtain a 3D image of a robot scene including a workpiece located in the workspace; and a control system communicatively coupled to the vision sensor and to the robot. The control system is configured to execute program instructions to filter the image by segmenting the image into a first image portion containing substantially only a region of interest within the robot scene, and a second image portion containing the balance of the robot scene outside the region of interest; and by storing image data associated with the first image portion. The control system is operative to control movement of the robot to perform work on the workpiece based on the image data associated with the first image portion.
Type: Grant
Filed: July 30, 2017
Date of Patent: June 16, 2020
Assignee: ABB Schweiz AG
Inventors: Remus Boca, Thomas A. Fuhlbrigge
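The segmentation step this abstract describes — splitting a 3D image into a region-of-interest portion and the balance of the scene — can be sketched as cropping a point cloud to an axis-aligned box. A toy illustration only; the scene points and box bounds are hypothetical, and a real system would define the region of interest around the workpiece pose.

```python
import numpy as np

def filter_region_of_interest(points, roi_min, roi_max):
    """Split 3D scene points into those inside an axis-aligned region of
    interest (the first image portion, kept) and the balance outside it
    (the second image portion)."""
    inside = np.all((points >= roi_min) & (points <= roi_max), axis=1)
    return points[inside], points[~inside]

scene = np.array([
    [0.2, 0.3, 0.1],   # on the workpiece
    [0.5, 0.5, 0.2],   # on the workpiece
    [2.0, 3.0, 0.0],   # background clutter outside the region of interest
])
workpiece_points, background_points = filter_region_of_interest(
    scene, roi_min=np.array([0.0, 0.0, 0.0]), roi_max=np.array([1.0, 1.0, 1.0]))
```

Only `workpiece_points` would then be stored and used to drive the robot's motion, as the abstract describes for the first image portion.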