Abstract: A method and device for the control and regulation of actuators of a robot, taking environmental contacts into consideration, wherein the robot comprises at least two parts connected by an articulated joint drivable by an actuator. The method comprises: ascertaining and storing, by way of a sensor system and as a function of time, a variable of one or more external contact forces and/or of one or more external moments on the parts; providing a condition for the variable; classifying a feature vector based on predefined categories, each of which indicates a contact type between one of the parts or the articulated joint and an object in a surrounding environment, each imparted by corresponding external contact forces and/or external contact moments, to generate a classification result; and open-loop and/or closed-loop control of the actuator as a function of the classification result.
Type:
Grant
Filed:
May 10, 2016
Date of Patent:
July 14, 2020
Assignee:
Cavos Bagatelle Verwaltungs GmbH & Co. KG
Abstract: Techniques are provided for discovery and monitoring of an environment using a plurality of robots. A plurality of robots navigate an environment by determining a navigation buffer for each of the robots and allowing each of the robots to navigate within the environment while maintaining a substantially minimum distance from other robots, wherein the substantially minimum distance corresponds to the navigation buffer, and wherein a size of each of the navigation buffers is reduced over time based on a percentage of the environment that remains to be navigated. The robots can also navigate an environment by obtaining a discretization of the environment into a plurality of discrete regions and determining a next unvisited discrete region for one of the plurality of robots to explore in the environment using a breadth-first search. The plurality of discrete regions can be, for example, a plurality of real or virtual tiles.
Type:
Grant
Filed:
January 11, 2017
Date of Patent:
July 7, 2020
Assignee:
Daedalus Blue LLC
Inventors:
Shang Q. Guo, Canturk Isci, Jonathan Lenchner, Maharaj Mukherjee
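The two navigation strategies in the abstract above can be sketched in a few lines. This is an illustrative reading, not the patented implementation: the linear shrink schedule and the function names (`navigation_buffer`, `next_unvisited_region`) are assumptions; the patent states only that the buffer shrinks based on the percentage left to navigate and that the next tile is found by breadth-first search.

```python
from collections import deque

def navigation_buffer(initial_buffer: float, min_buffer: float,
                      fraction_unexplored: float) -> float:
    """Shrink a robot's keep-apart buffer as exploration progresses.

    fraction_unexplored is 1.0 at the start and 0.0 when done; a linear
    schedule is one plausible choice (an assumption, not from the patent).
    """
    return min_buffer + (initial_buffer - min_buffer) * fraction_unexplored

def next_unvisited_region(start, adjacency, visited):
    """Breadth-first search over discrete regions (tiles) for the nearest
    region not yet visited; returns None if everything reachable is done."""
    queue, seen = deque([start]), {start}
    while queue:
        region = queue.popleft()
        if region not in visited:
            return region
        for neighbour in adjacency[region]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return None
```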
Abstract: A method includes receiving a torque limit for a motor, monitoring a torque output of the motor, determining an amplitude and a phase of a torque ripple of the torque output, and determining a compensated torque limit for the motor, the compensated torque limit including a first component at the torque limit and a second component at an adjusted torque limit.
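One way to read the two-component limit described above is that the limit is lowered by the ripple amplitude over the part of the cycle where the ripple adds to the output. A minimal sketch under that assumption; the sinusoidal ripple model and the function name are illustrative, not taken from the patent:

```python
import math

def compensated_torque_limit(torque_limit: float, ripple_amplitude: float,
                             ripple_phase: float, angle: float) -> float:
    """Return the limit in effect at a given motor angle.

    First component: the configured torque limit. Second component: an
    adjusted (reduced) limit applied where the ripple is additive, so that
    limit + ripple never exceeds the configured value.
    """
    ripple = ripple_amplitude * math.sin(angle + ripple_phase)
    adjusted = torque_limit - ripple_amplitude
    return adjusted if ripple > 0 else torque_limit
```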
Abstract: A robot system includes a robot main body, a memory part configured to store information for causing the robot main body to perform a given operation as saved operational information, a motion controller configured to control operation of the robot main body by using the saved operational information as automatic operational information for causing the robot main body to operate, and an operation correcting device configured to generate, when operated, manipulating information for correcting the operation of the robot main body during operation. The motion controller controls the robot main body to perform an operation corrected from the operation related to the automatic operational information in response to reception of the manipulating information while the robot main body is operating using the automatic operational information. The memory part is configured to store, as saved operational information, corrected operational information for causing the robot main body to perform the corrected operation, when the robot main body performs the corrected operation.
Abstract: An approach is provided for intelligent inspection and interaction between a vehicle and a drone. The approach, for example, involves retrieving vehicle specification data for the vehicle. The vehicle specification data identifies one or more sensors of the vehicle, one or more sensor locations on the vehicle corresponding to the one or more sensors, or a combination thereof. The approach also involves configuring the drone device to move from a docked location to the one or more sensor locations on the vehicle based on the vehicle specification data. The approach further involves initiating an inspection function, an interaction function, or a combination thereof between the drone device and the vehicle when the drone device is positioned in proximity to the one or more sensor locations.
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to improve robot object recognition. An example apparatus includes a visual object recognizer to obtain a visual identifier associated with a target object, and a recognizable object model generator to generate a model of the target object based on mapping an image of the target object to classifier information corresponding to the visual identifier.
Abstract: A system and method that performs iterative foreground detection and multi-object segmentation in an image is disclosed herein. A new background prior is introduced to improve the foreground segmentation results. Three complementary methods detect and segment foregrounds containing multiple objects. The first method performs an iterative segmentation of the image to pull out the salient objects in the image. In a second method, a higher dimensional embedding of the image graph is used to estimate the saliency score and extract multiple salient objects. A third method uses a metric to automatically pick the number of eigenvectors to consider in an alternative method to iteratively compute the image saliency map. Experimental results show that these methods succeed in accurately extracting multiple foreground objects from an image.
Type:
Grant
Filed:
December 19, 2017
Date of Patent:
July 7, 2020
Assignee:
KODAK ALARIS INC.
Inventors:
Alexander Loui, David Kloosterman, Michal Kucer, Nathan Cahill, David Messinger
Abstract: A robot system including a robot and a controller, the controller configured to conduct: a region generating process that generates a robot inclusion region, which includes the robot and whose area increases as the speed of the robot increases, an entry prohibited region near the robot, and a speed limit region along the robot-side edge of the entry prohibited region; an entry detecting process that detects whether or not the generated robot inclusion region enters the entry prohibited region or the speed limit region; a speed limiting process that reduces the operating speed of the robot if the robot inclusion region enters the speed limit region; and a power cutoff unit that immediately stops the robot if the robot inclusion region enters the entry prohibited region.
Abstract: A moving robot system includes a moving robot which travels in a cleaning area, generates a map of the cleaning area, moves based on the map, and sucks up foreign substances; and a terminal which inputs a cleaning command to the moving robot based on the map. The moving robot determines that it is in an isolation state when it is unable to move due to a surrounding obstacle while traveling in the cleaning area, sets a virtual wall at the isolation position where the isolation occurred, and avoids the isolation position during the next cleaning operation.
Type:
Grant
Filed:
July 12, 2018
Date of Patent:
June 30, 2020
Assignee:
LG ELECTRONICS INC.
Inventors:
Jiwoong Kim, Hanmin Jo, Sunhee Cheon, Minwoo Hong
Abstract: A method for operating a floor processing device that moves automatically within an environment, wherein a detection system of the floor processing device detects features of a surface to be cleaned and compares them with reference features of carpets. Upon detection of a carpet, it is determined whether and where the carpet has fringes, and the fringes are aligned in a defined direction relative to the carpet by means of a combing attachment of the floor processing device.
Type:
Grant
Filed:
June 6, 2018
Date of Patent:
June 30, 2020
Assignee:
Vorwerk & Co. Interholding GmbH
Inventors:
David Erkek, Georg Hackert, Gerhard Isenberg, Roman Ortmann, Andreas Schmidt
Abstract: A management device that controls an autonomous cleaner includes a processing circuitry that obtains an electronic floor plan of a predetermined space, the predetermined space being a cleaning range of the autonomous cleaner. The management device estimates a person-present position at which a person is present in the predetermined space, and divides the predetermined space into a plurality of cleaning target areas to be individually cleaned by the autonomous cleaner. The management device further identifies a cleaning target area including the person-present position where the person is estimated to be present, as a person-present cleaning target area. The management device further identifies a cleaning target area around the person-present cleaning target area as a surrounding cleaning target area. The management device determines, among the multiple cleaning target areas, cleaning target areas other than the person-present cleaning target area and the surrounding cleaning target area to be cleanable areas.
Type:
Grant
Filed:
February 15, 2018
Date of Patent:
June 30, 2020
Assignee:
PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Abstract: The disclosure provides systems and methods for mitigating slip of a robot appendage. In one aspect, a method for mitigating slip of a robot appendage includes (i) receiving an input from one or more sensors, (ii) determining, based on the received input, an appendage position of the robot appendage, (iii) determining a filter position for the robot appendage, (iv) determining a distance between the appendage position and the filter position, (v) determining, based on the distance, a force to apply to the robot appendage, (vi) causing one or more actuators to apply the force to the robot appendage, (vii) determining whether the distance is greater than a threshold distance, and (viii) responsive to determining that the distance is greater than the threshold distance, the control system adjusting the filter position to a position, which is the threshold distance from the appendage position, for use in a next iteration.
Type:
Grant
Filed:
February 21, 2018
Date of Patent:
June 23, 2020
Assignee:
Boston Dynamics, Inc.
Inventors:
Stephen Berard, Alex Yu Khripin, Benjamin Swilling
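Steps (iii) through (viii) of the slip-mitigation abstract above amount to a spring-like filter that trails the appendage and is re-anchored when it lags too far behind. A minimal single-iteration sketch, assuming a one-dimensional position and a linear stiffness (both illustrative assumptions, not from the patent):

```python
def slip_mitigation_step(appendage_pos: float, filter_pos: float,
                         stiffness: float, threshold: float):
    """One controller iteration: compute a restoring force from the gap
    between appendage and filter positions, then clamp the filter position
    to within `threshold` of the appendage for the next iteration."""
    distance = appendage_pos - filter_pos
    force = -stiffness * distance          # force applied to the appendage
    if abs(distance) > threshold:
        # Re-anchor the filter exactly `threshold` away from the appendage.
        filter_pos = appendage_pos - threshold * (1 if distance > 0 else -1)
    return force, filter_pos
```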
Abstract: A computing system includes actuator control logic configured to generate and send control signals to an actuator of a mobile machine configured to drive direction and speed movement of a linkage on the mobile machine. The computing system also includes a control map generator system configured to receive sensor signals indicative of the direction and speed movement of the linkage on the mobile machine, and, based on the received sensor signals, generate a control mapping that maps the control signals to the direction and speed movement of the linkage of the mobile machine.
Abstract: The present invention relates to computerized ("smart") mobile electronic devices and, more particularly, to a system and methods for diagnosing and repairing malfunctions in smart mobile electronic devices, including a diagnostic process that makes decisions based on Big Data holding information on multiple devices and offers a "disable components" (i.e., turn off components) solution to overcome the problem without flashing firmware or performing a factory reset.
Abstract: An autonomous robot comprises a robot body, a drive configured to propel the robot, a sensor system disposed on the robot body, and a navigation controller circuit in communication with the drive and the sensor system. The sensor system comprises at least one proximity sensor comprising a sensor body, and a first emitter, a second emitter and a receiver housed by the sensor body, wherein the receiver detects objects in a bounded detection volume of the receiver field of view aimed outward and downward beyond a periphery of the robot body. The receiver is disposed above and between the first and second emitters, the emitters having twice-reshaped emission beams angled upward to intersect the receiver field of view at a fixed range of distances from the periphery of the robot body to define the bounded detection volume.
Type:
Grant
Filed:
February 7, 2018
Date of Patent:
June 9, 2020
Assignee:
iRobot Corporation
Inventors:
Tom Bushman, James Herman, Seth Blitzblau, Nathan J. Deschaine, Andrew Scott Reichel
Abstract: Systems, computer-implemented methods and/or computer program products that facilitate aviation engine inspection are provided. In one embodiment, a computer-implemented method comprises: generating, by a system operatively coupled to a processor, a digital grid and visual layer overlay on a raw video feed from borescope inspections; analyzing, by the system, the video feed and identifying frames that capture information about part damage and defects; and classifying, by the system, a type of part defect, determining a location of the defect, and learning the digital grid.
Type:
Grant
Filed:
March 22, 2018
Date of Patent:
June 9, 2020
Assignee:
General Electric Company
Inventors:
Karunamay Pathak, Mohsin Khan, Aditya Bhakta, Rebinth Robin
Abstract: An interference region setting apparatus capable of setting an interference region in a coordinate system of a mobile robot with an inexpensive configuration and little effort. The apparatus has: a shape model storage section configured to store a shape, a position, and an orientation of an obstruction present in a work region of the mobile robot as an obstruction shape model, in a reference coordinate system; a position and orientation calculation section configured to analyze an image, captured by an image capturing apparatus, of a shape feature at a fixed position within the work region, and calculate the position and orientation of the reference coordinate system represented in a robot coordinate system; and an interference region setting section configured to set the interference region based on the position and orientation of the reference coordinate system converted into the robot coordinate system and the stored obstruction shape model.
Abstract: Aspects of the present invention disclose a method, computer program product, and system for identifying a robotic device. The method includes receiving an authentication request for an unknown robotic device asserting to be a first robotic device. The method further includes receiving a first identification dataset for the first robotic device. The method further includes issuing an identification action to the unknown robotic device. The method further includes generating a second identification dataset for the unknown robotic device based upon a response to the identification action received from the unknown robotic device. The method further includes, in response to determining that the first identification dataset matches the second identification dataset, determining that the unknown robotic device is the first robotic device. The method further includes authenticating the unknown robotic device in response to determining that the unknown robotic device is the first robotic device.
Type:
Grant
Filed:
June 14, 2018
Date of Patent:
June 9, 2020
Assignee:
International Business Machines Corporation
Inventors:
Todd R. Whitman, Aaron K. Baughman, David Bastian, Nicholas McCrory
Abstract: A method of controlling flight of an unmanned aerial vehicle (UAV) includes collecting, while the UAV traverses a flight path, a first set of images corresponding to different fields of view of an environment around the UAV using multiple image capture devices. Each of the multiple image capture devices has one of the different fields of view. The method further includes extracting a first set of feature points from the first set of images, controlling the UAV to traverse a return path, and while the UAV traverses the return path, collecting a second set of images corresponding to the different fields of view using the multiple image capture devices and comparing the first set of feature points with a second set of feature points extracted from the second set of images.
Abstract: A machine learning device of a controller observes, as state variables expressing a current state of an environment, teaching position compensation amount data indicating a compensation amount of a teaching position in control of a robot according to the teaching position and data indicating a disturbance value of each of the motors of the robot in the control of the robot, and acquires determination data indicating an appropriateness determination result of the disturbance value of each of the motors of the robot in the control of the robot. Then, the machine learning device learns the compensation amount of the teaching position of the robot in association with the motor disturbance value data by using the observed state variables and the determination data.
Abstract: A robot arm apparatus according to the present disclosure includes: one or a plurality of joint units that join a plurality of links constituting a multi-link structure; an acquisition unit that acquires an on-screen enlargement factor of a subject imaged by an imaging unit attached to the multi-link structure; and a driving control unit that controls driving of the joint units based on a state of the joint units and the enlargement factor.
Type:
Grant
Filed:
May 31, 2019
Date of Patent:
June 2, 2020
Assignees:
SONY CORPORATION, SONY OLYMPUS MEDICAL SOLUTIONS INC.
Abstract: A method of controlling a machine through an interrupted operation of the machine is provided. The method includes retrieving, by a controller, an existing load signal indicative of an existing external load on the machine prior to the interrupted operation. The method includes calculating, by the controller, an available load capacity for the machine, defined as the difference between a maximum load capacity of the machine and the existing external load. Further, the method includes controlling, by the controller, the machine to execute a working operation corresponding to the available load capacity of the machine.
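The capacity calculation at the core of the method above is a simple difference. A minimal sketch; flooring the result at zero is an assumption, not stated in the abstract:

```python
def available_load_capacity(max_capacity: float, existing_load: float) -> float:
    """Remaining capacity = maximum rated capacity minus the external load
    already present when the operation was interrupted, floored at zero so
    an overloaded machine reports no spare capacity."""
    return max(0.0, max_capacity - existing_load)
```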
Abstract: A master interface device for remotely controlling at least one instrument mounted in an endoscope. The device includes at least one control handle device in the form of a subassembly able to be manipulated in the same degrees of freedom as the associated control instrument. The control device includes on the one hand a gripping and manoeuvring shaft, designed for gripping with the whole hand by the operator, and on the other hand a mounting bracket supporting said shaft. Translation and pivoting of the bracket controls respectively translation and rotation around itself of the associated instrument, and the pivoting of the shaft relative to the bracket controls the bending of the end of the instrument concerned.
Type:
Grant
Filed:
January 22, 2015
Date of Patent:
May 26, 2020
Assignees:
Universite De Strasbourg, Centre National De La Recherche Scientifique, Institut De Recherche Sur Les Cancers De L'Appare-
Inventors:
Michel De Mathelin, Florent Le Bastard, Florent Nageotte, Philippe Zanne, Lucile Zorn
Abstract: A cleaning robot includes a data acquisition unit that acquires actual sensor data by measuring a distance from a current position to an object to be measured; a local map acquisition unit that acquires a local map by scanning the vicinity of the current position based on an environmental map stored in advance; and a processor that determines coordinates of the current position for the local map by performing matching between the local map and the actual sensor data, and determines a traveling direction based on the current position by calculating a main segment angle of a line segment existing in the local map.
Type:
Grant
Filed:
July 6, 2016
Date of Patent:
May 26, 2020
Assignee:
SAMSUNG ELECTRONICS CO., LTD.
Inventors:
Soon Yong Park, No San Kwak, Kyung Shik Roh, Suk June Yoon, So Hee Lee, Min Yong Choi
Abstract: An augmentation module is described for an automated guided vehicle (AGV) deployed in a facility and including a control module for controlling a drive mechanism based on navigational data received from a navigation sensor. The module includes an inter-module communications interface connected to the control module; a memory; and a processor connected to the communications interface and the memory. The processor is configured to: obtain an operational command; generate control data to execute the operational command; convert the control data to simulated sensor data; and send the simulated sensor data to the control module.
Type:
Grant
Filed:
February 10, 2017
Date of Patent:
May 26, 2020
Assignee:
CLEARPATH ROBOTICS INC.
Inventors:
Ryan Christopher Gariepy, Andrew Dobson, Jesse Tebbs, Robert Dam, Roydyn Clayton
Abstract: A non-transitory storage medium stores an image transmission program executed on an in-vehicle device mounted in a vehicle having an imaging unit configured to image the surroundings of the vehicle to acquire an image and a communication unit configured to communicate with an information center. The image transmission program includes: an image acquisition step of causing the imaging unit to acquire an image relating to an intersection based on whether or not a mark target facing the intersection is prominent; and a transmission step of causing the communication unit to transmit the acquired image to the information center.
Abstract: A cleaning robot includes a top cover, a bottom cover provided below the top cover, traveling parts provided in the bottom cover, a suction module provided in the bottom cover to suck in foreign materials on the ground, a recessed part formed to be recessed inward between the top cover and the bottom cover, and a first sensor located in the recessed part.
Abstract: A system, a computer program product, and a method for controlling synthesized speech output on a voice-controlled device. One or more sensors detect whether one person or more than one person is within a first settable distance from the voice-controlled device. Next, a determination is made whether the audio input received is recognized as speech. In response to only one person being detected within the settable distance, the device begins outputting synthesized speech based on the audio input without waiting for an attention word to be recognized; otherwise, it waits for additional criteria before outputting synthesized speech based on the speech input. The additional criteria include determining that more than one person is detected and recognizing that the attention word is received before outputting synthesized speech based on the audio input.
Type:
Grant
Filed:
December 26, 2017
Date of Patent:
May 19, 2020
Assignee:
International Business Machines Corporation
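The gating logic in the abstract above reduces to a small decision function. A minimal sketch; the function name and parameters are illustrative assumptions:

```python
def should_respond(person_count: int, attention_word_heard: bool,
                   is_speech: bool) -> bool:
    """Decide whether to output synthesized speech.

    One person within the settable distance: respond immediately, no
    attention word required. More than one person: require the attention
    word first. Non-speech audio never triggers a response.
    """
    if not is_speech:
        return False
    if person_count == 1:
        return True
    return person_count > 1 and attention_word_heard
```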
Abstract: Systems and techniques for quickly and efficiently locating survivors in a disaster area are disclosed. In some embodiments, the system comprises a plurality of robots distributed into a disaster area where each robot comprises at least one processor. The at least one processor is configured to receive discover packets broadcast by one or more mobile devices located in the disaster area. Each discover packet comprises information indicating how many other mobile devices in the disaster area are associated with the broadcasting mobile device. The at least one processor is further configured to determine a path along which to move the robot responsive to receiving one or more of the discover packets. The path is determined based at least in part on the received one or more of the discover packets. The at least one processor is further configured to cause the robot to move according to the determined path.
Type:
Grant
Filed:
January 8, 2019
Date of Patent:
May 19, 2020
Assignee:
International Business Machines Corporation
Abstract: A numerical controller capable of reducing the labor of an operator in measuring a workpiece controls a machine tool equipped with an imaging device capable of outputting three-dimensional coordinates of a designated position in a captured image and a measuring instrument that measures a physical quantity relating to the shape of the installed workpiece.
Abstract: A cleaning robot includes a top cover, a bottom cover formed below the top cover and configured to move by external force, a fixed body provided in the bottom cover, a first opening formed in an upper portion of the bottom cover and a first sensor connected to the fixed body and externally exposed between the top cover and the bottom cover through the first opening.
Type:
Grant
Filed:
January 3, 2018
Date of Patent:
May 12, 2020
Assignee:
LG Electronics Inc.
Inventors:
Hanshin Kim, Kyuchun Choi, Jungmin Shim
Abstract: A crack analysis device includes a captured image acquiring unit, a crack detecting unit, and a crack ratio calculator. The captured image acquiring unit acquires a captured image which is obtained by imaging a road surface. The crack detecting unit detects cracks in the imaged road surface on the basis of the captured image. The crack ratio calculator calculates a crack ratio indicating a ratio of an area of the cracks to a predetermined area on the basis of the detected cracks.
Type:
Grant
Filed:
July 21, 2016
Date of Patent:
May 12, 2020
Assignees:
Kabushiki Kaisha Toshiba, Toshiba Infrastructure Systems & Solutions Corporation
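The crack ratio computed by the calculator in the abstract above is an area quotient. A minimal sketch assuming pixel counts and a percentage expression (both assumptions; the abstract says only "ratio of an area of the cracks to a predetermined area"):

```python
def crack_ratio(crack_pixels: int, region_pixels: int) -> float:
    """Ratio of detected crack area to the area of the evaluated road
    surface region, expressed here as a percentage."""
    if region_pixels == 0:
        raise ValueError("evaluation region must be non-empty")
    return 100.0 * crack_pixels / region_pixels
```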
Abstract: A robotic lawnmower (100) for movable operation within a work area (205) has a satellite navigation device (190), a landmark scanner (193) and a controller (110). The controller causes the robotic lawnmower (100) to movably operate within the work area (205) in a first operating mode, the first operating mode being based on positions determined from satellite signals received by the satellite navigation device (190). The controller determines that a position cannot be reliably determined based on satellite signals received by the satellite navigation device (190), and in response thereto causes the robotic lawnmower (100) to movably operate within the work area (205) in a second operating mode. In the second operating mode, the controller receives scanning information from said landmark scanner (193) and identifies at least one landmark based on the received scanning information and determines a landmark-based position estimate.
Abstract: An optical sensor system for determining trajectory of a car, the optical sensor system being mounted in a wheel arch of the car, includes: a plurality of optical sensors mounted in the wheel arch above a wheel, the optical sensors being located behind a plurality of clear casings that do not touch the wheel, for performing a plurality of counts corresponding to respectively capturing a plurality of images of the wheel according to an outer surface of the wheel evenly covered with wheel treads. The captured images are compared with a reference image to determine a 2D displacement of the wheel from its original position. This measured 2D displacement is converted into a distance the wheel travels along a path, and the wheel trajectory is determined by calculating a turning degree of the wheel according to a trigonometric manipulation of the captured 2D displacement.
Type:
Grant
Filed:
September 24, 2018
Date of Patent:
May 5, 2020
Assignee:
PixArt Imaging Inc.
Inventors:
Keen-Hun Leong, Boon-How Kok, Dennis Dominic Donald
Abstract: A light spot indication robot and a light spot indication method thereof are provided. The light spot indication robot comprises a robot body provided with a control module, a camera module, and a laser indication module (100), wherein the laser indication module (100) emits a laser beam, the camera module images to-be-indicated objects to form an image plane, and the laser beam and the to-be-indicated objects are projected onto the image plane to form a laser spot projection position and to-be-indicated object projection positions, respectively. The light spot indication robot is also provided with a signal input module. Based on the content shown on the image plane of the to-be-indicated objects imaged by the camera module and the input information of the signal input module, a target object among the to-be-indicated objects is determined.
Abstract: A numerical controller is provided with a speed feedforward gain correction unit configured to obtain an associated axis, which is subject to a varying load applied to a particular axis according to a coordinate value, and a correction coefficient of the particular axis corresponding to the current coordinate value of the associated axis, based on a correction coefficient storage unit, and correct a speed feedforward gain for speed feedforward control of the particular axis, and a motor control unit configured to control the particular axis based on the corrected speed feedforward gain.
Abstract: A service providing system includes a request receiving robot and a service providing robot. The request receiving robot includes a floating unit configured to float in air, a recognition unit configured to recognize a service providing request by a user, and a transmitter configured to transmit the recognized service providing request. The service providing robot includes a receiver configured to receive the service providing request transmitted by the request receiving robot, a moving unit configured to move the service providing robot to the user who makes the service providing request as a destination according to the received service providing request, and a service providing unit configured to provide a service to the user.
Abstract: Techniques for navigating semi-autonomous mobile robots are described. A semi-autonomous mobile robot moves within an environment to complete a task. A navigation server communicates with the robot and provides the robot information. The robot includes a navigation map of the environment, interaction information, and a security level. To complete the task, the robot transmits a route reservation request to the navigation server, the route reservation request including a priority for the task, a timeslot, and a route. The navigation server grants the route reservation if the task priority is higher than the task priorities of conflicting route reservation requests from other robots. As the robot moves within the environment, the robot detects an object and attempts to classify the detected object as belonging to an object category. The robot retrieves an interaction profile for the object, and interacts with the object according to the retrieved interaction profile.
Abstract: Methods, robots, systems, and computer-readable media are provided for selectively uploading operational data generated by a robot to a remote computing system. In various implementations, a robot may classify a plurality of operational data points generated by the robot with a plurality of operational data types. The robot may also identify one or more attributes of a physical communication link between the robot and a remote computing system. Based on the one or more attributes of the physical communication link, the robot may identify a plurality of strategies for uploading operational data from the robot to the remote computing system. Each strategy may govern how operational data points of at least one of the plurality of operational data types is uploaded. The robot may then selectively upload the plurality of classified operational data points to the remote computing system pursuant to the plurality of strategies.
Abstract: A method for controlling a manipulator includes releasing the manipulator in reaction to a release request by an operator, wherein recognition of the release request involves monitoring the variation over time of a measured value that is characteristic of a state of the manipulator. This increases the robustness of the release request recognition.
Abstract: An example virtual presence system includes at least a first and second mobile device in electronic communication with one another, a panoramic camera coupled to the second mobile device. A user of the first mobile device can specify a position (e.g., location or orientation), such as via a joystick or head-mounted display. The specified position is transmitted to the second mobile device, which selectively obtains image data from the panoramic camera corresponding to the specified position. The selected image data is transmitted as a video stream or other format to the first mobile device, thereby allowing the user of the first mobile device to view surrounding environment available at the second mobile device.
Abstract: A motion control method for a robot is disclosed. The robot includes a determining module, a merging module, and a controlling module. The determining module determines whether at least two motion tasks executed in an adjacent sequence satisfy a merging condition. The merging module merges the at least two motion tasks to a new motion task, when the merging condition is satisfied. The controlling module controls the robot to perform the new motion task.
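The task-merging abstract above could be sketched as follows. This is an illustrative guess, not the patented method: the assumed merging condition is that one task ends where the next begins, so two adjacent motions can be joined into a single continuous one.

```python
# Hypothetical sketch of merging two adjacent motion tasks into one new task.
# Tasks are plain dicts with "start", "end", and intermediate "waypoints".

def can_merge(task_a, task_b, tol=1e-6):
    """Assumed merging condition: task_b starts where task_a ends."""
    return all(abs(a - b) <= tol for a, b in zip(task_a["end"], task_b["start"]))

def merge_tasks(task_a, task_b):
    """Merge two adjacent tasks into one new motion task, or return None."""
    if not can_merge(task_a, task_b):
        return None
    return {
        "start": task_a["start"],
        "end": task_b["end"],
        "waypoints": task_a["waypoints"] + task_b["waypoints"],
    }
```

A controller would then execute the merged task in place of the original two, avoiding a stop at the shared endpoint.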
Abstract: In order to make learning efficient for a robot, a robot operation device is provided with operation information input units for generating operation information specifying a state of a robot on the basis of an operation by an operator, a control unit for controlling the robot on the basis of the operation information, a non-operation information collecting unit for collecting non-operation information which is information that relates to the operator and does not affect the state of the robot, an action analysis unit for estimating the state of the operator on the basis of the non-operation information, and an action learning unit for learning the operation of the operator on the basis of the operation information and the estimation result of the action analysis unit.
Abstract: According to one embodiment, a computer-implemented method for remotely controlling a capture of images of a data storage library during operation thereof with a bracket on an accessor includes pairing a remote controller to a wireless image capture device that is coupled to a bracket mounted on an accessor, instructing the wireless image capture device to start recording images of the data storage library during operation thereof, including movement of the accessor, and thereafter, instructing the wireless image capture device to stop recording images of the data storage library during operation thereof.
Type:
Grant
Filed:
August 27, 2018
Date of Patent:
April 14, 2020
Assignee:
International Business Machines Corporation
Abstract: A system for operating a legacy software application is presented. The system includes a distributed processing service. A wrapper software object is configured both to receive processing requests to a legacy software application from outside the distributed processing service and to send the processing requests using the distributed processing service. Additionally, an encapsulated software object includes the legacy software application and an exoskeleton connection service. The exoskeleton connection service is both configured to accept processing requests from the distributed processing service, and mapped to an application programming interface of the legacy software application.
Type:
Grant
Filed:
January 30, 2017
Date of Patent:
April 14, 2020
Assignee:
UST Global (Singapore) Pte. Ltd.
Inventors:
Douglas Wiley Bachelor, Raul Hugo Curbelo, Elizabeth Winters Elkins, Christie Patrick McGrath, Simon Byford Moss, Thomas C. Fountain
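The wrapper/exoskeleton pattern described in the legacy-software abstract above can be sketched in miniature. All class names are invented for illustration; a queue stands in for the distributed processing service, and the exoskeleton maps incoming requests onto the legacy application's existing API.

```python
# Hedged sketch of the wrapper/exoskeleton pattern: outside requests enter via
# a wrapper, travel through a "service" (here just a queue), and are mapped by
# an exoskeleton onto the legacy application's API.
import queue

class LegacyApp:
    """Stand-in legacy application with a fixed, pre-existing API."""
    def compute_total(self, items):
        return sum(items)

class Exoskeleton:
    """Accepts requests from the service and maps them to the legacy API."""
    def __init__(self, legacy):
        self._dispatch = {"total": legacy.compute_total}

    def handle(self, request):
        op, args = request
        return self._dispatch[op](args)

class Wrapper:
    """Receives outside requests and forwards them through the service."""
    def __init__(self, service, exoskeleton):
        self.service, self.exoskeleton = service, exoskeleton

    def submit(self, request):
        self.service.put(request)                  # send via the "service"
        return self.exoskeleton.handle(self.service.get())
```

The key design point the abstract describes is that the legacy application itself is untouched; only the exoskeleton knows its API, and everything else talks to the service.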
Abstract: A robot system is provided with a robot control device that includes an operation control unit and a learning control unit. The learning control unit performs a learning control in which a vibration correction amount for correcting a vibration generated at a control target portion of a robot is calculated, and the vibration correction amount is applied to the operation command at the next time. The learning control unit includes a plurality of learning control parts for calculating the vibration correction amount and a selection unit that selects one of the plurality of learning control parts on the basis of operation information of the robot when the robot is operated by an operation program that is a target of the learning control.
Abstract: An interactive mobile robot system and a method for creating an invisible track for a mobile robot. The system and method allow the creation of invisible tracks by guiding objects. They also allow the use of such invisible tracks for the semi-autonomous or autonomous control of toys, including model cars and model trains, or of mobile robots to move them along a real-world path. The system includes a mobile robot, receiver circuitry to receive one or more position signals, and processing circuitry. The processing circuitry is configured to determine position information associated with the mobile robot based on the one or more position signals. The processing circuitry is further configured to create an invisible track based on the position information, to determine a current position of the mobile robot, and to generate control signals based on the current position of the mobile robot and the invisible track.
Type:
Grant
Filed:
August 18, 2015
Date of Patent:
April 7, 2020
Assignee:
Verity Studios AG
Inventors:
Raffaello D'Andrea, Markus Waibel, Mark Mueller, Markus Hehn
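The invisible-track abstract above lends itself to a small sketch. This is a minimal illustration under assumed names, not the patented implementation: positions recorded while a guiding object moves form the track, and a control signal later steers the robot toward the next track point.

```python
# Hypothetical sketch: build an "invisible track" from recorded positions,
# then generate a steering direction from the robot's current position.
import math

def create_track(position_samples, min_spacing=0.1):
    """Keep recorded positions at least min_spacing apart to form the track."""
    track = [position_samples[0]]
    for p in position_samples[1:]:
        if math.dist(p, track[-1]) >= min_spacing:
            track.append(p)
    return track

def control_signal(current, track):
    """Unit-vector heading from the current position toward the track."""
    nearest = min(track, key=lambda p: math.dist(current, p))
    i = track.index(nearest)
    if i + 1 < len(track):
        nearest = track[i + 1]  # aim one point ahead to progress along the track
    dx, dy = nearest[0] - current[0], nearest[1] - current[1]
    norm = math.hypot(dx, dy) or 1.0
    return (dx / norm, dy / norm)
```

A semi-autonomous controller would call `control_signal` each cycle with the robot's position estimated from the received position signals.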
Abstract: The present teachings relate generally to a small remote vehicle having rotatable flippers and a weight of less than about 10 pounds, and that can climb conventional-sized stairs. The present teachings also relate to a small remote vehicle that can be thrown or dropped fifteen feet onto a hard/inelastic surface without incurring structural damage that may impede its mission. The present teachings further relate to a small remote vehicle having a weight of less than about 10 pounds and a power source supporting missions of at least 6 hours.
Type:
Grant
Filed:
December 19, 2016
Date of Patent:
April 7, 2020
Assignee:
FLIR Detection, Inc.
Inventors:
Pavlo E. Rudakevych, Garran M. Gossage, Christopher L. Morey, Todd M. Meaney, Timothy R. Ohm, Adam Wozniak
Abstract: An inspection device includes: a line-of-sight information acquisition unit that acquires line-of-sight information including a starting point of a line of sight, a line-of-sight direction, and an observation range of an inspector during visual inspection; a target object information acquisition unit that acquires target object information including a position, an attitude, and a shape of the inspection target object during the visual inspection; and a program generation unit that specifies an observation position of the inspection target object observed by the inspector during the visual inspection as an inspection position based on the line-of-sight information and the target object information, captures an image of the specified inspection position of the inspection target object using the inspection imaging device, and generates an inspection execution program for performing inspection of the inspection target object based on the captured image of the inspection position of the inspection target object.
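The inspection-device abstract above can be illustrated with a simplified sketch. Everything here is an assumption for illustration (the object is reduced to a flat plane and a "program" to a list of capture steps): the inspector's gaze ray is intersected with the object to obtain an inspection position, and the positions are collected into an inspection program.

```python
# Hypothetical sketch: derive inspection positions from line-of-sight rays and
# collect them into a simple list-of-steps inspection "program".

def observed_point(gaze_origin, gaze_dir, object_plane_z):
    """Intersect the line of sight with a horizontal plane holding the object."""
    ox, oy, oz = gaze_origin
    dx, dy, dz = gaze_dir
    t = (object_plane_z - oz) / dz  # ray parameter at the plane
    return (ox + t * dx, oy + t * dy, object_plane_z)

def generate_inspection_program(gazes, object_plane_z=0.0):
    """One image-capture step per position the inspector observed."""
    return [("capture_image", observed_point(origin, direction, object_plane_z))
            for origin, direction in gazes]
```

A real system would use the full pose and shape of the target object rather than a plane, but the flow is the same: gaze information in, inspection positions out, one capture step per position.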