Abstract: A service robot may be autonomous with respect to one portion of a customer service task and coordinated with respect to another portion. A resource, such as another robot or an agent (human or automated), may monitor or interact with the robot and, in such a combination, perform a customer service task. The robot may be instructed to pause or delay initiation of a robot portion so that a resource becomes available at the time the interaction portion is to be performed, minimizing delay and promoting better customer service. Should the delay exceed an acceptable threshold, the robot may engage in a delay task (e.g., slow down, pause, etc.). The delay task may include a social interaction with a human at a service location.
Type:
Grant
Filed:
May 30, 2018
Date of Patent:
September 14, 2021
Assignee:
Avaya Inc.
Inventors:
David Skiba, Valentine C. Matula, George Erhart
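The pause/delay logic described in the abstract above can be sketched as a small scheduling function: start the robot portion so it completes just as the resource becomes available, and fall back to a delay task when the wait exceeds a threshold. All names, units, and values below are illustrative assumptions, not the patented implementation.

```python
def plan_robot_start(resource_available_at, robot_portion_duration,
                     now, max_acceptable_delay):
    """Return (start_time, delay_task_needed) for the robot portion.

    Times are in seconds on a common clock (an assumption for this sketch).
    """
    # Start so the robot portion completes when the resource frees up.
    ideal_start = resource_available_at - robot_portion_duration
    delay = max(0.0, ideal_start - now)
    # If the wait exceeds the threshold, engage a delay task (slow down,
    # pause, or a social interaction at the service location).
    return max(now, ideal_start), delay > max_acceptable_delay

start, needs_delay_task = plan_robot_start(
    resource_available_at=120.0, robot_portion_duration=30.0,
    now=0.0, max_acceptable_delay=60.0)
print(start, needs_delay_task)  # 90.0 True
```

If the resource is already (or nearly) available, the function starts the robot portion immediately and no delay task is needed.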
Abstract: A system and method that performs iterative foreground detection and multi-object segmentation in an image is disclosed herein. A new background prior is introduced to improve the foreground segmentation results. Three complementary methods detect and segment foregrounds containing multiple objects. The first method performs an iterative segmentation of the image to pull out the salient objects in the image. In a second method, a higher dimensional embedding of the image graph is used to estimate the saliency score and extract multiple salient objects. A third method uses a metric to automatically pick the number of eigenvectors to consider in an alternative method to iteratively compute the image saliency map. Experimental results show that these methods succeed in accurately extracting multiple foreground objects from an image.
Type:
Grant
Filed:
May 21, 2020
Date of Patent:
September 14, 2021
Assignee:
KODAK ALARIS INC.
Inventors:
Alexander C. Loui, David Kloosterman, Michal Kucer, Nathan Cahill, David Messinger
Abstract: A method of controlling a moving-robot includes creating a plurality of maps having different generation-time information through a plurality of driving processes; choosing any one of the plurality of maps according to a certain map choosing algorithm based on current time information and the generation-time information, and attempting location recognition based on the chosen map.
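A map-choosing algorithm of the kind the abstract above describes can be sketched as picking the stored map whose generation time-of-day is closest to the current time-of-day (lighting at 9 AM differs from 9 PM, so the closest-in-time map should match best for location recognition). Using a wrap-around distance on a 24-hour clock is an assumption of this sketch.

```python
def choose_map(maps, current_hour):
    """maps: list of (map_id, generation_hour); return the best map_id."""
    def clock_distance(a, b):
        # Distance on a 24-hour clock, wrapping around midnight.
        d = abs(a - b) % 24
        return min(d, 24 - d)
    return min(maps, key=lambda m: clock_distance(m[1], current_hour))[0]

maps = [("morning", 9), ("evening", 21), ("noon", 13)]
print(choose_map(maps, current_hour=22))  # evening
```

If location recognition fails with the chosen map, the method could fall back to the next-closest map, which this sketch does not show.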
Abstract: A caption of a multimodal message (e.g., a social media post) can be identified as containing a named entity using an entity recognition system. The entity recognition system can use an attention-based mechanism that emphasizes or de-emphasizes each data type (e.g., image, word, character) in the multimodal message based on each data type's relevance. The output of the attention mechanism can be used to update a recurrent network to identify one or more words in the caption as being a named entity.
Type:
Grant
Filed:
September 7, 2018
Date of Patent:
September 14, 2021
Assignee:
Snap Inc.
Inventors:
Vitor Rocha de Carvalho, Leonardo Ribas Machado das Neves, Seungwhan Moon
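The modality-attention idea in the Snap abstract above can be sketched as a softmax over per-modality relevance scores, producing weights that emphasize relevant data types and de-emphasize irrelevant ones before feature fusion. The scores, feature vectors, and fusion rule below are placeholder assumptions, not the actual model.

```python
import math

def modality_attention(scores):
    """scores: dict modality -> relevance score; returns softmax weights."""
    mx = max(scores.values())  # subtract max for numerical stability
    exp = {m: math.exp(s - mx) for m, s in scores.items()}
    total = sum(exp.values())
    return {m: e / total for m, e in exp.items()}

def fuse(features, scores):
    """Weighted sum of per-modality feature vectors (illustrative fusion)."""
    w = modality_attention(scores)
    dim = len(next(iter(features.values())))
    return [sum(w[m] * features[m][i] for m in features) for i in range(dim)]

feats = {"image": [1.0, 0.0], "word": [0.0, 1.0]}
fused = fuse(feats, {"image": 2.0, "word": 0.0})
print(round(fused[0], 3))  # 0.881
```

In the full system, the fused vector would feed a recurrent network that tags caption words as named entities.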
Abstract: An electronic device is provided, including a processor, a communications interface coupled to the processor, a memory coupled to the processor, and a module saved in the memory. The module configures the processor to receive, via the communications interface, a first communications packet from a remote device, the packet including information useful for estimating a clock offset of the remote device, and to determine an upper bound of the clock offset of the remote device with respect to the electronic device based on the information.
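The bound in the abstract above can be illustrated with the standard one-way timestamp argument: if network delay is non-negative, a remote send timestamp and a local receive timestamp bound the offset between the two clocks. Defining the offset as local minus remote is an assumption of this sketch.

```python
def offset_upper_bound(t_send_remote, t_recv_local):
    """Upper bound on offset = (local clock - remote clock).

    The packet arrives at local time
        t_recv_local = t_send_remote + offset + delay,  delay >= 0,
    so offset <= t_recv_local - t_send_remote.
    """
    return t_recv_local - t_send_remote

# Remote stamps the packet at 100.0 (its clock); we receive at 103.5 (ours).
print(offset_upper_bound(100.0, 103.5))  # 3.5
```

A packet sent in the other direction would give a lower bound, and the two together bracket the true offset, as in NTP-style estimation.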
Abstract: Methods and systems for decision making in an autonomous vehicle (AV) are described. A vehicle control system may include a control unit, a perception unit, and a behavioral planning unit. The behavioral planning unit may include an intent estimator that receives a first set of perception information from the perception unit. The behavioral planning unit may include a motion predictor that receives the first set of perception information from the perception unit. The behavioral planning unit may include a function approximator that receives a second set of perception information from the perception unit. The second set of perception information is smaller than the first set of perception information. The function approximator determines a prediction, and the control unit uses the prediction to control an operation of the AV.
Abstract: A method for controlling the motion of one or more collaborative robots is described, the collaborative robots being mounted on a fixed or movable base, equipped with one or more terminal members, and with a motion controller, the method including the following iterative steps: —determining the position coordinates of the robots, and the position coordinates of one or more human operators collaborating with the robot; —determining a set of productivity indices associated with relative directions of motion of the terminal member of the robot, the productivity indices being indicative of the speed at which the robot can move in each of the directions without having to slow down or stop because of the presence of the operator; —supplying the controller of the robot with the data of the set of productivity indices associated with the relative directions of motion of the terminal member of the robot, so that the controller can determine the directions of motion of the terminal member of the robot based on the high
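The controller's use of the productivity indices described above can be sketched as choosing, among candidate motion directions for the terminal member, the direction with the highest index, i.e. the direction in which the robot can move fastest without slowing or stopping for the nearby operator. The direction names and index values are illustrative assumptions.

```python
def pick_direction(productivity):
    """productivity: dict direction -> index in [0, 1]; pick the best.

    A higher index means the robot can move faster in that direction
    without having to slow down or stop because of the operator.
    """
    return max(productivity, key=productivity.get)

indices = {"toward_operator": 0.2, "away": 0.9, "lateral": 0.6}
print(pick_direction(indices))  # away
```

In practice the controller would trade the index off against task progress rather than follow it blindly; this sketch shows only the index lookup.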
Abstract: Systems are configured for performing GPS-based and sensor-based relocalization. During the relocalization, the systems are configured to obtain radio-based positioning data indicating an estimated position of the system within a mapped environment. The systems are also configured to identify, based on the estimated position, a subset of keyframes of a map of the mapped environment, wherein the map of the mapped environment includes a plurality of keyframes captured from a plurality of locations within the mapped environment, and the plurality of keyframes are associated with anchor points identified within the mapped environment. The systems are further configured to perform relocalization within the mapped environment based on the subset of keyframes.
Type:
Grant
Filed:
September 11, 2020
Date of Patent:
September 7, 2021
Assignee:
MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors:
Raymond Kirk Price, Michael Bleyer, Christopher Douglas Edmonds
Abstract: This invention provides a vision system with an exchangeable illumination assembly that allows for increased versatility in the type and configuration of illumination supplied to the system without altering the underlying optics, sensor, vision processor, or the associated housing. The vision system housing includes a front plate that optionally includes a plurality of mounting bases for accepting different types of lenses, and a connector that allows removable interconnection with the illustrative illumination assembly. The illumination assembly includes a cover that is light transmissive. The cover encloses an illumination component that can include a plurality of lighting elements that surround an aperture through which received light rays from the imaged scene pass to the lens. The arrangement of lighting elements is highly variable, and the user can be supplied with an illumination assembly that best suits his or her needs without the need to change the vision system processor, sensor, or housing.
Abstract: Techniques for transferring highly dimensional movements to lower dimensional robot movements are described. In an example, a reference motion of a target is used to train a non-linear approximator of a robot to learn how to perform the motion. The robot and the target are associated with a robot model and a target model, respectively. Features related to the positions of the robot joints are input to the non-linear approximator. During the training, a robot joint is simulated, which results in movement of this joint and different directions of a robot link connected thereto. The robot link is mapped to a link of the target model. The directions of the robot link are compared to the direction of the target link to learn the best movement of the robot joint. The training is repeated for the different links and for different phases of the reference motion.
Abstract: A vehicle control device includes a recognizer (130) that recognizes a situation near an own-vehicle, and a driving controller (140, 160) that controls one or both of steering or acceleration/deceleration of the own-vehicle on the basis of a recognition result of the recognizer, wherein the driving controller does not perform determination of an operation mode of control of the acceleration/deceleration when the recognizer has recognized that an occupant is riding in the own-vehicle, and determines the operation mode of control of the acceleration/deceleration on the basis of a state of another vehicle present near the own-vehicle recognized by the recognizer when the recognizer has recognized that no occupant is riding in the own-vehicle.
Abstract: A control method of an automatic working system comprises the following steps: a signal generating device generates a boundary signal; the boundary signal flows through a boundary wire to generate an electromagnetic field; a detecting device on an automatic moving device detects the electromagnetic field to generate a detection signal, amplifies the detection signal to form a gain signal, and compares a feature point of the gain signal with a preset condition, the preset condition comprising: the feature point is lower than an upper threshold value and higher than a lower threshold value; the gain signal is then automatically adjusted according to the comparison result, such that the feature point of the gain signal formed after the adjustment accords with the preset condition, for further processing of the gain signal. An automatic working system and an automatic moving device are also disclosed.
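The gain-adjustment loop in the abstract above can be sketched as iteratively scaling the gain until the feature point of the gain signal falls between the lower and upper thresholds. The multiplicative step factor and iteration cap are assumptions of this sketch.

```python
def adjust_gain(feature_point, gain, lower, upper,
                step=1.25, max_iters=50):
    """Scale `gain` until gain * feature_point lies in (lower, upper)."""
    for _ in range(max_iters):
        value = gain * feature_point
        if value >= upper:      # feature point too high: reduce gain
            gain /= step
        elif value <= lower:    # feature point too low: increase gain
            gain *= step
        else:                   # preset condition met
            break
    return gain

g = adjust_gain(feature_point=0.2, gain=1.0, lower=1.0, upper=2.0)
print(1.0 < g * 0.2 < 2.0)  # True
```

A real detector would re-measure the feature point after each adjustment rather than assume it scales linearly with gain; the loop structure, however, is the same.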
Abstract: Provided herein is a continuously unmanned multi-phenomenon sensor system and a continuously unmanned multi-phenomenon sensor platform comprising a plurality of continuously unmanned multi-phenomenon sensor systems for maritime monitoring, that is capable of surveying and monitoring rivers, ports, bays, coastal regions, and the high seas to conduct search operations, monitor for dangerous cargoes, prevent drug and migrant smuggling, enforce fisheries laws, monitor environmental conditions and living marine resources, inspect marine infrastructures, provide navigational aids, and to investigate marine accidents.
Type:
Grant
Filed:
November 2, 2018
Date of Patent:
August 31, 2021
Assignee:
ThayerMahan, Inc.
Inventors:
Michael Joseph Connor, Richard Jude Hine
Abstract: A robot joint space point-to-point movement trajectory planning method. Joint space trajectory planning is performed according to the displacement of a robot from a start point to a target point during PTP movement and a limitation condition of a preset movement parameter physical quantity of each axis in a robot control system. An n-dimensional space is constructed by taking each axis of the robot as a vector, wherein n≥2, and the movement parameter physical quantity of each axis of the robot is verified according to a vector relationship between the n axes of the robot, so that the trajectory planning curve of each axis of the robot satisfies the limitation condition of the preset movement parameter physical quantity. The method requires a small amount of calculation and has strong real-time performance; the movement curves are smooth, the control time is optimal, and the algorithm executes efficiently.
Type:
Grant
Filed:
December 14, 2017
Date of Patent:
August 31, 2021
Assignee:
NANJING ESTUN ROBOTICS CO., LTD
Inventors:
Riyue Feng, Jihu Wang, Zhengxian Xia, Tingting Pan, Bo Wu, Shuyi Jing
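A simplified version of the per-axis limit verification in the Estun abstract can be sketched as follows: for a PTP move, every axis covers its displacement over a common duration T, and T is chosen just large enough that each axis's peak velocity and acceleration stay within its preset limits. Assuming a triangular velocity profile (peak velocity 2d/T, peak acceleration 4d/T²) is a simplification for this sketch.

```python
import math

def min_common_duration(displacements, v_max, a_max):
    """Shortest common duration T so every axis obeys its limits.

    For a triangular velocity profile over displacement d:
        peak velocity     = 2*d/T   -> T >= 2*d/v_max
        peak acceleration = 4*d/T^2 -> T >= sqrt(4*d/a_max)
    """
    T = 0.0
    for d, vm, am in zip(displacements, v_max, a_max):
        d = abs(d)
        T = max(T, 2.0 * d / vm, math.sqrt(4.0 * d / am))
    return T

# Two axes (degrees, deg/s, deg/s^2); axis 1 is the limiting one here.
T = min_common_duration([90.0, 30.0], v_max=[60.0, 60.0], a_max=[120.0, 120.0])
print(round(T, 3))  # 3.0
```

Synchronizing all axes to the slowest axis's duration is what keeps the planned curves within every axis's limitation condition at once.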
Abstract: Systems and methods for providing geometric interactions via three-dimensional mapping. A method includes determining a plurality of first descriptors for a plurality of key points in a plurality of first images, wherein each first image shows a portion of a 3D environment in which a robotic device and a visual sensor are deployed; generating a 3D map of the 3D environment based on the plurality of key points and the plurality of first descriptors; determining a pose of the visual sensor based on at least one second descriptor and the plurality of first descriptors, wherein the at least one second descriptor is determined for a second image captured by the visual sensor; and determining a target action location based on at least one user input made with respect to a display of the second image and the pose of the visual sensor, wherein the target action location is a location within the 3D environment.
Abstract: A control device includes a robot control section that controls a robot including a hand and a force detecting section; and an operation-mode switching section that switches, when storing a position and a posture of the robot, between a first mode for moving the robot by the robot control section until an external force applied to the hand satisfies a predetermined condition and a second mode for moving the robot by the robot control section on the basis of an external force applied to a first part included in the robot.
Abstract: A robotic surgical system has a user interface with a control arm that includes a passive axis system for maintaining degrees-of-freedom of a gimbal rotatably supported on the control arm as the gimbal is manipulated during a surgical procedure. The control arm includes a swivel member, a first member, and a second member. The swivel member is rotatable about a first axis. The first member is rotatably coupled to the swivel member about a second axis that is orthogonal to the first axis. The second member is rotatably coupled to the first member about a third axis that is parallel to the second axis. The gimbal is rotatably supported by the second member about a fourth axis that is orthogonal to the third axis. The passive axis system correlates rotation of the swivel member about the first axis with rotation of the gimbal about the fourth axis.
Abstract: Robotic customer service agents are provided such that, when properly authenticated, they are operable to perform a customer service task. A contact center may dispatch a robot, an accessory for a customer-owned robot, or instructions to transform an unconfigured robot, such as a generic robot, into a configured robot operable to perform the task. If the robot (such as the base or the entire robot) at the service location, an associated user, a hardware addition, and/or a software addition is authentic, then the robot may be operated in an authenticated mode. If not authenticated, the robot may operate in a non-authenticated mode, such as one in which one or more tasks or features are disabled. Additionally, authentication may be temporary (e.g., time restricted) or event restricted (e.g., as long as a result stays within a given range, the robot is being observed, etc.).
Type:
Grant
Filed:
March 31, 2016
Date of Patent:
August 17, 2021
Assignee:
Avaya Inc.
Inventors:
George Erhart, David Skiba, Valentine C. Matula
Abstract: Disclosed herein are a wheel assembly and a robot cleaner including a main body and a wheel assembly coupled to the main body to guide movement of the main body. The wheel assembly has a rotation arm including a first end portion rotatably mounted on the main body, a drive wheel rotatably installed on a second end portion of the rotation arm opposite the first end portion, and an elastic member including a first end installed at the main body and a second end vertically movably installed at the rotation arm opposite the first end, such that a degree of reduction of a contact force due to a descent of the drive wheel is reduced.
Abstract: A method of operating a surgical tool includes mounting the surgical tool to a tool driver. The surgical tool includes one or more drive cables movable to actuate an end effector, and one or more segments are defined along a portion of at least one of the one or more drive cables and each segment exhibits a usage value. Usage of the drive cables is monitored with a computer system in communication with the tool driver, and the usage value of one or more of the segments is altered based on usage of the surgical tool.
Abstract: A method of robotic collaboration comprises designating a first robot a lead robot and assigning a first task in a task area to the lead robot. Broadcasting a work query in the task area seeks the presence of subordinate robots configured to perform tasks. Receiving a work confirmation signal from a subordinate robot in the task area answers the work query with an affirmation that the subordinate robot is in the task area to perform tasks. Transmitting a task command to the subordinate robot in response to the work confirmation signal comprises a directive to perform the first task. Receiving a task confirmation signal informs the lead robot of the subordinate robot's electronic characteristics, comprising processing capabilities, transmit signal profile, receive signal profile, and storage device capabilities. Processing confirms whether the subordinate robot can collaborate with the lead robot to do the first task.
Abstract: Provided is a speech processing method using an AI device. The speech processing method using an AI device according to an embodiment of the present invention includes receiving a speech command of a speaker, determining a recipient of the speech command by performing a speech recognition operation on the speech command, checking whether the recipient receives feedback corresponding to the speech command, and, if there is no feedback, obtaining positional information of the recipient, selecting a second AI device which is closest to the recipient based on pre-stored positional information of a plurality of AI devices in a specific space and the positional information of the recipient, and transmitting a notification message conveying the speech command to the second AI device. As a result, the speech command of the speaker can be successfully transmitted to a recipient even when the recipient does not receive the speech command of the speaker.
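The device-selection step in the abstract above can be sketched as a nearest-neighbor lookup: given pre-stored positions of the AI devices in a space and the recipient's position, pick the device closest to the recipient. Plain 2-D Euclidean distance and the device names are assumptions of this sketch.

```python
import math

def closest_device(devices, recipient):
    """devices: dict name -> (x, y); recipient: (x, y); return nearest name."""
    return min(devices, key=lambda n: math.dist(devices[n], recipient))

devices = {"kitchen": (0.0, 0.0), "bedroom": (8.0, 1.0), "lounge": (4.0, 5.0)}
print(closest_device(devices, recipient=(7.0, 2.0)))  # bedroom
```

The selected device would then receive the notification message carrying the original speech command.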
Abstract: The present embodiments relate to operator control in a master-slave robotic system, including, but not limited to, wherein an operator uses a master control input device to guide the position and orientation of a tool that is driven by the robotic system. Embodiments described herein provide devices and methods of increasing precision control for complex motions by, for example, decoupling operator control of rotational and translational movement.
Abstract: A support robot and control system for timely and efficiently providing at least one of a tool, a part, a component, an electrical supply, and an air supply within an assembly line operation. The support robot and system may include a detection unit for detecting a user and a motion planning unit for controlling the motion of the robot based on data received from the detection unit. The support robot may further include an input unit for receiving input from a user. The system may estimate work progress of a user via a work progress estimation unit. The work progress estimation unit may determine work progress based on at least an output from the detection unit and information obtained from a work progress database. The system may further include a robot location adjustment unit. The robot location adjustment unit may adjust the location of the robot based on information received from the motion planning unit.
Type:
Grant
Filed:
December 5, 2018
Date of Patent:
August 3, 2021
Assignee:
HONDA MOTOR CO., LTD.
Inventors:
Derrick Ian Cobb, Eric C. Baker, Richard Wolfgang Geary, David Bryan Betz
Abstract: A collision sensing device for a laser module accommodates the laser module by means of a shield body, a moving seat, and a base, and a scanning space is set between the shield body and the moving seat to provide a scanning environment for the laser module. When the shield body is collided, it may drive the moving seat to move backward, and a traction part disposed at a periphery of the moving seat drives one end of a linkage body to move backward. When the linkage body moves backward, it touches or presses a sensing part of a sensing element, and the sensing element may determine whether the shield body has been collided; if it is determined that the shield body has been collided, collision information may be transmitted to a control unit to drive a robot to move out of trouble.
Abstract: A robot apparatus includes a grasping section that grasps an object, a recognition section that recognizes a graspable part and a handing-over area part of the object, and a grasp planning section that plans a path of the grasping section for handing over the object to a recipient by the handing-over area part. The robot apparatus further includes a grasp control section that controls grasp operation of the object by the grasping section in accordance with the planned path.
Abstract: An interactive method is provided for an interactive terminal. The method includes obtaining visitor voice data of a visitor; and performing word segmentation on a text obtained by recognizing the visitor voice data, to obtain a feature word set. The method also includes determining, according to a topic generation model and the feature word set, a determined topic to which the text belongs; and separately obtaining appearing probabilities corresponding to feature words in the feature word set when the feature words belong to the determined topic. The method further includes selecting a feature word from the feature words of the feature word set according to the appearing probabilities corresponding to the feature words; and obtaining and outputting visitor interactive content corresponding to the selected feature word.
Type:
Grant
Filed:
August 6, 2019
Date of Patent:
July 20, 2021
Assignee:
TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
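The selection step in the Tencent abstract can be sketched as a weighted choice: each feature word carries a probability of appearing under the determined topic, and one word is picked in proportion to those probabilities to drive the interactive content. Sampling (rather than simply taking the most probable word) and the example words are assumptions of this sketch.

```python
import random

def select_feature_word(word_probs, seed=0):
    """word_probs: dict word -> P(word | topic); weighted random choice."""
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    words = list(word_probs)
    weights = [word_probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

probs = {"ticket": 0.5, "exhibit": 0.3, "parking": 0.2}
word = select_feature_word(probs)
print(word in probs)  # True
```

The chosen word would then index into a table of prepared visitor interactive content for output.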
Abstract: A system and method for shape sensing assistance in a medical procedure includes providing (402) a three-dimensional image of a distributed pathway system. A shape sensing enabled elongated device is introduced (406) into the pathway system. A shape of the elongated device in the pathway system is measured (410). The shape is compared (414) with the three-dimensional image to determine whether a given path has been selected relative to a target.
Type:
Grant
Filed:
March 23, 2012
Date of Patent:
July 20, 2021
Assignee:
KONINKLIJKE PHILIPS N.V.
Inventors:
Tobias Klinder, Robert Manzke, Raymond Chan
Abstract: A functional assembly for an industrial robot includes a safety cover which selectively covers an operative unit or robot end effector. The cover includes a fixed portion and a movable portion which selectively moves relative to the fixed portion. The movable portion selectively moves to cover or uncover a portion of the operative unit for the operative unit to function for its intended purpose. The operative unit can move in two directions orthogonal to the movement of the cover movable portion allowing the operative unit to travel or reach to the full footprint of the cover. The safety cover may include sensors to detect objects in close proximity to the safety cover to slow or stop the robot or functional assembly.
Type:
Grant
Filed:
June 13, 2018
Date of Patent:
July 20, 2021
Assignee:
Comau S.p.A.
Inventors:
Giorgio Becciani, Giovanni Di Stefano, Stefano Arduino
Abstract: A robot system includes a robot, a vision sensor, a controller, and an input unit. The vision sensor is configured to measure a feature point and obtain a measured coordinate value. The controller is configured to control the robot. The input unit is configured to receive an input from a user toward the controller. The controller obtains, via the input unit, setting information data on a determination point which is different from the feature point. The robot system uses a coordinate value of the determination point and the measured coordinate value, and determines whether the robot is taking a target position and orientation.
Type:
Grant
Filed:
January 30, 2018
Date of Patent:
July 13, 2021
Assignee:
CANON KABUSHIKI KAISHA
Inventors:
Hideaki Suzuki, Keita Dan, Naoki Tsukabe
Abstract: According to one embodiment, an equipment monitoring system includes an imager and a processor. For an equipment repeating a first operation, the imager repeatedly acquires a first image of the equipment imaged at a first timing of the first operation. When a new first image is acquired, the processor determines an abnormality of the equipment included in the new first image based on multiple previous first images.
Abstract: A robot configured to navigate a surface, the robot comprising a movement mechanism; a logical map representing data about the surface and associating locations with one or more properties observed during navigation; an initialization module configured to establish an initial pose comprising an initial location and an initial orientation; a region covering module configured to cause the robot to move so as to cover a region; an edge-following module configured to cause the robot to follow unfollowed edges; and a control module configured to invoke region covering on a first region defined at least in part based on at least part of the initial pose, to invoke region covering on at least one additional region, to invoke edge-following, to cause the mapping module to mark followed edges as followed, and to invoke region covering on regions discovered during edge-following.
Type:
Grant
Filed:
January 18, 2019
Date of Patent:
July 6, 2021
Assignee:
iRobot Corporation
Inventors:
Michael S. Stout, Gabriel Francis Brisson, Enrico Di Bernardo, Paolo Pirjanian, Dhiraj Goel, James Philip Case, Michael Dooley
Abstract: An autonomous mobile apparatus includes a controller, an image acquirer, an inertia measurer, a distance measurer, and a storage. The controller updates environment map information stored in the storage. The controller estimates a height from a reference surface, based on an image obtained by the image acquirer. The inertia measurer detects an amount of fluctuation of the height. The distance measurer detects whether or not the bottom portion of the autonomous mobile apparatus is in contact with an object. If the controller detects, from information obtained from the image acquirer, the inertia measurer, or the distance measurer, a change in height equal to or larger than a reference, the controller stops updating the environment map information or deletes the environment map information.
Abstract: Convenience and usefulness of a tele-existence system are enhanced by exploiting the possibilities of collaboration between tele-existence and a head-mounted display apparatus. A movable member is supported for pivotal motion on a housing. In the housing, a driving motor and a transmission member for transmitting rotation of the driving motor to the movable member are provided. A state information acquisition unit acquires facial expression information and/or emotion information of a user who wears a head-mounted display apparatus. A driving controlling unit controls rotation of the driving motor on the basis of the facial expression information and/or the emotion information.
Abstract: A controlling method for an artificial intelligence moving robot according to an aspect of the present disclosure includes: moving based on a map including a plurality of regions; acquiring images from the plurality of regions through an image acquisition unit during the moving; extracting region feature information based on the acquired image; and storing the extracted region feature information in connection with position information when an image is acquired.
Abstract: A robot operation evaluation device includes: an operational state calculator for calculating an operational state of an evaluation region that is a movable region of a robot, based on an operational state of the robot; a shape-feature quantity calculator for calculating a shape-feature quantity depending on an operation direction of the evaluation region corresponding to the operational state calculated; and an evaluation value calculator for calculating an evaluation value representing a risk degree of the operational state of the evaluation region with respect to the operation direction, based on the shape-feature quantity.
Type:
Grant
Filed:
April 5, 2017
Date of Patent:
July 6, 2021
Assignee:
MITSUBISHI ELECTRIC CORPORATION
Inventors:
Ryosuke Kawanishi, Yukiyasu Domae, Toshiyuki Hatta
Abstract: A robot control device includes: a measuring unit to measure a robot control state indicative of a position and a posture of the robot; a work area setting unit to store, for each of work processes, a work area that is defined by work movement of the worker between a start and an end of each of the work processes and includes a space a body of the worker occupies and to set the work area corresponding to the work process currently carried out by the worker based on a signal specifying the work process currently carried out by the worker; and a robot command generator to generate a motion command for the robot based on the work area and the robot control state. The generator varies the command for the robot based on whether the robot is present in the work area.
Abstract: Examples are provided of an entertainment robot featuring an infrared apparatus for use in determining a position of an object. In these examples, a power supply of a plurality of infrared transmitters is varied, wherein a control value used to set the power supply is transmitted by the plurality of infrared transmitters. The examples may be used in a first entertainment robot, for example, to determine a distance and orientation of another entertainment robot using received infrared signals. The entertainment robot may be controlled by a computing device such as a smartphone or tablet. The entertainment robot may comprise a gaming robot to be used in solo or group gaming activities.
Abstract: A method and system for automatically detecting a location is provided. The method includes receiving by a vehicle, data describing a specified geographical area of a collapsed structure. A received control signal enables control of the vehicle such that the vehicle initiates motion and navigates in a specified direction towards the specified geographical area and upon arriving at the specified geographical area, a size and a magnitude of the collapsed structure is determined via sensors of the vehicle. A center location of the collapsed structure is determined and the vehicle hovers above the center location. Geographical coordinates of the location above the center location are transmitted to a search and rescue system and the center location is scanned via ground penetrating radar. Open spaces within the collapsed structure are determined and scanned to locate living entities within the open spaces.
Type:
Grant
Filed:
October 31, 2019
Date of Patent:
June 29, 2021
Assignee:
International Business Machines Corporation
Inventors:
Kelley L. Anders, Jeremy R. Fox, Grant D. Miller
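The center-location step in the IBM abstract above can be sketched as a centroid computation: once the vehicle's sensors have outlined the collapsed structure's extent, a simple estimate of the point over which to hover is the average of the boundary coordinates. Treating the outline as a small set of flat (x, y) points is an assumption of this sketch.

```python
def center_location(points):
    """points: list of (x, y) boundary coordinates; return their centroid."""
    n = len(points)
    return (sum(p[0] for p in points) / n,
            sum(p[1] for p in points) / n)

# Rectangular outline of a collapsed structure (coordinates are made up).
outline = [(0.0, 0.0), (4.0, 0.0), (4.0, 2.0), (0.0, 2.0)]
print(center_location(outline))  # (2.0, 1.0)
```

The vehicle would hover at this point, transmit its coordinates to the search-and-rescue system, and begin the ground-penetrating-radar scan from there.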
Abstract: A self-propelled robot autonomously travels on a structure having a target plane and performs work on the plane of the structure. The robot includes a robot main body provided with a moving means for autonomous traveling, a control unit that controls movement of the robot main body, and a working unit that performs work on the target plane. The control unit includes an edge detection unit that detects an edge of the target plane, and the edge detection unit includes an outer detection unit located outward from the working unit in the traveling direction of the robot main body and an inner detection unit located closer to the robot main body than the outer detection unit in the traveling direction of the robot main body.
Abstract: Described herein are methods and systems to establish a pre-build relationship in a model that specifies a first parameter for a first feature of a structure and a second parameter for a second feature of the structure. In particular, a computing system may receive data specifying a pre-build relationship that defines a build value of the first parameter in terms of a post-build observed value of the second parameter. During production of the structure, the computing system may determine the post-build observed value of the second parameter and, based on the determined post-build observed value, may determine the build value of the first parameter in accordance with the pre-build relationship. After determining the build value, the computing system may then transmit, to a robotic system, an instruction associated with production of the first feature by the robotic system, with that instruction specifying the determined build value of the first parameter.
Type:
Grant
Filed:
April 24, 2019
Date of Patent:
June 22, 2021
Assignee:
X Development LLC
Inventors:
Eli Reekmans, Marek Michalowski, Michael Beardsworth
Abstract: A robot system includes an automatic transport device, a robot arm installed on the automatic transport device, an object recognition sensor disposed on the robot arm, an environment recognition sensor, a placement portion disposed on the automatic transport device, and a controller. The controller controls the automatic transport device to move toward a work stand based on a recognition result of the environment recognition sensor. After controlling the robot arm to take components out of a component storage unit based on a recognition result of the object recognition sensor and create a plurality of component kits on the placement portion, the controller controls the robot arm to transfer the plurality of component kits from the placement portion to the work stand based on the recognition result of the object recognition sensor.
Abstract: Methods, systems, and devices for managing robot resources are described. A robot receives from an application a request to reserve a particular set of physical resources of the robot. The robot then determines that each of the physical resources in the set are available to the application and, based on the determination, allocates exclusive use of the particular set of resources to the application by (i) generating a token corresponding to the set of resources, (ii) providing the token to the application, and (iii) updating token data that associates the token with the set of resources. The robot then controls access to the particular set of resources such that, while token data indicates that the token is valid, commands from applications that involve the set of resources are only executed when provided with the token corresponding to the allocation of access to the particular set of resources.
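The token-based allocation described in the abstract above can be illustrated with a small sketch. Everything here, including the class name, the method names, and the use of UUIDs as tokens, is an assumption for illustration; the patent does not disclose an implementation.

```python
import uuid

class RobotResourceManager:
    """Illustrative sketch of token-based exclusive resource allocation;
    all names and structures here are assumptions, not the patented API."""

    def __init__(self, resources):
        self.available = set(resources)   # physical resources not yet reserved
        self.token_data = {}              # token -> set of allocated resources

    def reserve(self, requested):
        requested = set(requested)
        # Allocate only if every requested resource is currently free.
        if not requested <= self.available:
            return None
        token = str(uuid.uuid4())         # token handed back to the application
        self.available -= requested
        self.token_data[token] = requested
        return token

    def may_execute(self, token, command_resources):
        # A command touching reserved resources runs only when presented
        # with a valid token that covers all of those resources.
        allocated = self.token_data.get(token)
        return allocated is not None and set(command_resources) <= allocated

    def release(self, token):
        # Invalidate the token and return its resources to the free pool.
        self.available |= self.token_data.pop(token, set())
```

The all-or-nothing check in `reserve` mirrors the abstract's requirement that each resource in the set be available before exclusive use is granted.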
Abstract: A surgical system secures a large movable range of a tip end of a surgical instrument even when the surgical instrument is inserted into a narrow region. One example of the surgical system includes: a manipulator; a surgical instrument including a shaft coupled to a tip end portion of the manipulator; a manipulation input portion to which an operator inputs a command regarding a position and posture of the surgical instrument; a control apparatus configured to control an operation of the manipulator based on the command input to the manipulation input portion; and a motion center position setting portion configured to set a desired position as a motion center position of the surgical instrument in the control apparatus, the desired position being located in an inner part under a body surface of the patient.
Abstract: A robot teaching system includes a hand guide unit including a stick for use in a teaching operation of a robot, and a wireless communication unit configured to communicate by radio with a teach pendant; a relative position setting unit configured to set relative position information between the hand guide unit and the robot; and a coordinate calculation unit configured to calculate, based on the relative position information, coordinates having as an origin a flange surface of the robot or a distal end point of a tool attached to the robot, in such a manner as to correspond to an operation direction of the stick.
Abstract: A system determines whether a call participant of a call between the call participant and a voice response system is a human or a machine. Responsive to determining that the call participant is a human, an emotional state of the call participant is determined. Environmental information of an environment associated with the call participant is received. A receptiveness level of the call participant is determined based upon the emotional state and the environmental information. A message to the call participant is determined based upon the receptiveness level and one or more machine-learning models.
Type:
Grant
Filed:
November 15, 2018
Date of Patent:
June 15, 2021
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors:
Aaron K. Baughman, Mauro Marzorati, Gary Francis Diamanti, Sarbajit K. Rakshit
Abstract: Systems and methods for robot assisted personnel routing including a plurality of autonomous robots operating within a navigational space, each robot including a processor and a memory storing instructions that, when executed by the processor, cause the autonomous robot to detect completion of a task operation by a human operator, receive status information corresponding to at least one other robot, the status information including at least one of a location or a wait time associated with the other robot, determine, from the status information, at least one next task recommendation for directing the human operator to a next robot for a next task operation, and render, on a display of the robot, the at least one next task recommendation for viewing by the human operator, the next task recommendation including a location of the next robot corresponding to the next task.
Type:
Grant
Filed:
February 1, 2019
Date of Patent:
June 15, 2021
Assignee:
Locus Robotics Corp.
Inventors:
Michael Charles Johnson, Luis Jaquez, Sean Johnson
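The next-task recommendation in the preceding abstract amounts to ranking candidate robots by their reported status. A minimal sketch under stated assumptions: the weighted cost function, the `location`/`wait_time` field names, and the Euclidean distance metric are all hypothetical choices, not details disclosed by the patent.

```python
def recommend_next_task(operator_pos, robot_status, w_dist=1.0, w_wait=1.0):
    # Score each candidate robot by a weighted sum of travel distance and
    # reported wait time, and recommend the cheapest one; weights and
    # status fields are illustrative assumptions.
    def cost(robot):
        dx = robot["location"][0] - operator_pos[0]
        dy = robot["location"][1] - operator_pos[1]
        return w_dist * (dx * dx + dy * dy) ** 0.5 + w_wait * robot["wait_time"]
    return min(robot_status, key=cost)
```

Tuning `w_dist` against `w_wait` trades operator walking time against robot idle time, which is the balance the routing system appears to target.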
Abstract: A vehicle configured to operate in an autonomous mode may operate a sensor to determine an environment of the vehicle. The sensor may be configured to obtain sensor data of a sensed portion of the environment. The sensed portion may be defined by at least one sensor parameter. Based on the environment of the vehicle, the vehicle may select at least one parameter value for the at least one sensor parameter such that the sensed portion of the environment corresponds to a region of interest. The vehicle may operate the sensor, using the selected at least one parameter value for the at least one sensor parameter, to obtain sensor data of the region of interest, and control the vehicle in the autonomous mode based on the sensor data of the region of interest.
Type:
Grant
Filed:
June 10, 2019
Date of Patent:
June 15, 2021
Assignee:
Waymo LLC
Inventors:
Jiajun Zhu, Christopher Urmson, David I. Ferguson, Nathaniel Fairfield, Dmitri Dolgov
Abstract: A method for detecting an obstacle, applicable in an electronic device, includes detecting whether at least one object is within a line of sight of an image capturing device. The image capturing device is controlled to capture a first image of the object and is caused to move until a capturing angle for capturing another image of the object is changed. The image capturing device is controlled to capture a second image of the object, and a determination is made as to whether the object in the first image is the same as the object in the second image. A recognized object is determined to be a non-planar obstacle when the object in the first image is not the same as the object in the second image.
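The two-view comparison above can be sketched as a similarity test: a flat (planar) object should look nearly identical from the changed capture angle, while a three-dimensional obstacle should not. The element-wise descriptor comparison below is a hypothetical stand-in for whatever matching the patented method actually uses.

```python
def is_non_planar(first_descriptor, second_descriptor, threshold=0.9):
    # Compare the object's appearance from the two capture angles; a low
    # match ratio marks the object as a non-planar obstacle.  The
    # element-wise comparison here is a toy stand-in for a real
    # feature-matching score (an assumption, not the patent's method).
    if not first_descriptor or len(first_descriptor) != len(second_descriptor):
        raise ValueError("descriptors must be non-empty and equal length")
    matches = sum(a == b for a, b in zip(first_descriptor, second_descriptor))
    return matches / len(first_descriptor) < threshold
```

A production system would use viewpoint-robust features so that lighting or slight misalignment alone does not trip the non-planar classification.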
Abstract: A cleaner performing autonomous traveling includes a main body having a suction opening, a cleaning unit provided within the main body and sucking a cleaning target through the suction opening, a driving unit moving the main body, a camera sensor attached to the main body and capturing a first image, an operation sensor sensing information related to movement of the main body, and a controller detecting information related to an obstacle on the basis of at least one of the captured first image and the information related to movement, and controlling the driving unit on the basis of the detected information related to the obstacle.
Type:
Grant
Filed:
July 29, 2016
Date of Patent:
June 8, 2021
Assignees:
LG ELECTRONICS INC., SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
Inventors:
Yongmin Shin, Donghoon Yi, Dongil Cho, Taejae Lee