Abstract: Methods and apparatus for operating a robotic vehicle. A sensor is positioned on the robotic vehicle to identify a first marker positioned in a first area within a space defined by a structure. Data is received from the sensor descriptive of one or more characteristics of the marker. A first instruction is identified corresponding to the one or more characteristics. The robotic vehicle is caused to change a physical orientation characteristic of the first marker and move to a second area within the space.
Type:
Grant
Filed:
June 11, 2018
Date of Patent:
November 10, 2020
Assignee:
United Services Automobile Association (USAA)
Abstract: The present disclosure relates to robot technology, and provides a method for robot fall prediction and a robot. The method includes: looking up a weighted value of the center of gravity of the robot corresponding to a posture of the robot, according to a preset first correspondence relationship; correcting an offset of the center of gravity of the robot based on the weighted value; correcting an acceleration of the robot based on an offset direction of the center of gravity of the robot; and determining whether the robot will fall based on the corrected offset of the center of gravity, the offset direction of the center of gravity, and the corrected acceleration of the robot. Through the fused calculation of these data, the present disclosure improves the real-time performance and accuracy of robot fall prediction.
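Read as an algorithm, the four steps in this abstract can be sketched roughly as follows. The posture table, correction formulas, and thresholds here are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical sketch of the described fall-prediction flow; the posture
# weight table, correction formulas, and limits are illustrative only.

POSTURE_WEIGHTS = {"standing": 1.0, "walking": 1.2, "crouching": 0.8}  # preset first correspondence

def predict_fall(posture, cog_offset, cog_direction, acceleration,
                 offset_limit=0.15, accel_limit=2.0):
    """Fuse a corrected center-of-gravity offset and a corrected
    acceleration into a fall/no-fall decision, mirroring the abstract."""
    w = POSTURE_WEIGHTS[posture]                       # look up weighted value by posture
    corrected_offset = cog_offset * w                  # correct the CoG offset
    corrected_accel = acceleration * (1.0 + 0.1 * abs(cog_direction))  # correct acceleration by offset direction
    return corrected_offset > offset_limit or corrected_accel > accel_limit

print(predict_fall("standing", 0.05, 0.0, 1.0))  # → False
print(predict_fall("walking", 0.2, 1.0, 1.0))    # → True
```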
Abstract: One method disclosed includes identifying, in a map of markers fixed in an environment, two co-located markers within a threshold distance of each other, where each of the two co-located markers has a non-overlapping visibility region. The method further includes determining a set of detected markers based on sensor data from a robotic device. The method additionally includes identifying, from the set of detected markers, a detected marker proximate to a first marker of the two co-located markers. The method also includes enforcing a visibility constraint based on the non-overlapping visibility region of each of the two co-located markers to determine an association between the detected marker and a second marker of the two co-located markers. The method further includes determining a location of the robotic device in the environment relative to the map based on the determined association.
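The visibility-constraint idea in this abstract — distinguishing two nearly co-located markers by where each one can be seen from — can be illustrated with a toy example. The half-plane visibility test and all names below are assumptions for illustration, not the patent's method.

```python
# Illustrative sketch: two co-located markers are disambiguated by which
# side of the environment each one faces (its visibility region).

def associate(robot_pos, markers):
    """Pick the co-located marker whose visibility region (modeled here
    as a facing half-plane) actually contains the robot."""
    for marker in markers:
        mx, my = marker["pos"]
        nx, ny = marker["normal"]           # direction the marker faces
        # the marker is visible only if the robot is on its facing side
        if (robot_pos[0] - mx) * nx + (robot_pos[1] - my) * ny > 0:
            return marker["id"]
    return None

# two markers mounted back-to-back on the same wall post
markers = [{"id": "A", "pos": (0, 0), "normal": (1, 0)},
           {"id": "B", "pos": (0.05, 0), "normal": (-1, 0)}]
print(associate((2, 0), markers))   # robot east of the post → A
print(associate((-2, 0), markers))  # robot west of the post → B
```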
Abstract: A robot includes a gripping section adapted to grip an object by opening and closing a pair of finger sections, a moving device adapted to move the object and the gripping section relative to each other, and a control device adapted to control the moving device to move the gripping section toward the object and dispose the pair of finger sections around the periphery of the object, and then control the gripping section to open and close the pair of finger sections in a plane parallel to a mounting surface on which the object is mounted, pinch the object between the pair of finger sections from a lateral side of the object, and grip the object with the gripping section at at least three contact points.
Type:
Grant
Filed:
April 3, 2018
Date of Patent:
October 27, 2020
Assignee:
Seiko Epson Corporation
Inventors:
Yukihiro Yamaguchi, Kazuhiro Kosuge, Yasuhisa Hirata, Aya Kaisumi
Abstract: A device that can prevent a decrease in the efficiency of a manufacturing line. The device includes a shape acquisition section for acquiring a shape of a workpiece; a motion pattern acquisition section for acquiring basic motion patterns including a reference workpiece shape, a reference working position in the reference workpiece shape, and a type of operation carried out at the reference working position; a similarity determination section for determining whether the shape of the workpiece is similar to the reference workpiece shape; a position determination section for determining, based on the shape of the workpiece and the reference workpiece shape, the working position on the workpiece that corresponds to the reference working position; and a motion-path generation section for generating a motion path by changing the reference working position to the determined working position.
Abstract: A telerobotic surgery system includes a robotic surgery station having a first pair of robot arms, each carrying a laparoscopic tool and each having a robot arm drive. A first controller is connected to each robot arm drive. Harvested animal tissue is at the robotic surgery station. A remote surgeon station includes a second pair of robot arms, each carrying a laparoscopic tool and each having a robot arm drive. A second controller receives data regarding movement of the second pair of robot arms and respective laparoscopic tool based on user manipulation of each laparoscopic tool at the remote surgeon station. A communications network couples the first and second controllers with the second controller operative as a master and the first controller configured to control each robot arm drive and effect one-to-one movement of the first pair of robot arms and carried laparoscopic tools as a slave.
Abstract: A system for detecting and displaying an external force applied to a robot. Magnitude and direction of the detected external force are displayed by an image for visual and intuitive understanding. A robot system includes a robot; a detection section for detecting an external force applied to the robot; a conversion section for converting magnitude and direction of the external force detected by the detection section into a coordinate value of a three-dimensional rectangular coordinate system; an image generating section for generating a force model image representing the magnitude and direction of the external force by a graphic, with use of the coordinate value obtained by the conversion section; and a display section for three-dimensionally displaying the force model image generated by the image generating section.
Abstract: A robot system that includes a robot face with a monitor, a camera, a speaker and a microphone. The robot face is connected to a stand that can be placed in a chair. The stand is configured so that the robot face is at a height that approximates the location of a person's head if they were sitting in the chair. The robot face is coupled to a remote station that can be operated by a user. The face includes a monitor that displays a video image of a user of the remote station. The stand may be coupled to the robot face with articulated joints that can be controlled by the remote station. By way of example, the user at the remote station can cause the face to pan and/or tilt.
Type:
Grant
Filed:
October 19, 2010
Date of Patent:
October 20, 2020
Assignee:
INTOUCH TECHNOLOGIES, INC.
Inventors:
Yulun Wang, Timothy C. Wright, Daniel S. Sanchez, Marco C. Pinter
Abstract: A machine learning system builds and uses computer models for controlling robotic performance of a task. Such computer models may be first trained using feedback on computer simulations of the robot performing the task, and then refined using feedback on real-world trials of the robot performing the task. Some examples of the computer models can be trained to automatically evaluate robotic task performance and provide the feedback. This feedback can be used by a machine learning system, for example an evolution strategies system or reinforcement learning system, to generate and refine the controller.
Type:
Grant
Filed:
December 14, 2017
Date of Patent:
October 13, 2020
Assignee:
Amazon Technologies, Inc.
Inventors:
Brian C. Beckman, Leonardo Ruggiero Bachega, Brandon William Porter, Benjamin Lev Snyder, Michael Vogelsong, Corrinne Yu
Abstract: A method and system of performing interactive object segmentation from streaming surfaces is disclosed. An environment data stream, including correlated image and depth data, is received from a set of sensors collocated with a robot. A virtualized representation of a physical environment is displayed and updated in accordance with the environment data stream in real-time. A marking input is received from a haptic-enabled input device. A position in the virtualized representation of the physical environment is determined in accordance with the marking input and is constrained by a first virtualized surface in the virtualized representation of the physical environment. Object segmentation is performed from the position of the marking input on the correlated image and depth data.
Abstract: A turret system includes a base subassembly and a turret subassembly. The base subassembly includes a base housing and a first turret mounting interface coupled to the base housing. The base subassembly also includes a first antenna configured to wirelessly transmit an electrical power signal. The turret subassembly includes a turret housing and a second turret mounting interface coupled to the turret housing. The second turret mounting interface is configured to rotate with respect to the first turret mounting interface, thereby rotating the turret housing with respect to the base housing. The turret subassembly further comprises a second antenna configured to wirelessly receive the electrical power signal from the first antenna.
Type:
Grant
Filed:
April 4, 2017
Date of Patent:
October 13, 2020
Assignee:
Lockheed Martin Corporation
Inventors:
Thomas E. Byrd, Joshua E. Baer, David L. Hunn
Abstract: A system and method to control execution of a centralized and decentralized controlled plan are described. A plan execution engine executing at a cloud node receives sensor data captured by one or more sensors at a plurality of autonomous robots, together with a plan execution status of the centralized controlled plan. The plan execution engine executing at the cloud node determines whether the plurality of autonomous robots satisfy a transition condition. Next, a determination is made of one or more activated constraints and of task allocation for one or more autonomous robots in the next state. The plan execution engine executing at the cloud node and the autonomous robots then collaboratively determine a constraint solution for the one or more activated plan constraints. Finally, based on the determined constraint solution, the one or more plan execution engines send instructions to an actuator for executing the task included in the centralized controlled plan.
Abstract: Disclosed is an apparatus and method to train an autonomous driving model. The apparatus includes a driver information collection processor configured to collect driver information while a vehicle is being driven. The apparatus also includes a sensor information collection processor configured to collect sensor information from a sensor installed in the vehicle while the vehicle is being driven, and a model training processor configured to train the autonomous driving model based on the driver information and the sensor information.
Type:
Grant
Filed:
July 13, 2018
Date of Patent:
October 6, 2020
Assignee:
SAMSUNG ELECTRONICS CO., LTD.
Inventors:
Dong Hwa Lee, Chang Hyun Kim, Eun Soo Shim, Ki Hwan Choi, Hyun Jin Choi
Abstract: Provided is a robot, including: a first actuator; a first sensor; one or more processors communicatively coupled to the first actuator and to the first sensor; and memory storing instructions that when executed by at least some of the one or more processors effectuate operations comprising: determining a first location of the robot in a working environment; obtaining, with the first sensor, first data indicative of an environmental characteristic of the first location; and adjusting a first operational parameter of the first actuator based on the sensed first data, wherein the adjusting is configured to cause the first operational parameter to be in a first adjusted state while the robot is at the first location.
Type:
Grant
Filed:
January 3, 2019
Date of Patent:
October 6, 2020
Assignee:
AI Incorporated
Inventors:
Ali Ebrahimi Afrouzi, Masoud Nasiri, Scott McDonald
Abstract: An industrial process is monitored and controlled by displaying at least one process page in a process page window, providing an operator configurable region, and providing at least one item display element representing at least one process component, sub-process or operation on the process page and being movable on top of the operator configurable region. A movement of the item display element from the process page on to the operator configurable region is determined, and the operator configurable region is caused to display a corresponding docked display element in the operator configurable region. The docked display element is configured to enable control of the process component, sub-process or operation the docked display element represents from the operator configurable region.
Type:
Grant
Filed:
October 17, 2018
Date of Patent:
September 29, 2020
Assignee:
VALMET AUTOMATION OY
Inventors:
Hannu Paunonen, Jouni Ruotsalainen, Lauri Lehtikunnas
Abstract: A simulation device capable of executing a proper simulation without changing a program and without requiring the definition of virtual peripheral equipment and/or a PLC. A signal status setting file, which is separate from the robot program, can be executed in parallel with the program, and includes commands for setting or changing a signal status, each described so as to correspond to a line in execution of the program, wherein the status is referenced by executing that line of the program. For example, a command in the file, described corresponding to a fifth line of the robot program, commands input of a signal indicating that the opening motion of a door is completed. Therefore, when the simulation is executed, in synchronization with the line in execution of the program, the setting or changing of the signal status described corresponding to that line is performed.
Abstract: A method for implementing machining tasks for an object. The method identifies location coordinates for a plurality of holes. A task file contains the machining tasks, and the robotic devices use the task files to perform them. A minimum number of positioning stations is determined at which a portion of the machining tasks will be performed by the robotic devices. An ordered sequence for performing the machining tasks is calculated, and a path with a near-minimum distance is determined. Robotic control files are created that cause the robotic devices to perform the machining tasks. The robotic control files are output to the robotic devices, which perform the machining tasks to form the plurality of holes.
Type:
Grant
Filed:
December 14, 2016
Date of Patent:
September 22, 2020
Assignee:
The Boeing Company
Inventors:
Michelle Crivella, Philip L. Freeman, Joshua D. Kalin, Robert Stephen Strong, Patrick Joel Michaels
Abstract: Systems, devices and methods are provided in which an instrument can translate along an insertion axis. Rather than relying primarily on a robotic arm for instrument insertion, the instruments described herein have novel instrument based insertion architectures that allow portions of the instruments themselves to translate along an insertion axis. For example, an instrument can comprise a shaft, an end effector on a distal end of the shaft, and a handle coupled to the shaft. The architecture of the instrument allows the shaft to translate relative to the handle along an axis of insertion. The translation of the shaft does not interfere with other functions of the instrument, such as end effector actuation.
Type:
Grant
Filed:
September 30, 2019
Date of Patent:
September 22, 2020
Assignee:
Auris Health, Inc.
Inventors:
Aren Calder Hill, Travis Michael Schuh, Nicholas J. Eyre
Abstract: A method of operating a robot includes driving a robot to approach a reach point, extending a manipulator arm forward of the reach point, and maintaining a drive wheel and a center of mass of the robot rearward of the reach point by moving a counter-balance body relative to an inverted pendulum body while extending the manipulator arm forward of the reach point. The robot includes the inverted pendulum body, the counter-balance body disposed on the inverted pendulum body, the manipulator arm connected to the inverted pendulum body, at least one leg having a first end prismatically coupled to the inverted pendulum body, and the drive wheel rotatably coupled to a second end of the at least one leg.
Type:
Grant
Filed:
February 22, 2018
Date of Patent:
September 22, 2020
Assignee:
Boston Dynamics, Inc.
Inventors:
Kevin Blankespoor, John Aaron Saunders, Steven D. Potter, Vadim Chernyak, Shervin Talebinejad
Abstract: Devices, systems, and methods include a teleoperated system including a kinematic structure having a joint, a drive or brake system for controlling the joint, and a computing unit coupled with the drive or brake system. The computing unit is configured to detect that the joint is between a software-defined range of motion limit for the joint and a physical range of motion limit for the joint, the software-defined range of motion limit being spaced a distance apart from the physical range of motion limit, and to delay for a duration of time, in response to detecting the joint between the software-defined range of motion limit and the physical range of motion limit, before applying the drive or brake system to stop motion of the joint.
Type:
Grant
Filed:
June 22, 2018
Date of Patent:
September 22, 2020
Assignee:
INTUITIVE SURGICAL OPERATIONS, INC.
Inventors:
Paul G. Griffiths, Paul W. Mohr, Brandon D. Itkowitz, Thomas R. Nixon, Roman Devengenzo
Abstract: A medical observation device includes an imaging unit configured to photograph an image of an operation site, and a holding unit configured to be connected with the imaging unit and have rotary shafts which are operable with at least six degrees of freedom. Among the rotary shafts, at least two shafts are active shafts whose driving is controlled based on states of the rotary shafts, and at least one shaft is a passive shaft which is rotated according to direct external manipulation accompanying contact.
Type:
Grant
Filed:
July 23, 2015
Date of Patent:
September 22, 2020
Assignees:
SONY OLYMPUS MEDICAL SOLUTIONS INC., SONY CORPORATION
Abstract: A charging device includes a case, a first connector, a first communication device, a processing device, and a driving device. The first communication device is configured to communicate with a second communication device of an electrical device and send a control signal to the processing device after communicating with the second communication device. The processing device is configured to output a driving signal to the driving device after receiving the control signal. The driving device is configured to provide a pushing force to push the first connector to extend from the case to couple to a second connector of the electrical device after receiving the driving signal. The first connector is further configured to charge the electrical device after coupling to the second connector. A charging system is also provided.
Abstract: A machine learning device for learning operations of a robot and a laser scanner includes a state observation unit observing, as state data, a state of a tip end of the robot where the laser scanner is mounted and a state of an optical component in the laser scanner; a determination data obtaining unit receiving, as determination data, at least one of a machining time of the robot where the laser scanner is mounted, a drive current driving the robot, a command path of the laser scanner, a passing time in a processable area where the laser scanner performs processing, and a distance between the robot and a part where the laser scanner performs processing; and a learning unit learning operations of the robot and the laser scanner based on an output of the state observation unit and an output of the determination data obtaining unit.
Abstract: A robot setting apparatus includes a grip reference point setting unit, a grip direction setting unit that defines a grip direction in which the end effector model grips the workpiece model, a workpiece side grip location designation unit that designates a grip position at which the end effector model grips the workpiece model in a state in which at least the workpiece model is displayed in an image display region, and a relative position setting unit that sets a relative position between the end effector model and the workpiece model such that the grip direction defined in the grip direction setting unit is orthogonal to a workpiece plane representing an attitude of the workpiece model displayed in the image display region, and the grip reference point is located at the grip position along the grip direction.
Abstract: A system includes an animated character head having one or more processors configured to receive an input, to make an animation selection based on the input, and to provide a first control based on the animation selection. The animated character head also includes a display configured to provide an indication of the animation selection for visualization by a performer operating the animated character head.
Type:
Grant
Filed:
April 13, 2017
Date of Patent:
September 15, 2020
Assignee:
Universal City Studios LLC
Inventors:
Anisha Vyas, Caitlin Amanda Correll, Sean David McCracken, William V. McGehee
Abstract: This disclosure provides methods and systems for locating wireless mobile devices in an area, for example using an unmanned aerial vehicle. An unmanned aerial vehicle may receive a wireless signal with identification information from a mobile device. The unmanned aerial vehicle may fly within wireless communication range of the mobile device, measuring representative parameters related to the received wireless signal, associating the values of the measured parameters with the mobile device identification and with the location of the unmanned aerial vehicle at the time the wireless signal was received, and estimating the location of the mobile device in the area based on the values of the measured parameters and the location of the unmanned aerial vehicle at the time it received the wireless signal.
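One common way to turn such measurements into a position estimate — not specified by this abstract, and shown only as an illustration — is a signal-strength-weighted centroid of the UAV positions at which the signal was received.

```python
# Minimal sketch, assuming RSSI-weighted centroid localization; the
# weighting scheme is an assumption, the abstract names no estimator.

def estimate_location(samples):
    """samples: list of (uav_x, uav_y, rssi_dbm). Stronger (less
    negative) RSSI pulls the estimate toward that UAV position."""
    weights = [10 ** (rssi / 20.0) for _, _, rssi in samples]  # dBm -> linear weight
    total = sum(weights)
    x = sum(w * sx for w, (sx, _, _) in zip(weights, samples)) / total
    y = sum(w * sy for w, (_, sy, _) in zip(weights, samples)) / total
    return x, y

samples = [(0, 0, -40), (10, 0, -60), (0, 10, -60)]
x, y = estimate_location(samples)
print(round(x, 2), round(y, 2))  # estimate pulled toward the strongest reading at (0, 0)
```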
Abstract: A mechanism is hereby disclosed that, when actuated in the linear direction of its axis, expands and contracts radially. The novel nature of the device lies in the compliant methods and materials used in its design. Compliant members, referred to as dyads, translate the motion and provide resistance in a single structure, eliminating the need for separate members, hinges, pins, springs, and the associated assembly. When these compliant dyads are combined in the configurations hereby disclosed, a device is created that expands (or contracts) in multiple directions from its primary axis of actuation. Furthermore, one or more actuation dyad sets may be arranged at various angles relative to the global vertical axis, and the radial expansion/contraction can be 2D or 3D by adding more primary actuation dyad sets. Such a device can be applied to many applications and industries; one such application is gripping the inside of a tube or object for moving it manually or in automation.
Abstract: A method for avoiding collisions between two robots providing first movement information related to a first robot movement; determining for a plurality of second robot movements whether they involve a risk for collision between the first and second robots; and executing one of the second robot movements. Information about a movement of one robot enables a robot controller of another robot with an overlapping work area to select among available robot movements an appropriate one that does not involve a risk for collision between the two robots.
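The selection step this abstract describes — checking each candidate movement of the second robot against the first robot's movement and executing one that is collision-free — can be sketched with a toy geometric check. Axis-aligned bounding boxes stand in for real swept-volume tests; all names are illustrative assumptions.

```python
# Hedged sketch: pick a second-robot movement whose swept region does
# not intersect the first robot's planned sweep (boxes as stand-ins).

def boxes_overlap(a, b):
    """a, b: (xmin, ymin, xmax, ymax) swept-region bounding boxes."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def choose_movement(first_sweep, candidate_sweeps):
    """Return the index of the first candidate movement that involves no
    risk of collision with the first robot's movement, or None."""
    for i, sweep in enumerate(candidate_sweeps):
        if not boxes_overlap(first_sweep, sweep):
            return i
    return None

first = (0, 0, 5, 5)
candidates = [(4, 4, 8, 8),   # overlaps the first robot's sweep
              (6, 0, 9, 3)]   # clear of it
print(choose_movement(first, candidates))  # → 1
```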
Abstract: A service providing system is equipped with a robot, a robot time zone management unit configured to specify a time zone in which there is no action plan for the robot itself, an activity specifying unit configured to specify an activity that has yet to be experienced by the robot itself or a subject and that is of high priority, and a location specifying unit configured to specify a location where the activity is performed. In the specified time zone, the robot moves to the specified location, performs the specified activity, and provides information obtained through the activity, to the subject.
Abstract: Provided is a method for establishing and maintaining a user loyalty metric to access a plurality of robotic device functions, including: receiving biometric data associated with a user; authenticating the user; providing a time access memory, wherein the time access memory comprises a plurality of memory cells; assigning a predetermined time slot to each of the plurality of memory cells, wherein each of the plurality of memory cells is available for writing only during its predetermined time slot, after which the memory cell is made read-only; storing the biometric data of the user, if the user is authenticated, within a currently available memory cell of the time access memory; increasing the user loyalty metric if the user is authenticated; and providing access to the plurality of robotic device functions in accordance with the user loyalty metric.
Type:
Grant
Filed:
December 14, 2018
Date of Patent:
September 1, 2020
Assignee:
AI Incorporated
Inventors:
Ali Ebrahimi Afrouzi, Amin Ebrahimi Afrouzi, Masih Ebrahimi Afrouzi, Soroush Mehrnia, Azadeh Afshar Bakooshli
Abstract: An asset inspection system includes a robot and a server. The server receives a request for data from the robot, wherein the requested data comprises an algorithm, locates the requested data in a database stored on the server, encrypts the requested data, and transmits the requested data to the robot. The robot is configured to collect inspection data corresponding to an asset based at least in part on the requested data and transmit the collected inspection data to the server.
Type:
Grant
Filed:
November 6, 2017
Date of Patent:
September 1, 2020
Assignee:
GENERAL ELECTRIC COMPANY
Inventors:
Huan Tan, Li Zhang, Romano Patrick, Viktor Holovashchenko, Charles Burton Theurer, John Michael Lizzi, Jr.
Abstract: A method and system for docking a robot with a robot charger docking station, including receiving an initial pose and a mating pose associated with the robot charger docking station, performing a first navigation from a location to the initial pose, and performing a second navigation of the robot from the initial pose to the mating pose. The second navigation may proceed substantially along an arc path from the initial pose to the mating pose so that, upon arriving at the mating pose, an electrical charging port of the robot mates with an electrical charging assembly. The arc path may be associated with a section of a unique circle having a radius and a center equidistant from the initial pose and the mating pose. Controlling for error may include a proportional control and/or a weighted control, or switching between the controls to maintain an error below a threshold.
Type:
Grant
Filed:
November 22, 2017
Date of Patent:
September 1, 2020
Inventors:
Thomas Moore, Bradley Powers, Hian Kai Kwa
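The arc geometry in the docking abstract above — a circle whose center is equidistant from the initial and mating poses — can be sketched as follows. The pose format and the choice to make the circle tangent to the initial heading are illustrative assumptions needed to make the circle unique; they are not taken from the patent.

```python
import math

# Geometric sketch: the circle through the initial and mating positions
# whose center lies on the normal to the initial heading, so the arc is
# tangent to the robot's approach direction (an assumed constraint).

def arc_center(initial_xy, heading, mating_xy):
    ix, iy = initial_xy
    mx, my = mating_xy
    # center = initial + r * (-sin h, cos h); solve |c - initial| = |c - mating|
    nx, ny = -math.sin(heading), math.cos(heading)
    dx, dy = mx - ix, my - iy
    denom = 2 * (dx * nx + dy * ny)
    r = (dx * dx + dy * dy) / denom          # signed radius
    return (ix + r * nx, iy + r * ny), abs(r)

center, radius = arc_center((0, 0), 0.0, (2, 2))  # heading along +x
print(center, radius)  # → (0.0, 2.0) 2.0
```

The center returned is equidistant from both poses by construction, which matches the "unique circle" language in the abstract.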
Abstract: A sensor detection error at a joint of a robot arm is correctly detected. A joint structure that joins links of a robot arm includes a sensor for determining force acting between the links. A driving apparatus that generates a driving force of the joint includes first and second driving parts. A constraining part that constrains the joint to be movable in a driving direction of the joint and unmovable in another direction includes first and second supporting parts that are movable relative to each other in the driving direction of the joint. The driving part of the driving apparatus is fixed to one link, and the supporting part of the constraining part is fixed to a link. Also, the supporting part of the constraining part is fixed to the driving part of the driving apparatus. The sensor is fixed so as to link the supporting part and the link.
Abstract: In one embodiment, a distributed oscillator computes a first error value based on a first state value for the distributed oscillator and first state values for other distributed oscillators, and computes a second error value based on a second state value for the distributed oscillator and second state values for the other distributed oscillators. The distributed oscillator computes a new first state value for the distributed oscillator based on the first error value, the first state value for the distributed oscillator, and the second state value for the distributed oscillator, and computes a new second state value for the distributed oscillator based on the second error value, the first state value for the distributed oscillator, and the second state value for the distributed oscillator. The distributed oscillator transmits the new first state value and the new second state value to the other distributed oscillators.
Type:
Grant
Filed:
June 28, 2019
Date of Patent:
September 1, 2020
Assignee:
Intel Corporation
Inventors:
Jose I. Parra Vilchis, Anthony K. Guzman Leguel, David Gomez Gutierrez, Leobardo E. Campos Macias, Rafael de la Guardia Gonzalez, Rodrigo Aldana Lopez
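The distributed-oscillator abstract above describes a per-node update in which each of two state values is corrected by an error computed against peers' states and by both of the node's own states. A simple coupled-oscillator reading of that loop is sketched below; the gains, dynamics, and names are illustrative assumptions, not the patent's equations.

```python
# Interpretive sketch of one node's update step, assuming a harmonic
# oscillator nudged toward the group average (consensus coupling).

def step(own, others, k=0.1, omega=0.5, dt=0.1):
    """own: (x, y) = the node's first and second state values;
    others: list of (x, y) received from peer oscillators.
    Returns the new (x, y) to broadcast to the other oscillators."""
    x, y = own
    ex = sum(ox - x for ox, _ in others) / len(others)  # first error value
    ey = sum(oy - y for _, oy in others) / len(others)  # second error value
    # each new state depends on its error and on BOTH of the node's own
    # state values, as in the abstract (rotation couples x and y)
    new_x = x + dt * (-omega * y + k * ex)
    new_y = y + dt * (omega * x + k * ey)
    return new_x, new_y

print(step((1.0, 0.0), [(0.8, 0.1), (1.2, -0.1)]))
```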
Abstract: A device intelligence architecture configures and controls on-site devices and performs environment monitoring to facilitate effective device functionality. The architecture facilitates efficient use of devices (e.g., robotics) in an unstructured and dynamic environment, which will allow deployments in many more environments than is currently possible. The architecture stores and shares information between segregated devices to avoid the silo effect of vendor specific stacks. The architecture also models capabilities of devices and maps actions to intelligence packages to deploy a specific intelligence package to the device. The architecture also implements distributed processing. For instance, computationally intensive tasks may be offloaded to the back end processing, with action updates resulting from the processing pushed to the devices.
Type:
Grant
Filed:
September 3, 2015
Date of Patent:
August 25, 2020
Assignee:
Accenture Global Solutions Limited
Inventors:
Pramila Mullan, Michael Mui, Anuraag Chintalapally, Cindy Au
Abstract: Provided is a moving robot including: actuators at least including a motor for movement; a reading unit configured to read a tag installed in an environment, at least one of information on an allowable operation time of the actuators and information on an allowable operation amount of the actuators being described in the tag; and a controller configured to prohibit or limit execution of a predetermined task whose execution has already been accepted, the predetermined task being operated using at least one of the actuators, until the reading unit reads the tag, and to release the prohibition or limitation and execute the task, after the reading unit has read the tag, in such a way that the operation time and operation amount do not exceed the allowable operation time and allowable operation amount described in the tag.
Abstract: A semiconductor wafer transport apparatus includes a frame, a transport arm movably mounted to the frame and having at least one end effector movably mounted to the arm so the at least one end effector traverses, with the arm as a unit, in a first direction relative to the frame, and traverses linearly, relative to the transport arm, in a second direction, and an edge detection sensor mounted to the transport arm so the edge detection sensor moves with the transport arm as a unit relative to the frame, the edge detection sensor being a common sensor effecting edge detection of each wafer simultaneously supported by the end effector, wherein the edge detection sensor is configured so the edge detection of each wafer is effected by and coincident with the traverse in the second direction of each end effector on the transport arm.
Abstract: An implement operating apparatus has a U-shaped drive frame supported on drive wheels, each pivotally mounted about a vertical wheel pivot axis. A steering control selectively pivots each drive wheel. A power source is connected through a drive control to rotate the drive wheels in either direction. First and second implements are configured to perform implement operations and to rest on the ground and when the drive frame is maneuvered to an implement loading position with respect to each implement, the implement is connectable to the drive frame and movable to an operating position supported by the drive frame. When the implement is in the operating position, the steering and drive controls are operative to move and steer the drive frame and implement along a first travel path or a second travel path oriented generally perpendicular to the first travel path.
Abstract: Provided are methods and control units for building a database, for predicting a route of a vehicle, and for estimating the length of the predicted route. The method comprises: determining a geographical position of the vehicle; detecting a cell border of a cell, in a grid-based representation of a landscape in a database, corresponding to the geographical position; determining that the vehicle is entering the cell at the cell border; extracting a stored driving direction at the cell border from the database; detecting a cell border of a neighbor cell in the driving direction at the cell border; repeating the extracting and detecting steps; predicting the route of the vehicle; and estimating the length of the predicted route by adding an estimated distance through each cell of the predicted route.
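The cell-to-cell prediction loop above can be sketched as follows, assuming the database maps each grid cell to a stored driving direction at its border. The function name, the four-direction encoding, and the flat per-cell distance are all illustrative assumptions; the patent does not specify the data model.

```python
def predict_route(start_cell, border_directions, cell_size, max_cells=100):
    """Follow stored driving directions from cell to neighbor cell;
    return the predicted route and its estimated length."""
    moves = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
    route = [start_cell]
    length = 0.0
    cell = start_cell
    for _ in range(max_cells):
        direction = border_directions.get(cell)
        if direction is None:  # no stored direction: prediction ends here
            break
        dx, dy = moves[direction]
        cell = (cell[0] + dx, cell[1] + dy)  # neighbor cell across the border
        route.append(cell)
        length += cell_size  # estimated distance through each cell
    return route, length
```

The length estimate simply accumulates a per-cell distance, matching the abstract's "adding an estimated distance through each cell of the predicted route."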
Abstract: The purpose is to provide a tactile information conversion device, a tactile information conversion method, and a tactile information conversion program, which are usable for general purposes by presenting or sensing an arbitrary tactile feeling. In order to provide tactile information to an output unit capable of outputting physical quantities including electricity, force, temperature, vibration, and/or time and space, at least two or more of the physical quantities are selected according to a tactile feeling to be presented, tactile information for presenting the predetermined tactile feeling is generated based on the physical quantities that have been selected, and the tactile information that has been generated is output to the output unit.
Abstract: A system that incorporates teachings of the subject disclosure may include, for example, a method that identifies first and second gestures of first and second viewers in a proximity of a media center and associates the first and second gestures with first and second commands. A conflict is determined between the first and second commands, and in response a notification is provided via the media center. The notification requests a resolution to the conflict. A cue is detected from a viewer responsive to the presenting of the notification. The cue identifies a selected one of the first viewer or the second viewer, and control of the media center is assigned to that viewer responsive to the cue. Other embodiments are disclosed.
Abstract: A method for controlling a human-robot collaboration (HRC) system wherein the HRC system includes at least one manipulator having an end effector. The method includes using the end effector in a first operating mode, wherein the end effector is operated with reduced power; monitoring whether a desired object is manipulated when the end effector is used in the first operating mode; and increasing the power used to operate the end effector in order to use the end effector in a second operating mode when the monitoring indicates that the desired object is being manipulated.
Abstract: The invention relates mainly to a seat module intended to be installed in an aircraft cabin comprising: a seat, a cushion positioned near the seat, and an armrest that can move between a lowered position, and a raised position, wherein the armrest comprises an upper wall and a lower wall, such that the upper wall extends in line with the cushion when the armrest is in the lowered position, and the lower wall extends in line with the cushion when the armrest is in the raised position.
Type:
Grant
Filed:
April 5, 2016
Date of Patent:
August 4, 2020
Assignee:
Safran Seats
Inventors:
Christophe Ducreux, Charles Ehrmann, Benjamin Foucher
Abstract: A robotic cleaner includes a cleaning assembly for cleaning a surface and a main robot body. The main robot body houses a drive system to cause movement of the robotic cleaner and a microcontroller to control the movement of the robotic cleaner. The cleaning assembly is located in front of the drive system and a width of the cleaning assembly is greater than a width of the main robot body. A robotic cleaning system includes a main robot body and a plurality of cleaning assemblies for cleaning a surface. The main robot body houses a drive system to cause movement of the robotic cleaner and a microcontroller to control the movement of the robotic cleaner. The cleaning assembly is located in front of the drive system and each of the cleaning assemblies is detachable from the main robot body and each of the cleaning assemblies has a unique cleaning function.
Type:
Grant
Filed:
July 5, 2017
Date of Patent:
August 4, 2020
Assignee:
iRobot Corporation
Inventors:
Nikolai Romanov, Michael J. Dooley, Paolo Pirjanian
Abstract: A method for controlling movement of a mobile device includes obtaining an analyzable video from an imager on the device during its movement by obtaining at least one video from the imager, and analyzing, using a processor, each video to determine the presence of a fixed-in-position object in multiple sequentially obtained frames until a video is obtained that includes at least one fixed-in-position object in multiple sequentially obtained frames, which constitutes the analyzable video. This video is analyzed frame by frame to determine the distance and direction moved by the device, which are compared with the predetermined distance and direction intended for movement of the device to determine any differences, which result in changes in the movement of the device. Relocation of the device is achieved by recognizing a previously imaged fixed object in subsequent frames and comparing the position of the device at both times, with a deviation resulting in relocation.
Abstract: Provided are an action information learning device, a robot control system, and an action information learning method for facilitating cooperative work between an operator and a robot. An action information learning device includes: a state information acquisition unit that acquires a state of a robot; an action information output unit for outputting an action, which is adjustment information for the state; a reward calculation section for acquiring determination information, which is information about a handover time related to handover of a workpiece, and calculating a value of reward in reinforcement learning based on the determination information thus acquired; and a value function update section for updating a value function by performing the reinforcement learning based on the value of reward calculated by the reward calculation section, the state, and the action.
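The reward calculation and value-function update above can be sketched concretely. The abstract leaves the reinforcement-learning form unspecified, so this sketch assumes tabular Q-learning as one possible instantiation; the reward shape (positive for handovers faster than a target time), the action set, and all names are hypothetical.

```python
def reward_from_handover(handover_time, target_time=2.0):
    """Hypothetical reward: a handover at or under the target time is rewarded."""
    return 1.0 if handover_time <= target_time else -1.0

def update_q(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning value-function update, used here as one
    possible form of the 'value function update section' in the abstract."""
    actions = ("up", "down", "hold")  # assumed adjustment actions
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q
```

Each cycle, the device would observe a handover, compute the reward, and fold it into the value function before the next action is output.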
Abstract: A method for map constructing, applicable for real-time mapping of a to-be-localized area provided with at least one laser device, includes: taking a position of a mobile electronic device as a coordinate origin of a map coordinate system when a center of a mark projected by a first laser device coincides with the central point of a CCD/CMOS sensor; moving the mobile electronic device, with the coordinate origin as a starting point, to traverse the entire to-be-localized area, calculating and recording coordinate values of the position of each obstacle as it is detected by the mobile electronic device; and constructing a map based on the recorded mark information and corresponding coordinate values and the coordinate values of the position of each obstacle after the traversing process is finished.
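The coordinate bookkeeping in the abstract above can be sketched as follows: the device position at the moment the projected mark coincides with the CCD/CMOS center becomes the map origin, and each detected obstacle is recorded relative to that origin. This is an illustrative sketch; the class, its fields, and the offset-based obstacle model are assumptions.

```python
class MapBuilder:
    """Hypothetical sketch of the map-construction bookkeeping described above."""

    def __init__(self, origin):
        # origin: device position when the laser mark coincides with the
        # CCD/CMOS center; all map coordinates are relative to it.
        self.origin = origin
        self.obstacles = []

    def record_obstacle(self, device_pos, offset):
        # offset: obstacle position relative to the device at detection time.
        x = device_pos[0] - self.origin[0] + offset[0]
        y = device_pos[1] - self.origin[1] + offset[1]
        self.obstacles.append((x, y))
        return (x, y)
```

After the traversal finishes, the accumulated `obstacles` list (plus the mark coordinates) would be assembled into the map.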
Abstract: The present invention discloses a safety protection method of dynamic detection for mobile robots. The mobile robot is provided with a sensor that obtains obstacle information in the detection areas in front of the robot. When an obstacle appears in the detection area, the mobile robot progressively slows down and dynamically adjusts the detection area. If no obstacle is detected in the adjusted detection area, the mobile robot keeps on moving; if an obstacle is still detected in the adjusted detection area, the mobile robot keeps decelerating until it stops. The sensor sets different detection areas according to the traveling speed and traveling direction of the mobile robot, or presets the detection area according to the path and dynamically adjusts it while the mobile robot is running.
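One safety cycle of the deceleration and area-adjustment behavior described above can be sketched as a small control step. The deceleration rate, the boolean area-check inputs, and the function name are assumptions; the patent does not specify how the detection area is shrunk.

```python
def control_step(speed, obstacle_in_area, obstacle_in_adjusted_area, decel=0.2):
    """One safety-check cycle; returns (new_speed, keep_moving)."""
    if not obstacle_in_area:
        return speed, True                 # detection area clear: keep moving
    slowed = max(0.0, speed - decel)       # obstacle seen: progressively slow down
    if obstacle_in_adjusted_area:
        return slowed, slowed > 0.0        # still blocked: decelerate to a stop
    return slowed, True                    # adjusted area clear: keep moving
```

Repeating this step each sensor cycle reproduces the abstract's behavior: a persistent obstacle drives the speed down to zero, while a cleared (adjusted) area lets the robot continue.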
Abstract: A wet cleaning apparatus has a cleaning element for mechanically wet cleaning an area to be cleaned and a device section that supports the wet cleaning apparatus relative to the area. The wet cleaning apparatus comprises a displacement device that is designed for automatically causing a displacement of the cleaning element relative to the device section, or vice versa, in dependence on a state of motion and/or an error status of the wet cleaning apparatus, such that the cleaning element can be displaced from an operating position, in which it is lowered onto the area, into a distant position, in which it is lifted off the area. A detection device is assigned to the displacement device and is designed for distinguishing between a standstill of the wet cleaning apparatus and a motion of the wet cleaning apparatus.
Abstract: Methods for characterizing living plants, wherein one or more beams of penetrating radiation such as x-rays are scanned across the plant under field conditions. Compton scatter is detected from the living plant and processed to derive characteristics of the living plant such as water content, root structure, branch structure, xylem size, fruit size, fruit shape, fruit aggregate volume, cluster size and shape, fruit maturity and an image of a part of the plant. Ground water content is measured using the same technique. Compton backscatter is used to guide a robotic gripper to grasp a portion of the plant such as for harvesting a fruit.
Type:
Grant
Filed:
October 18, 2019
Date of Patent:
July 14, 2020
Assignee:
American Science and Engineering, Inc.
Inventors:
Aaron Couture, Calvin Adams, Rafael Fonseca, Jeffrey Schubert, Richard Mastronardi