This article introduces the basic formulation of the sensor-servo problem and then presents its most common approaches: vision-based, touch-based, distance-based, and audio-based control.
Cobot control is an established field that has already seen significant industrial commercialization. The techniques needed to control human-cobot interaction and collaboration, however, are still being developed. Research on these topics is carried out in the disciplines of collaborative robotics and physical human-cobot interaction (pHRI). In De Luca and Flacco's model, the cobot must exhibit three stacked levels of consistent behavior to attain safe pHRI:
a.) Safety
The first and most important characteristic of cobots is safety. Although recent efforts have been made to standardize cobot safety, the field is still in its early phases. Collision avoidance is a common method for addressing safety. This feature calls for high responsiveness, i.e., high bandwidth, along with robustness at both the perception and control layers. A minimal sketch of this idea is given below.
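The following Python sketch illustrates one common collision-avoidance scheme: a repulsive velocity that grows as the measured human-robot distance shrinks. The gains and influence distance are hypothetical illustrative values, not drawn from any standard or specific system.

```python
import numpy as np

# Sketch of repulsive-velocity collision avoidance: the closer an
# obstacle point, the stronger the velocity pushing the robot away.
# V_MAX and RHO are illustrative, not tuned for a real cobot.
V_MAX = 0.5   # m/s, cap on the repulsive speed
RHO = 0.4     # m, influence distance of an obstacle

def repulsive_velocity(p_robot: np.ndarray, p_obstacle: np.ndarray) -> np.ndarray:
    """Velocity vector pushing the robot point away from the obstacle."""
    diff = p_robot - p_obstacle
    d = np.linalg.norm(diff)
    if d >= RHO or d < 1e-9:
        return np.zeros(3)               # too far to matter (or degenerate)
    magnitude = V_MAX * (1.0 - d / RHO)  # grows as the distance shrinks
    return magnitude * diff / d

# Example: a human hand 0.2 m from the end effector along +y.
print(repulsive_velocity(np.array([0.5, 0.0, 0.4]),
                         np.array([0.5, 0.2, 0.4])))
```

In a real system this repulsive term would be evaluated at high rate from the perception layer and blended with the nominal task motion.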
b.) Co-existence
The ability of a cobot to share the workspace with humans is known as co-existence. This covers situations where a cobot and a human work on the same activity without physical contact or coordination, such as medical procedures where the cobot intervenes on the patient's body.
c.) Collaboration
Collaboration is the capacity to complete the cobot's tasks through direct interaction with humans. There are two ways to do this: contactless collaboration, where all actions are guided exclusively by the exchange of information (e.g., in the form of voice commands, body gestures, or other modalities), and physical collaboration, which involves explicit and intentional physical contact between human and cobot.
Establishing tools for intuitive control by human operators, who may be non-expert users, is essential, especially for the second mode. To engage more naturally from a human perspective, the cobot must be proactive in carrying out the desired activities and be able to infer the user's intentions.
In the robotics literature, two major approaches for task execution have emerged: sensor-based control and path/motion planning.
Sensor-based control closes the perception-to-action loop at a much lower level than path/motion planning, which makes it more effective and adaptable for pHRI. It should be emphasized that these control techniques have their roots in the servomechanism problem and bear strong similarities to the functions of our central nervous system. The best-known example is image-based visual servoing, which uses visual feedback alone to directly control the cobot's motion, without the need for a cognitive layer or a detailed environment model.
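To make this concrete, here is a minimal sketch of image-based visual servoing for a single point feature, using the classical interaction-matrix control law. The feature depth Z is assumed known, and the gain and feature coordinates are illustrative.

```python
import numpy as np

# Sketch of image-based visual servoing (IBVS) for one point feature.
# The interaction matrix L relates camera velocity to the feature's
# image-plane motion; the control law v = -lambda * pinv(L) * (s - s*)
# drives the feature error to zero.

def interaction_matrix(x: float, y: float, Z: float) -> np.ndarray:
    """2x6 interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, Z, gain=0.5):
    """6-DOF camera velocity (linear, angular) from the feature error."""
    L = interaction_matrix(s[0], s[1], Z)
    error = np.asarray(s) - np.asarray(s_star)
    return -gain * np.linalg.pinv(L) @ error

# Example: drive a feature at (0.1, -0.05) toward the image center,
# with the feature assumed 1 m in front of the camera.
print(ibvs_velocity([0.1, -0.05], [0.0, 0.0], Z=1.0))
```

Note how the loop runs directly on image measurements: no scene model or planner sits between perception and the velocity command.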
Recent advances in bio-inspired measuring technologies have reduced the cost and weight of sensors, thereby facilitating their use on cobots. These sensors include force/moment transducers, tactile skins, RGB-D cameras, and others. Depending on the task at hand, the works reviewed here use various combinations of sensory modalities. Overall, there are four types of cobot sensors, as mentioned previously:
a.) Vision-based
This encompasses techniques for analyzing and understanding visual data to generate numerical or symbolic information in a way that mimics human vision. Despite the complexity and high computational cost of image processing, the richness of this sense is unmatched. It is essential for understanding the surroundings and human intentions so that the cobot can respond appropriately.
b.) Touch-based
Proprioceptive force and tactile sensing are both considered forms of touch, with the latter involving actual physical contact with an external object. Proprioceptive force perception is comparable to that of muscle force: cobots can determine it by measuring joint position errors or by using torque sensors built into the joints. Relying on force control, a cobot can then employ either technique to infer and adapt to human intentions. Human touch, or somatosensation, on the other hand, is caused by the activation of neural receptors, mainly in the skin. These have served as design inspiration for artificial tactile skins, which are widely employed in human-cobot collaboration.
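As a sketch of how such force measurements can drive compliant, intention-following behavior, the hypothetical 1-D admittance controller below maps a measured external force to a velocity command. The virtual mass, damping, and control rate are illustrative assumptions, not values from any particular cobot.

```python
# Sketch of a 1-D admittance controller: the cobot yields to a
# human-applied force measured by a joint torque or wrist force
# sensor, following the virtual dynamics M * a + D * v = f_ext.
M, D = 2.0, 15.0   # virtual mass (kg) and damping (N*s/m), illustrative
dt = 0.002         # control period (s), i.e. a 500 Hz loop

def admittance_step(v: float, f_ext: float) -> float:
    """Integrate the admittance dynamics one step; return the new velocity."""
    a = (f_ext - D * v) / M
    return v + a * dt

# Example: a constant 10 N push accelerates the end effector until
# damping balances the applied force (steady state v = f_ext / D).
v = 0.0
for _ in range(2000):
    v = admittance_step(v, f_ext=10.0)
print(round(v, 3))  # approaches 10/15, about 0.667 m/s
```

The design choice here is that the human's force, not a preplanned trajectory, shapes the motion: pushing harder moves the cobot faster, and releasing it brings it to rest.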
c.) Distance-based
Distance is the only one of these four senses that humans are unable to measure directly, although there are several instances of echolocation in the animal kingdom (such as in bats and whales). Cobots measure distance using capacitive, optical, or ultrasonic sensors. The importance of this particular "sense" in human-cobot collaboration is driven by the correlation between safety and separation from hazards.
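A minimal sketch of this distance-safety coupling is given below, assuming a hypothetical array of proximity sensors around the arm; the stop and slow thresholds are illustrative stand-ins, not values taken from any safety standard.

```python
# Sketch of speed scaling driven by measured human-robot separation:
# the cobot slows as the nearest detected object approaches, and
# stops below a protective threshold. Thresholds are illustrative.
STOP_DIST = 0.25   # m: protective stop below this separation
SLOW_DIST = 0.80   # m: reduced speed below this separation

def velocity_scale(sensor_readings: list[float]) -> float:
    """Scale factor in [0, 1] from the smallest measured separation."""
    d = min(sensor_readings)
    if d <= STOP_DIST:
        return 0.0
    if d >= SLOW_DIST:
        return 1.0
    return (d - STOP_DIST) / (SLOW_DIST - STOP_DIST)

# Example: four proximity sensors around the arm; the nearest reading
# is 0.5 m, so the cobot runs at roughly 45% of its nominal speed.
print(velocity_scale([1.2, 0.5, 0.9, 1.5]))
```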
d.) Audio-based
In humans, sound localization is performed through binaural audition: by exploiting auditory cues in the form of phase/level/time differences between the left and right ears, we can establish a source's elevation and horizontal position. Thanks to microphones, which artificially imitate this perception, cobots can "blindly" locate sound sources. While two microphones mounted on a motorized head are the norm for robotic hearing, other non-biological designs exist, such as a head equipped with a single microphone or an array of multiple omnidirectional microphones.
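As an illustration of the binaural principle, the sketch below estimates a source's horizontal angle from the time difference of arrival (TDOA) between two microphones. The microphone spacing and the far-field assumption behind the arcsine formula are assumptions made for the example.

```python
import numpy as np

# Sketch of binaural sound-source localization from the interaural
# time difference. For a far-field source, azimuth = arcsin(c * tdoa / d),
# where d is the spacing between the two microphones.
C = 343.0   # m/s, speed of sound in air
D = 0.18    # m, assumed distance between the two microphones

def azimuth_from_tdoa(tdoa: float) -> float:
    """Horizontal source angle (degrees) from the time difference of
    arrival; the sign indicates which microphone the source is toward."""
    s = np.clip(C * tdoa / D, -1.0, 1.0)  # guard against noisy estimates
    return np.degrees(np.arcsin(s))

def tdoa_cross_correlation(left: np.ndarray, right: np.ndarray, fs: float) -> float:
    """Estimate the TDOA (s) as the lag maximizing the cross-correlation
    of the two microphone signals."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    return lag / fs

# Example: a 0.3 ms interaural delay puts the source about 35 degrees
# off the head's forward axis.
print(round(azimuth_from_tdoa(0.0003), 1))
```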
Conclusion and Future Perspectives
Although vision and touch currently stand out as the most popular senses for collaborative robots, the advent of precise, cheap, and easy-to-integrate tactile, distance, and audio sensors presents some of the most exciting opportunities for the future. In particular, we believe that robot skins (e.g., on hands and arms) will simplify interaction, thereby amplifying the opportunities for human-robot collaboration over the coming years.