Table of Contents
- Fully Autonomous Control
- Direct Control or Fully Autonomous Control: Limitations
- Key Design Issues
- Human-Robot Interaction
- Implementation Design of Autonomous Obstacle Avoidance
1.1 Conventional System Model
There are three system models used to operate vehicles: direct control, supervisory control, and fully autonomous control. The direct control model became widespread in the 1900s. Supervisory control, originally known as meta-control, was developed in the 1960s as a way to describe operators working in discontinuous control loops (Sheridan 1992). Fully autonomous control describes robot systems that rely on people to set high-level goals or select a high-level strategy, but which then operate independently of the person.
1.1.1 Direct Control
Direct control is the most common method of vehicle tele-operation: the operator drives the remote vehicle with hand-controllers while monitoring video on one or more displays (see Figure 2). Submersibles (ROVs), aircraft (RPVs), and ground vehicles (UGVs) have all been operated through direct control, and for a number of these vehicles this approach is still current.
In comparison with other system models, direct control is relatively easy to implement and cheap to operate. It requires the operator to close the control loop (see Figure 3). The largest share of responsibility rests on the human, although the robot (or the interface) may help with effector or sensing tasks. Specifically, the individual perceives the remote environment, makes decisions, and carries out the plan. Direct control is known to cause problems. Because every control decision depends on a human, performance is tied to human capabilities. Control station design and communication bandwidth are important factors in the effectiveness of a direct-control system, as are sensorimotor limits, training, skill, and knowledge.
Fully Autonomous Control
Fully autonomous control is something of a misnomer, since such systems rarely operate in a fully automatic way. In this system model, the robot receives tasks and high-level goals from the human and achieves them independently. Planning can be done before execution, interleaved with it, or performed during the task. Researchers in autonomous mobile robotics have sought to achieve fully autonomous control since the days of the Stanford Cart and SRI's Shakey.
To be specific, fully autonomous control means the robot itself achieves a discrete, high-level goal set by a human being. Very often the goals are quite abstract, allowing the robot to take more responsibility for how exactly the task is completed. Fully autonomous control is, as a rule, implemented with a robot control architecture (Hasemann 1995). Figure 5 displays a robot controller with just two layers: the lower layer is close to the actuators/sensors and is known as "reactive"; the top layer is slower, more deliberative, and performs more abstract reasoning.
The human deals with the robot through the user interface (Figure 4). As already mentioned, once a person in charge has given the robot a goal, the robot acts independently (for example, closing all control loops itself). While the robot operates, the person in charge monitors its performance through the interface display.
Direct Control or Fully Autonomous Control: Limitations
These system models suffer from the restrictions of the "robot as tool" / "human as controller" mode, despite having been used successfully. With direct control, tasks can be accomplished only while a human stays in the loop; the operator's skill and overall workload also limit performance. With fully autonomous control, the robot operates independently, which means it sometimes makes incorrect decisions, such as choosing a longer route to meet the set goal.
1.1.2 Supervisory Control
Supervisory control means dealing with the robot via the user interface. Having received the goal, the robot acts as a controller in either a major or a minor way, depending on the level of autonomy.
The concept of supervisory control grew out of research on how operators based on Earth might drive a lunar vehicle. The term derives from the analogy with a supervisor's interactions with subordinate staff (Sheridan, 1992). To enact supervisory control, the operator breaks the problem into a range of sub-tasks that the robot then executes. Today, virtually all research on supervisory control revolves around tele-manipulation and process control. The higher the competence of the robot, the longer it can operate autonomously (in other words, without human interference). Once the robot has been given control, operators supervise task execution by monitoring interface displays, which present visual images of the data being processed by one or more of the robot's perception modules (Figure 5).
Figure 6 shows the supervisory control system model. Sheridan (1992) says that the human is not limited to a mere supervisory role.
Instead, the human can exercise intermittent control of the robot by closing the command loop (sensor-perception-display-control-cognition-actuator). In addition, the operator may control certain variables while leaving the rest to the robot. The former method is called "traded control", the latter "shared control". In either case, the operator can interact with the robot at a variety of levels. For instance, when a robot runs a hierarchical controller, the operator may close the control loop at a relatively high, symbolic level (for instance, the top perception-cognition layer illustrated in Figure 6) or at a lower level, closer to the actuators.
Key Design Issues
Supervisory control makes us re-examine how tele-operation systems are designed, built, and used. Rather than developing the system purely from the human perspective, we should take a broader view that also considers the robot's perspective. Although plenty of design questions could be asked, one needs to explore the key issues that must be addressed to build a useful system.
Under supervisory control, the robot is free to use the human as an assistant in satisfying its needs. As a consequence, the robot must be aware. This does not mean the robot needs to be fully sentient; rather, it should at least be able to recognize when it needs to ask the operator for assistance. It should also be aware that getting help may take a long time. Therefore, if the robot is about to collide with an obstacle, it makes no sense to ask the operator whether it should stop, since the damage may already have occurred by the time the operator replies.
The following questions must be answered to make the robot aware: At exactly which level (and in what precise manner) does the robot need to model the human? How does the robot determine that its planned actions or selected solution are sub-optimal?
A robot operating under supervisory control should be self-reliant. Since it cannot always depend on people being available or providing accurate information, it must be capable of maintaining its own safety. Specifically, the robot must be designed to avoid hazards such as collisions or rollovers, and it should be able to act to safeguard itself when the need arises.
Under this type of control, the operator will likely not be able to direct every command, since their capacity is limited. Therefore, when a question goes unanswered, the robot has to be able to proceed by itself.
The conventional roles of operator and robot change under supervisory control. Rather than a subordinate waiting to be directed, the robot is an associate seeking guidance. When the robot asks the human for advice or assistance, the human may not be able to provide it. This means that people do not need to supervise or control the robot continuously: the operator can give directions when available, and the system can function alone when the person is not.
One of the main points is that human-robot interaction can be dynamic and surprisingly fluid under this kind of control, which is both beneficial and problematic. The benefit is the ability to engage human perception and cognition without demanding time-critical responses. The problem is that the relationship between the robot and the human (roles and expectations, for example) is not static. Since the robot may act without human instruction (to guarantee its own safety, for instance), it is a challenge for a human to construct a mental model of the robot or to predict how a given task will be accomplished.
1.1.5 User Interface Design
In traditional tele-operation the user interface serves the operator: displays present information to help the human make decisions, and mode changes are triggered by users. In a supervisory control system, however, the user interface must serve the robot too. Specifically, the interface may be expected to provide mechanisms by which the robot can draw the user's attention.
User interfaces are typically designed with user-centered methods nowadays. The basic goal of user-centered design is to support human activity: to help humans do things faster, with greater quality and fewer mistakes (Newman, 1995). As a rule, a range of usability or human-performance metrics (error rate, speed of performance) guides the design process. This method can be used to successfully develop a supervisory control interface.
Assessing the value of any design requires evaluation. Although evaluation can serve various functions, it is most commonly used to analyze strengths and weaknesses, find limitations, and estimate how well the system works. The goal of evaluation should not, however, be limited to computing a single measure of the system; its task should be to provide an informed critique that improves the system. This seems particularly useful when analyzing an innovative system.
Therefore, a formative approach is a viable way to evaluate supervisory control. It is necessary to find and apply evaluation methods that let people learn about the design and improve it. In particular, the evaluation should answer these questions: Are there any problems with the existing design? What changes should be made?
The challenge for supervisory control, hence, is to find adequate evaluation techniques that will facilitate the system's advancement and ensure better comprehension of the concept.
The aim should not be to report plain performance metrics for a certain task, but rather to make sure the control is a useful pattern for tele-operation.
This section of the paper has discussed vehicle tele-operation, the underpinning context of this project. Vehicle tele-operation is, put simply, controlling a vehicle at a distance. Although some people confine the term tele-operation to manual control, I believe that tele-operation embraces systems with autonomous functions, namely supervisory control.
Traditional vehicle tele-operation today employs three basic system models: supervisory control, direct control, and fully autonomous control. Both fully autonomous control and direct control have restrictions. Direct control makes every control decision human-dependent, so overall system performance is tied to the subjective capabilities of human operators. Under fully autonomous control, by contrast, everything is robot-dependent, including important decisions, so overall performance relies entirely on the robot, since interaction between the human and the robot is absent. In this context, the benefit of supervisory control is its ability to address the gaps in both kinds of control: it enables people to deal with the robot at a range of control levels.
Supervisory control can make tele-operation more effective and flexible. This type of control is especially useful in unknown or changing conditions, when difficult tasks must be solved, and when users with various qualifications are involved.
To significantly develop the supervisory control system, it is necessary to consider the robot's point of view, so that the system responds not only to human requirements but also to robotic needs. The important design issues to be dealt with are self-reliance, user interface design, awareness, human-robot interaction, and adaptability.
2.1. Chapter 2: System Design
The following section describes the design and implementation of an improved supervisory control system. The design consists of three types of system modules which work autonomously and are interlinked with an inter-process communication toolkit.
The robot module maintains the control system for mobile robot operation, accomplishing different tasks in unstructured environments without continuous human help. For instance, the robot avoids a range of situations that threaten people, property, or itself.
The interface module maintains a visual display through which the person converses with the robot, allowing the human to communicate with and help the robot.
The Bluetooth module, a common configuration for a direct control system, provides communication between the two parties. Through the described interface, the human can control the robot by clicking on each function.
This section starts with general information about the general design, concentrating on design approach, design principles, and system architecture. A description of the Robot, Interface and Bluetooth modules aimed at maintaining vehicle tele-operation is provided.
2.1.2 Principles of the Design
To create the supervisory control system, it is necessary to establish a set of design principles grounded in the ideas of human-robot interaction, the human-to-robot communication channel, and supervisory systems. Researchers working on human-robot interaction have developed three guidelines:
- Endeavor to make the robot more human-like, so that it may operate freely, easily understand a given command, and act accordingly.
- The robot must be able to attend to its own safety irrespective of human participation in giving commands (for instance, if the robot discovers an obstruction in front of it, it should act on its own instead of waiting for human instruction).
- Mediate interaction via a user interface (without any proximal interaction).
The idea that the human and the robot should be able to operate productively and efficiently as a unified team in supervisory control systems is linked to these principles:
- Human-robot communication should focus only on task-relevant details, which enables it to act effectively;
- Take into account the fact that people and robots have substantially asymmetric abilities;
- Employ autonomy as a function of the given situation: the human's reaction should drive the level of autonomy;
- Be aware of the fact that different users possess different abilities.
After determining the design principles, an understanding was obtained of how the structure and flow of the communication channel can guide interaction between the human and the robot. Firstly, it is important to focus on vehicle mobility issues (for instance, obstacle detection, navigation, or collision avoidance). Hence, the design includes command tools for, among other things, remote driving and control tasks.
Secondly, to ensure an effective robot-to-human communication channel, it was decided to stress support for functions that operate autonomously. This means that the robot may receive human help while achieving goals.
Finally, the requirements for the human-robot communication channel to directly influence system operation were determined. In particular, a system is required that alters its conduct on the basis of the commands the operator gives the robot. Hence, the design incorporates a robot controller that stresses autonomy and awareness. Specifically, while the robot is functioning it is able to modify its conduct (in other words, to raise its autonomy level automatically): when the robot is working, it can change its behavior if there are no guidelines from the human.
The supervisory control design can be described as follows. The supervisory control is perceived as a distributed set of processing modules, each with specific functions, working autonomously. The modules are connected to the robotic vehicle through the inter-process communication toolkit (for instance, Bluetooth communication as one form of the direct control method).
Three types of modules are found in the architecture: robot, interface, and communication (Figure 7).
Robot modules regulate how the robot communicates and deals with the human. The control method provides the mechanisms for perception, hardware operation, localization, and planning. The design of robot controllers has long been a subject of research and debate in the robotics community. The primary approach has been to construct a safeguarded robot controller that sustains various levels of robot autonomy and incorporates models for safeguarding as well as specific task functions (for example, motion detection for surveillance).
Communication modules are known as "system"-level modules. They function as a kind of glue between the robot and the operator. This approach was used as a form of direct control: it enables the human to instruct the robot directly.
Interface modules help the human work with the robot and integrate the operator into the system. They mediate human-robot conversation, giving the human an opportunity to interact with the robot and provide assistance to it. Moreover, interface modules produce feedback displays that enable the operator to monitor the robot. In addition, they process human input (responses, various commands to the robot) and transform and transmit that information to the robot. A user interface for vehicle tele-operation was developed during this project.
This section described the principles used to produce a supervisory control system. The main principle is to construct a robot that cares about its own safety irrespective of human guidance. In this context, the role of the human is to act as an aid that directs the robot when necessary (for instance, when the robot misjudges a situation); above all, the human and the robot must cooperate efficiently to create a supervisory control system. The system architecture was developed to ensure successful communication between the human and the robot, with all of these principles taken into consideration. There are three types of modules (communication, interface, and robot) in the system, connected with one another through an inter-process communication toolkit.
3.1. Chapter 3: Robot Vehicle System
The majority of robots today use a two-wheel differential drive, meaning that all movement is produced through a single pair of independently driven wheels. The robot is built around the Arduino POP-168 main board. The schematic diagram of the module is shown in Appendix A.
The Arduino POP-168 main board is powered by 4 AA batteries, which also power the two DC motor ports and the other sensor ports. The following elements are attached to the main board: an LCD, a beeper, a couple of push buttons, two DC motor wheels, a GP2D120 range sensor, and a servo motor. This hardware resides on a robot chassis constructed by Innovative Experiment.
The robot is powered by a Java virtual machine that supports multithreading. This is important because numerous tasks need to run at the same time. The POP-BOT uses the Arduino IDE, a Java-based development environment, to write, build, and upload programs to the robot's main board. This software comes with an API specification of methods and classes already constructed for public use. A serial cable is used to upload the program to the robot. The robot's main board provides 10 digital inputs and outputs, a host interface (RS-232, 115.2K baud), 3 IC ports, a buzzer, an infrared receiver, a 16x2 LCD, 7 analog/digital inputs, a power switch, 2 push buttons, a thumbwheel, and a wall brick power connector (INEX Robotics 2010).
The robot runs on an onboard ATmega168 with a 10-bit ADC, 512-byte EEPROM, 1KB RAM, 16KB flash program memory, and a 16MHz clock. The API specification notes that the flash memory has a limited life of 10,000 writes and should be written sparingly, while the 512-byte EEPROM is estimated to have a lifetime of 100,000 writes. The bootstrap loader and the Java Virtual Machine use 4K bytes of memory (INEX Robotics 2010).
3.1 Basic Operation of Driving DC Motor
A DC (direct current) motor is used to move the robot left, right, forward, or backward. As mentioned previously, the DC motor is driven by a PWM signal; it moves at maximum speed when the duty value is set to its highest level, 255.
Four pins connect the motors to the robot controller, two per DC motor, and each pin is configured as an output. For the first DC motor, pin 3 drives the attached wheel forward and pin 5 drives it backward. For the second DC motor, pin 6 moves the second wheel forward and pin 9 moves it backward.
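As a rough illustration, this pin mapping can be modeled as a small helper that returns the PWM level each pin should receive. The struct, function names, and the assignment of wheels to turning directions are illustrative assumptions, not the project's actual firmware.

```c
#include <assert.h>

/* PWM level (0..255) written to each of the four motor pins. */
typedef struct { int pin3, pin5, pin6, pin9; } PinLevels;

/* speed is a PWM duty value, 0..255 (255 = full speed). */
PinLevels drive_forward(int speed)  { PinLevels p = { speed, 0, speed, 0 }; return p; }
PinLevels drive_backward(int speed) { PinLevels p = { 0, speed, 0, speed }; return p; }

/* Turning (assumed scheme): drive one wheel while the other stays stopped. */
PinLevels turn_left(int speed)      { PinLevels p = { 0, 0, speed, 0 }; return p; }
PinLevels turn_right(int speed)     { PinLevels p = { speed, 0, 0, 0 }; return p; }
```

On the real board these values would be written with `analogWrite` to pins 3, 5, 6, and 9 respectively.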
Changing the pulse width given to the DC motor decreases or increases the amount of power supplied to the motor, and therefore the motor speed. The voltage has a fixed amplitude but a variable duty cycle: the wider the pulse, the higher the speed.
The supply Vs sends the PWM signal to the DC motor; the speed depends on the on-time, Ton. During this time the DC motor receives the full voltage, so the wider Ton is, the more voltage the motor receives and the faster it revolves. The duty cycle is the ratio of Ton to the period (T), expressed as a percentage.
This can be calculated as follows: Duty Cycle (%) = (Ton / T) × 100.
Although the duty cycle determines the motor speed, the DC motor can only operate up to a limited rate: if the PWM frequency is excessive, the DC motor stops working because its operating limit is exceeded. For instance, the PWM signal in Equation 2 has a 20-millisecond period and a 50Hz frequency.
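The duty-cycle relation above can be sketched in a few lines of C; the helper names are illustrative, and the 0..255 mapping mirrors Arduino's `analogWrite` scale mentioned earlier.

```c
#include <assert.h>

/* Duty cycle: ratio of on-time (Ton) to period (T), in percent.
 * Both arguments are in milliseconds. */
double duty_cycle_percent(double t_on_ms, double period_ms) {
    return (t_on_ms / period_ms) * 100.0;
}

/* Frequency (Hz) from period (ms): f = 1000 / T. */
double frequency_hz(double period_ms) {
    return 1000.0 / period_ms;
}

/* Map a duty-cycle percentage onto the 0..255 analogWrite scale. */
int pwm_value(double duty_percent) {
    return (int)(duty_percent / 100.0 * 255.0 + 0.5);
}
```

For the 20 ms / 50 Hz example above, an on-time of 10 ms gives a 50% duty cycle.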
3.2 GP2D120 Infrared Distance Sensor
3.2.1 Reading GP2D120 with A/D Converter
The GP2D120's output voltage falls as the measured distance grows. For instance, a Vout of 0.5V corresponds to a distance of 26cm, while a Vout of 2V corresponds to 6cm. Connecting the sensor to the A/D converter module inside the microcontroller yields raw data from the A/D conversion; software is needed to convert this raw data into the actual distance.
3.2.2 How to Read Data from GP2D120 of the Robot
Arduino software has a dedicated function to read the value from an analog port: the analogRead function. The value in brackets is the analog input number (0 to 7); however, the robot exposes only inputs 3 to 7. The analogRead function returns integer data from 0 to 1023, the 10-bit A/D converter result.
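The raw reading can be converted to a distance in two steps: scale the 10-bit value to volts, then invert the sensor curve. The inverse-law fit below is only an assumption, anchored to the two calibration points quoted above (0.5 V → 26 cm, 2 V → 6 cm); for real use the curve should be taken from the GP2D120 datasheet.

```c
#include <assert.h>

/* 10-bit analogRead result to volts, assuming a 5 V reference. */
double adc_to_volts(int adc) {
    return adc * 5.0 / 1023.0;
}

/* Distance model d = a/V + b, with a and b solved from the two
 * calibration points (0.5 V -> 26 cm, 2 V -> 6 cm). Assumed fit. */
double volts_to_cm(double v) {
    const double a = 40.0 / 3.0;   /* ~13.33 */
    const double b = -2.0 / 3.0;   /* ~-0.67 */
    return a / v + b;
}
```

Composing the two functions turns an `analogRead` result directly into an approximate distance in centimeters.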
3.3 Servo Motor
The servo motor was joined to a webcam and used to rotate it from left to right, giving the robot an opportunity to examine and evaluate different angles around its location and giving the operator a broader view of the surroundings. The servo provides a movement range of 0 to 180 degrees.
The servo turn rate, also known as transit time, defines the servo's rotational velocity: it is the time the servo takes to move a given amount, as a rule 60 degrees. The robot's servo transit time is around 0.37 sec per 60 degrees with the camera load on top of the servo, which means a full 180-degree rotation takes more than a second.
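The rotation time follows directly from the transit-time rating; a one-line helper (illustrative only) makes the arithmetic explicit.

```c
#include <assert.h>

/* Time (seconds) to rotate a given angle, using the ~0.37 s per
 * 60 degrees transit time quoted above for the camera-loaded servo. */
double rotation_time_s(double degrees) {
    return degrees / 60.0 * 0.37;
}
```

For 180 degrees this gives about 1.11 seconds, which is why the sweep dominates the scan timing.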
3.3.1 Arduino Software Controlling Servo Motor
The Software Servo library is the key to operating the servo motor on the Arduino POP-168. The robot hardware does not use a PWM pin for the servo motor output; therefore, general-purpose ports Di7 and Di8 were used for servo output.
3.4 Autonomous Robot Control
The webcam is mounted on the servo motor horn, so the servo's movement points the webcam. There are 9 steps of moving and checking, following the illustration below.
By scanning all of these positions and reading the sensor's detection value at each one, the robot selects the highest value. Since the sensor yields the highest value at the position of an existing obstacle, the robot can locate where the closest obstacle is situated.
The movement of the webcam on the servo lets operators see the robot's surrounding area. The operator can decide on the robot's next step because the servo pauses for 6 seconds at each interval. If the human gives no reply about the direction, the robot keeps rotating to the right up to 180 degrees; if the servo reaches position 8 without detecting any obstacle, the robot goes forward and moves the servo back to position 0 to begin rotating again.
If, during the servo's movement, the sensor reads its highest value at positions 3 to 5, an object or evident obstacle lies in front of the robot. The robot then waits 10 seconds for the operator to take control of the situation by directing it onto the right path. If there is no response from the human, the robot moves backward for a second, turns right for one more second, and then moves forward to keep the operation going.
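The scan-and-decide logic above can be sketched as two small functions: one finds the peak sensor reading over the nine servo positions, and one decides whether that peak means an obstacle is straight ahead. The function names and the threshold parameter are illustrative assumptions, not the project's actual code.

```c
#include <assert.h>

typedef enum { GO_FORWARD, AVOID_OBSTACLE } Action;

/* Index (0..8) of the highest reading across the nine scan positions. */
int peak_position(const int readings[9]) {
    int best = 0;
    for (int i = 1; i < 9; i++)
        if (readings[i] > readings[best]) best = i;
    return best;
}

/* Obstacle ahead when the peak falls in the front-facing positions
 * 3..5 and the reading is high enough to matter (threshold assumed). */
Action decide(const int readings[9], int threshold) {
    int p = peak_position(readings);
    if (p >= 3 && p <= 5 && readings[p] >= threshold)
        return AVOID_OBSTACLE;
    return GO_FORWARD;
}
```

In the real system, AVOID_OBSTACLE would trigger the 10-second wait for the operator before the robot backs up and turns right on its own.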
Implementation Design of Autonomous Obstacle Avoidance
3.4.1 Simplest Algorithm for Obstacle Avoidance
The easiest way to avoid obstacles, without using the servo to examine an area, is to have the GP2D120 distance sensor watch for an obstacle within a 5 cm range. The robot moves forward if no obstacle is found; if it finds a barrier, it moves backward, then turns left and goes forward again.
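This simplest rule reduces to a single decision per control step. The sketch below captures it; the enum and the step function are illustrative, not the project's actual code.

```c
#include <assert.h>

typedef enum { FWD, BACK, LEFT } Move;

/* One control step of the simplest avoidance rule: forward when the
 * path is clear, otherwise back up, turn left, and resume.
 * Writes the move sequence into out (which must hold 3 entries)
 * and returns how many moves were produced. */
int avoidance_step(double distance_cm, Move out[3]) {
    if (distance_cm > 5.0) {   /* no obstacle within 5 cm: keep going */
        out[0] = FWD;
        return 1;
    }
    out[0] = BACK;             /* obstacle: back up ... */
    out[1] = LEFT;             /* ... turn left ... */
    out[2] = FWD;              /* ... and go forward again */
    return 3;
}
```

Calling this in a loop with fresh sensor readings reproduces the behavior of Figure 9's algorithm.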
This section described the components needed to accomplish the operation. Each component plays an important role: two DC motors drive the robot backwards, forward, to the right, or to the left; a webcam mounted on a servo gives the operator the ability to see all angles of the robot's location; and the GP2D120 distance sensor identifies the distance to an obstacle and underpins the robot's autonomous obstacle avoidance. The two algorithms provided show different ways to make an obstacle-avoiding robot that works autonomously (Figure 9 and Figure 10).
In contrast to this simple approach, the algorithm developed for this project uses the servo to rotate the webcam and sensor, detecting the distance between the robot and obstacles at different angles; if no obstacles are identified, the servo returns to the initial position of its scanning area and begins scanning again.