Human-Robot Interaction: Design of Adaptive Automation in Teamwork Increasing Trust

Introduction

 In today’s world, technological advances are too significant to ignore. Within the defense industry, human-robot teaming has been introduced, in which robotics enhances the work dynamic and provides support to military personnel. According to Oleson et al. (2011), robotic teammates have advanced capabilities and can compensate for human limitations. However, making the team dynamic function efficiently is difficult. Humans commonly think of robots as perfect: robots are supposed to reduce human workload and be free of error. Furthermore, there is increasing demand to place robots in decision-making and situational-awareness roles, where the robot cannot work entirely alone. Kanda (2012) identified the concept of Human-Robot Interaction (HRI) as a significant aspect of this teamwork approach. HRI can be described as the field dedicated to understanding and designing robotic systems for human use, but the actual interaction in HRI is carried out through communication between human and robot.

 Communication is an important factor in HRI, but other factors matter as well, most notably trust. Trust is highly influential when developing and cultivating a working relationship in HRI. There is no common language between human and robot, no conversation or debating of differences. Some factors present in human-human trust are absent from human-robot trust: human-robot trust reduces to whether the robot is right or wrong, and on that basis humans decide whether or not to trust. The idea of trust is intensely tested in teamwork settings, where humans rely on the robots. When a robot fails at a task or responds with a delay, the human’s trust decreases, which could potentially affect the overall mission or task.

Additionally, robot failure poses its own problem. The failure of a device that is supposed to have all the correct answers and guide human decision-making is viewed with dismay, and the trust, reliability, and communication efforts that humans extend toward robots then cease or decrease significantly. It is as though we forget that robots are man-made and function only as programmed. Those programmed instructions drive the automation processes the robots use and can lead them into error. According to de Visser and Parasuraman (2011), imperfect automation can affect an operator on several levels (including trust and reliance), yet the system can still show some level of sufficient performance.

Background

 For this research, various backgrounds should be taken into consideration to understand how this topic came into existence. It is important to visit the topics of Human-Robot Interaction (HRI), trust in human-robot teamwork, trust in automation, and adaptive automation (AA) to get a well-rounded perspective on the work already completed on each.

 HRI is a multi-disciplinary topic that draws contributions from fields such as human-computer interaction, artificial intelligence, the social sciences, and robotics. Dautenhahn (2013) summarized HRI as the science of studying and evaluating people’s attitudes and behaviors toward robotic relationships, with the hope of facilitating HRI that is efficient but also acceptable to people, able to meet users’ social and emotional needs, and respectful of human values. Oleson et al. (2011) cited the Robot Industries Association (RIA) definition of a “robot” as “a reprogrammable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks”. HRI is therefore not a new field; it has, however, recently begun to receive more attention for its newfound role within teamwork environments. As Oleson et al. (2011) put it, “Robotic teammates are advantageous because they are able to enhance the capabilities and compensate for the limitations of humans in extreme environments” (p. 175). Robots today play a significant role in team dynamics, especially in military combat, where they can reduce workloads and provide additional information in stressful environments. With improvements in technology and further research, teamwork environments should become more effective; reaching that level, however, will take not only technology but also humans trusting the robots.

 Trust is an important factor when considering human-robot teamwork. Sanders (2016) presents a model overview of the individual differences that are accounted for when defining trust; the model represents multiple levels of trust from multiple users across different contexts. In summary, trust appears to depend on the circumstances of the environment and/or the response from the robot, and on how the human interprets each of those. Sanders (2016) defined trust as an expectancy, or a promise, held by an individual or group. The human idea of trust is commonly applied to relationships such as human-human, human-robot, and human-automation. However, Sanders raised an important point about the major difference between them: in human-human relationships, intentionality can be attributed, whereas machines do not possess intentionality; it is only through the machine’s design that it expresses itself in such a way.

A different perspective to include is trust calibration. Trust calibration, the match between a system’s capabilities and the human user’s perception of those capabilities, is the key to appropriate robotic use (Lee & Moray, 1994). When a user has too little or too much trust in a robot, it can create problems for the team. Sanders (2016) gave an example: when users have too little trust in the robot, they intervene in the work or process before it is necessary; when users trust a robot too much, it can lead to complacency and a failure to supervise appropriately. This complacency occurs in both robot and automation interactions.
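To make trust calibration concrete, the sketch below (a minimal illustration; the learning rate, tolerance, and reliability figures are assumptions, not values from the cited studies) updates a user’s perceived reliability of a robot from observed task outcomes, then compares that perception against the robot’s actual capability to flag over-trust or under-trust in the sense of Lee and Moray (1994).

```python
# Minimal sketch of trust calibration: trust is well calibrated when
# the user's perceived reliability tracks the robot's actual capability.
# All numeric values here are hypothetical, for illustration only.

def update_perceived_reliability(perceived, outcome, learning_rate=0.1):
    """Nudge perceived reliability toward each observed outcome
    (1 = task success, 0 = task failure) via an exponential moving average."""
    return perceived + learning_rate * (outcome - perceived)

def calibration_state(perceived, actual, tolerance=0.10):
    """Classify the match between perception and true capability."""
    if perceived > actual + tolerance:
        return "over-trust: risk of complacency and lax supervision"
    if perceived < actual - tolerance:
        return "under-trust: risk of intervening before necessary"
    return "calibrated: perception matches capability"

if __name__ == "__main__":
    actual_reliability = 0.80          # robot succeeds on ~80% of tasks (assumed)
    perceived = 0.50                   # user starts out neutral
    outcomes = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # one sample run of task results
    for outcome in outcomes:
        perceived = update_perceived_reliability(perceived, outcome)
    print(f"perceived reliability: {perceived:.2f}")
    print(calibration_state(perceived, actual_reliability))
```

In this toy model, a run of failures drags perceived reliability below the robot’s true capability, which corresponds to the premature-intervention case Sanders (2016) describes, while a long run of successes can push perception above capability and invite complacency.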

Furthermore, most robots and machinery require some form of automation. The concept of automation is saving time by eliminating repetitive processes and using formulas or algorithms in their place for perfect recall. Automation is supposed to make tasks easier and more efficient, handle calculations, decrease workload, and speed up decision-making. Up to a certain point automation handles events well; it is when automation gets more complex that issues arise. Unreliable or untrustworthy automation erodes human-robot trust. In a study by Chen, Barnes, and Kenny (2011), users who interacted with a robot of low reliability showed reduced trust toward it, and they could also identify the level of impact.

Trust is not the only factor affected by automation. In some instances automation still requires human operators, and operators may suffer from fatigue and stress for various reasons. Lin (2017) stated that “operators may suffer from multiple sources of stress, such as long hours, shift work, interface difficulties, inefficiencies in control procedures, and conflict between domestic life or personal demands and military operations; the stress may result from the working environment such as exposure to loud background noise from the cooling systems or individual health and sleep issues” (pp. 13-14). These kinds of issues tend to degrade work performance because so much is required of the operator to ensure the other automated processes are functioning properly.

State-of-the-Topic

 There is a wide range of research on HRI teaming, imperfect automation, levels of trust in HRI, trust toward robotics, and so forth. These studies examine why and what causes failures, other considerations that may affect a given topic, and the factors needed to properly assess it. Throughout my research, my interest lay at the intersection of HRI, automation, human-robot teamwork (HR-Teamwork), AA, and trust, and I knew there had to be a way to incorporate all of these into one topic. That is when I looked closer at studies that did not readily offer a solution to automation errors in HR-Teamwork settings. “Adaptive automation increasing trust in teamwork” is focused on changing the team dynamic of HR-Teamwork.

 I would like to change that dynamic by adding a supervision method to the robot: the involvement of AA in the teamwork setting. The AA will allow control to pass back and forth between a user and a system. Allowing this control switch gives a case-by-case response to different events that may occur. Robots cannot be programmed for every situation that may arise, so being able to switch to a human operator in those times of need may prove more beneficial than an automation process running the whole event, as the sketch below illustrates. Additionally, AA could support a human operator who is supervising multiple systems, devices, or robots.
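As a rough illustration of this supervision method, the sketch below (the event names, confidence values, and threshold are hypothetical assumptions, not part of any cited system) routes each incoming event either to the automation or to a human operator, depending on how confident the automation is that it can handle the event on its own.

```python
# Minimal sketch of an adaptive automation (AA) handoff loop: control
# passes to the human operator whenever the automation's confidence in
# handling the current event drops below a threshold. All names and
# numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Event:
    name: str
    automation_confidence: float  # 0.0-1.0: how well the automation can handle it

HANDOFF_THRESHOLD = 0.6  # below this, control passes to the human (assumed value)

def route_event(event: Event) -> str:
    """Decide whether the automation or the human operator handles an event."""
    if event.automation_confidence >= HANDOFF_THRESHOLD:
        return f"automation handles '{event.name}'"
    return f"hand off '{event.name}' to the human operator"

if __name__ == "__main__":
    mission_events = [
        Event("routine patrol waypoint", 0.95),
        Event("known obstacle avoidance", 0.80),
        Event("unfamiliar terrain, degraded sensors", 0.35),  # unanticipated case
        Event("routine status report", 0.90),
    ]
    for event in mission_events:
        print(route_event(event))
```

A fuller version of this loop would also weigh the operator’s current workload before handing off, so that one operator supervising several systems is not swamped by simultaneous handoffs; that concern motivates the staffing question raised under Future Directions.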

Future Directions

 The future direction of this topic, and of similar articles, is to expand and investigate the use of AA further. There is an opportunity to study AA with multiple systems at different workload levels and see how the operator handles the intensity, or how intensity affects the outcome of completed tasks. Another potential idea is to determine the point at which multiple operators should be brought in to manage AA, so that productivity increases before any stress, work overload, or fatigue is induced; a rough way to frame that calculation is sketched below. With such findings, military units would in the future have a more predictive way to decide how to staff and run certain stations depending on the equipment in use. There is also the potential to open new ways of operating or programming automation systems.
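One purely hypothetical way to frame that staffing calculation (the workload units, per-system demands, and operator capacity below are all assumptions) is to sum the estimated supervisory demand of each AA-managed system and divide by the workload a single operator can sustain before overload.

```python
import math

# Hypothetical staffing sketch: how many operators are needed to keep
# per-operator workload below a sustainable ceiling? Units are arbitrary
# and every number below is an illustrative assumption.

SYSTEM_WORKLOADS = {         # estimated supervisory demand per AA-managed system
    "UAV-1": 0.30,
    "UAV-2": 0.30,
    "ground robot": 0.45,
    "sensor station": 0.20,
}
OPERATOR_CAPACITY = 0.70     # sustainable workload per operator (assumed)

total_demand = sum(SYSTEM_WORKLOADS.values())
operators_needed = math.ceil(total_demand / OPERATOR_CAPACITY)
print(f"total demand {total_demand:.2f} over capacity {OPERATOR_CAPACITY} "
      f"-> recommend {operators_needed} operator(s)")
```

Real estimates would of course have to come from the kind of empirical workload studies proposed above, rather than from fixed constants.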

Works Cited

  1. Chen, J. Y., Barnes, M. J., & Kenny, C. (2011). Effects of unreliable automation and individual differences on supervisory control of multiple ground robots. Proceedings of the 6th International Conference on Human-Robot Interaction (HRI ’11). doi:10.1145/1957656.1957793
  2. de Visser, E., & Parasuraman, R. (2011). Adaptive aiding of human-robot teaming. Journal of Cognitive Engineering and Decision Making, 5(2), 209-231. doi:10.1177/1555343411410160
  3. Kanda, T. (2012). Introduction: Human robot interaction. Retrieved from http://humanrobotinteraction.org/1-introduction/
  4. Kidwell, B., Calhoun, G. L., Ruff, H. A., & Parasuraman, R. (2012). Adaptable and adaptive automation for supervisory control of multiple autonomous vehicles. PsycEXTRA Dataset, 428-433. doi:10.1037/e572172013-089
  5. Lee, J. D., & Moray, N. (1994). Trust, self-confidence, and operators’ adaptation to automation. International Journal of Human-Computer Studies, 40, 153-184.
  6. Lin, J. (2017). The impact of automation and stress on human performance in UAV operation (Doctoral dissertation). Retrieved from http://etd.fcla.edu/CF/CFE0006951/Dissertation_Lin_2017_v5.2.pdf
  7. Oleson, K. E., Billings, D. R., Kocsis, V., Chen, J. Y., & Hancock, P. A. (2011). Antecedents of trust in human-robot collaborations. 2011 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA). doi:10.1109/cogsima.2011.5753439
  8. Sanders, T. (2016). Retrieved April 11, 2018, from STARS (University of Central Florida): http://stars.library.ucf.edu/cgi/viewcontent.cgi?article=6644&context=etd
  9. Vallverdu, J. (2015). Handbook of research on synthesizing human emotion in intelligent systems and robotics. Hershey, PA: Information Science Reference/IGI Global.

 
