
Minimising Driver Startles from Autonomous Vehicle Take-over Requests

Paper Type: Free Essay Subject: Engineering
Wordcount: 5416 words Published: 14th Jul 2021


ABSTRACT

Contemporary research on take-over requests has not fully transitioned from early-stage, one-size-fits-all designs to designs adapted to individual levels of response. We found a paucity of research into transitions in conditionally automated vehicles, where drivers have different levels of situational awareness. Studies have shown that physiological measurements of individual drivers may provide better insight into each driver's mental behavior and performance. The aim of this preliminary study was to take an initial step toward applying various physiological data sources within the limited take-over time budget, for two common take-over request (TOR) modalities used in conditionally automated vehicles.


INTRODUCTION

Conditionally automated vehicles (Levels 2 and 3 of automation) have been introduced with the aim of rapidly improving functionality, to the degree that highly automated driving will be introduced to the general public within the next few years (VolvoCars, 2017; Tesla Motors, 2017). Conditionally automated vehicles let drivers take their hands off the wheel and their attention away from the primary task of driving. Currently, legal restraints dictate that a transition to driver take-over is made based on preset requirements or limitations of the autonomous vehicle (SAE, 2014; Lee, 2018). In the transition, the automated system informs the driver of the vehicle status and asks for a transition to manual driver control. This request for manual control is known as a take-over request (TOR). It requires drivers to take over from the automation system within a given, limited time budget (from the moment the alarm goes off until a collision). However, shifting from an active controller role to that of a passive monitor causes drivers to stay out of the loop, which can in turn cause a loss of situation awareness (Endsley & Kiris, 1995) and, in the long term, of driving skills (de Winter, Happee, Martens, & Stanton, 2014). This has been shown in recent investigations into fatal high-level automation accidents (Endsley, 2017; Banks, Plant, & Stanton, 2017), where the predominant cause of safety issues was a failure to provide adequate warnings for drivers to resume control (Griggs & Wakabayashi, 2018). Hence, what is missing are hand-over strategies that actively monitor the human-factors constructs that most influence driver performance, such as situational awareness and mental workload (Paxion, Galy & Berthelon, 2014), and that keep drivers vigilant and in the loop even when their attention rests on another task for a prolonged time.
On the surface, any number of auditory and visual notifications might seem adequate, but studies have shown that certain notification methods may in fact startle and stress the driver, leaving them less capable of making a life-saving decision as a result of degraded situation awareness (Dekker, 2002; Bliss & Acton, 2003). In this study, we focus on the influence of workload, stress, and alarm type on take-over behavior with the help of physiological monitoring systems.

Autonomous vehicles have not been able to cope with all driving conditions, as evidenced by recent fatal crashes in which autonomous systems failed to detect a pedestrian, a poorly striped lane, or a truck (Claybrook & Kildare, 2018; Levin & Woolf, 2016). Some of these incidents could have been avoided with higher sensor sensitivity and by informing drivers about the abnormalities. As a result, despite great advances in the field of autonomous vehicles and the rapid growth in demand for them on the road, increasing the frequency of transitions between manual and autonomous control could prove cognitively overwhelming for drivers. However, most research in human factors has still not deeply considered human physiological aspects. On the one hand, considerable research has focused on the time budget, conducted in driving simulators (Damböck, Weißgerber, Kienle, & Bengler, 2013; Gold, Damböck, Lorenz, & Bengler, 2013) and naturalistic settings (Eriksson, Banks, & Stanton, 2017). On the other hand, it is mainly the variation of TOR modality that has been investigated (Melcher et al., 2015; Gold et al., 2017). Therefore, there is still a need for a system that continuously considers a driver's physiological responses if one is to properly inform the driver.

Although there was a belief that automation could increase mental workload (Young & Stanton, 2002), a meta-analysis by de Winter et al. (2014) showed contrasting results: in fact, increasing the automation level reduced mental workload. However, separate studies have shown the negative impact of driving in autonomous mode on mental workload, take-over performance, and reaction times (Strand, Nilsson, Karlsson, & Nilsson, 2014; Zeeb, Buchner, & Schrauf, 2015; Bueno et al., 2016). Physiological data have been used as an essential instrument for understanding and interpreting a driver's mental status. Applying neuropsychological and physiological measurements to drivers, to investigate the relationship between mental behavior and performance while taking over, could provide a profound understanding of which modalities make for a useful TOR in autonomous vehicles. While the research cited above has explored the best TOR modality, time budget, and driver behavior, relatively few studies have investigated drivers' cognitive states at the time of transition.

 

To obtain the psychophysical state of the driver as objectively and accurately as possible, there is a need to shift from simple questionnaires to a direct assessment of driver physiological responses and behavioral patterns (Chuang, 2015; Jap, 2009). Therefore, this study has taken physiological data into account as the most reliable source for analysis of workload, stress, and cognitive state. Among the available physiological responses, we selected the following, which produce reliable measurements with the high temporal resolution necessary to detect vigilance differences during a TOR: (1) electroencephalogram (EEG; electrical activity of the brain), (2) eye tracking, (3) photoplethysmography (PPG; optical measurement of blood-volume changes, used to derive heart activity), and (4) galvanic skin response (GSR; electrical activity of the skin).

The primary purpose of the research in this paper is to study the psychophysical state of the driver by applying various physiological data streams to two common TOR modalities (visual-auditory and generic auditory), within the limited take-over time budget, and to investigate which modality causes more stress and workload for a driver in a passive monitoring role. This study provides elements of possible support for the prevention of accidents in conditionally automated vehicles.

METHODS

Driving simulator. The experiment was conducted in a high-fidelity driving simulator with the capability of 360° movement and the ability to provide real-time feedback to the driver (401cr motion system by Force Dynamics, Fig. 1). The simulator was equipped with three 32" LCD screens with 1024×480 resolution, giving a 120° horizontal field of view. The driving simulator was controlled by PreScan software. PreScan is widely used by many automotive OEMs and suppliers for concept studies and algorithm development to test advanced driver assistance systems (ADAS) and autonomous vehicles.

Eye tracker. In highly automated driving, it is likely that drivers will engage in non-driving-related tasks. An eye tracker lets us identify whether hazardous objects in the visual frame were found by the driver after receiving each TOR, and helps assess how long the driver focused on those objects. One of the main eye-tracking measures is observation time, calculated as the ratio between the fixation time and the time during which the object (e.g., pedestrian, truck, obstacle) appears in the visual frame. The smaller the ratio, the less important the object is to the driver (or not important at all), or the driver failed to detect the object properly. Therefore, to capture the driver's alertness to objects after receiving a TOR, an eye tracker was used: a wearable pair of eye-tracking glasses (Tobii Pro Glasses 2, Danderyd, Sweden; Tobii Pro Glasses 2, 2017) with a sample rate of 60 Hz. This device works wirelessly, which enabled us to capture exactly what the driver was attending to visually. Because the scenarios used consisted of many curves, and the driving simulator moves accordingly, the head-mounted eye tracker enabled us to measure the driver's gaze behavior accurately. To determine visual distraction, defined as not capturing the hazardous objects on the road, participants' gazes were manually coded with Tobii Pro Studio against the areas of interest (AOI).
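The observation-time ratio described above can be sketched in Python as follows (a simplified illustration; the interval format and example numbers are ours, not the Tobii export schema):

```python
# Sketch of the observation-time ratio: fixation time on an object's AOI
# divided by the time the object is visible in the frame. Data format and
# example values are illustrative, not the Tobii Pro export schema.

def observation_ratio(fixations, visible_start, visible_end):
    """fixations: list of (start, end) fixation intervals on the AOI, in seconds.
    Returns fixation time inside the visibility window / window duration."""
    window = visible_end - visible_start
    if window <= 0:
        return 0.0
    fix_time = 0.0
    for start, end in fixations:
        # clip each fixation to the visibility window
        overlap = min(end, visible_end) - max(start, visible_start)
        if overlap > 0:
            fix_time += overlap
    return fix_time / window

# Example: a pedestrian is visible for 4 s; the driver fixates it twice.
ratio = observation_ratio([(10.2, 10.8), (11.5, 12.0)], 10.0, 14.0)
print(round(ratio, 3))  # 1.1 s of fixation over a 4 s window -> 0.275
```

A ratio near zero would then be coded as the driver missing the object, matching the interpretation given above.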

Electroencephalogram (EEG). To measure mental workload and engagement, EEG was recorded using a wireless B-Alert X24 system with 24 channels at a sample rate of 256 Hz. Wireless EEG signals were sent via Bluetooth to the data acquisition system. To record the electrical activity of the brain, the sensor strip was placed according to the 10/20 extended standard, and the channels were referenced using the mean of the mastoid processes. Analysis of event-related potentials (ERPs) triggered by the take-over alarm was conducted. Prior to epoching, a band-pass filter (0.1–30.0 Hz) was applied to reduce linear artifacts in the data. Data were then epoched from 200 ms prior to alarm onset to 800 ms following alarm onset, prior to artifact flagging and rejection. ERP data were then baseline-corrected using the 200 ms prior to stimulus onset. Epoched data were first cleaned automatically with a 100 µV peak-to-peak threshold (with peaks measured and compared within 200 ms wide Hanning windows using 100 ms steps).
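The epoching, baseline correction, and peak-to-peak rejection steps can be sketched as follows. This is a simplified single-channel illustration (the actual pipeline ran on filtered 24-channel B-Alert data, typically with a dedicated toolbox; the function names and toy signal here are ours):

```python
# Sketch of ERP epoching and baseline correction as described above:
# epochs from -200 ms to +800 ms around each alarm onset, baseline-corrected
# using the 200 ms pre-stimulus mean, with a simple peak-to-peak rejection.

FS = 256  # sample rate, Hz

def epoch_and_baseline(signal, onset_idx, fs=FS, pre=0.2, post=0.8):
    """Extract one epoch around onset_idx and subtract the pre-stimulus mean."""
    start, stop = onset_idx - int(pre * fs), onset_idx + int(post * fs)
    epoch = signal[start:stop]
    n_pre = int(pre * fs)
    baseline = sum(epoch[:n_pre]) / n_pre
    return [v - baseline for v in epoch]

def reject_epoch(epoch, threshold_uv=100.0):
    """Flag an epoch whose peak-to-peak amplitude exceeds the threshold."""
    return (max(epoch) - min(epoch)) > threshold_uv

# Toy example: a flat signal with a small deflection after "alarm onset".
sig = [5.0] * 512 + [25.0] * 100 + [5.0] * 412
epoch = epoch_and_baseline(sig, onset_idx=512)
print(reject_epoch(epoch))  # 20 uV swing is below 100 uV -> False
```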

Heart Rate (HR). Fluctuation of the time intervals between adjacent heartbeats, which occurs as a result of emotional factors such as stress, can be measured as heart rate variability (HRV). In this study we focused on the time-domain indices of HRV, by which we could quantify the amount of HRV after receiving a TOR. The metrics used in this study are the standard deviation of normal beats (SDNN), the root mean square of successive differences between normal heartbeats (RMSSD), the number of adjacent NN intervals differing by more than 50 ms (NN50), and the proportion of NN50 to the total number of NN intervals (pNN50).
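Under their standard definitions, these four metrics can be computed directly from a list of NN (normal-to-normal) intervals. A minimal Python sketch (function name and example intervals are illustrative; the study used Kubios for this step):

```python
# Time-domain HRV metrics named above (SDNN, RMSSD, NN50, pNN50),
# computed from NN intervals given in milliseconds.
import math
import statistics

def hrv_time_domain(nn_ms):
    diffs = [b - a for a, b in zip(nn_ms, nn_ms[1:])]
    sdnn = statistics.stdev(nn_ms)                             # SD of all NN intervals
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))  # RMS of successive diffs
    nn50 = sum(1 for d in diffs if abs(d) > 50)                # successive diffs > 50 ms
    pnn50 = 100.0 * nn50 / len(diffs)                          # as a percentage
    return {"SDNN": sdnn, "RMSSD": rmssd, "NN50": nn50, "pNN50": pnn50}

# Example: six NN intervals around 800 ms.
m = hrv_time_domain([800, 810, 790, 860, 795, 805])
print(m["NN50"])  # two successive differences exceed 50 ms -> 2
```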

Galvanic Skin Response (GSR). Another physiological signal considered in this study is the galvanic skin response (GSR), which indicates the conductance of the skin. As skin conductance is governed by sweat secretion caused by sudomotor activity, any activation of the autonomic nervous system (ANS), such as stress, can be objectified by GSR (Lanata et al., 2014). Therefore, under emotional changes such as a growing stress level, the electrical resistance of the skin decreases while its conductance increases. Along with HR parameters, GSR has proven to be one of the most valid indicators of stress level; thus, it was used in this study to indicate the stress level of drivers after receiving a TOR. Skin conductance and heart rate were captured from the proximal phalanges of the index and middle fingers of the non-dominant hand using a Shimmer3 GSR+ unit.

Participants

One graduate student (female, 29 years old, with 5 years of driving experience) participated in this preliminary study, and her results are reported here. In the full study, a total of 35 participants aged 18-35 will be recruited from the University of Virginia, with the requirements that they hold a valid driver's license, have at least a year of driving experience, have normal vision without correction, and have no other health issues that may affect driving. The Institutional Review Board (IRB) at the University of Virginia has approved the requirements and the study (IRB# 20606: Cognitive Trust in Human-Autonomous Vehicle Interaction).

Warnings

In the visual-auditory condition, the steering wheel color turned to one of three different colors (Fig. 2). Green: the system is in autonomous mode and does not detect any hazardous object. Red: the autonomous system has detected a dangerous situation that might be beyond the system's limits, alerting the participant to take over. Blue: the participant has pressed both "on" buttons on the steering wheel and switched to manual mode. Five seconds after returning to a steady state with no dangerous situation detected, the visual steering wheel automatically turned back to green to allow the participant to switch back to autonomous mode. At the time of hazard detection, the auditory warning consisted of a single tone. For the generic auditory warning, the sound matched that used for the visual-auditory warning, with the addition of a high-frequency feedback tone (750 Hz, duration: 75 ms) presented at the time of the phase change.
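The three-color logic above amounts to a small state machine. The sketch below is hypothetical (the class name, one-tick-per-second timing, and method signatures are ours, not taken from the study's simulator software):

```python
# Minimal sketch of the steering-wheel color logic described above.
# GREEN: autonomous mode, no hazard; RED: hazard detected, TOR issued;
# BLUE: manual mode after the driver presses both "on" buttons.
# The 5 s steady-state delay is modeled as a counter, assuming one tick per second.

GREEN, RED, BLUE = "green", "red", "blue"

class WarningDisplay:
    def __init__(self):
        self.state = GREEN
        self.steady_seconds = 0

    def tick(self, hazard_detected, both_buttons_pressed):
        if self.state == GREEN and hazard_detected:
            self.state = RED                 # TOR: alert driver to take over
        elif self.state == RED and both_buttons_pressed:
            self.state = BLUE                # driver switched to manual mode
            self.steady_seconds = 0
        elif self.state == BLUE:
            if hazard_detected:
                self.steady_seconds = 0      # hazard resets the steady-state timer
            else:
                self.steady_seconds += 1
                if self.steady_seconds >= 5:
                    self.state = GREEN       # allow return to autonomous mode
        return self.state

d = WarningDisplay()
d.tick(True, False)           # hazard detected -> red
d.tick(True, True)            # driver takes over -> blue
for _ in range(5):
    d.tick(False, False)      # 5 s of steady state -> back to green
print(d.state)  # green
```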

Procedure

The driver was randomly assigned to a warning-condition order. For both warning conditions, the participant was instructed on how and when to switch between the autonomous and manual modes. Participants were also told that the warning indicated when the system detected a hazardous condition exceeding its system limits. A 5-minute training session was carried out for the participant to become familiar with both the driving simulator and the driving task. Subsequently, the participant drove 16 experimental drives, covering two weather conditions, divided into two blocks of eight drives. The participant was allowed a 5-minute break between blocks. Each drive included a single weather condition and lasted 3 minutes. Each trial comprised four hazardous objects or incidents: a pedestrian crossing the road, a cyclist, an obstacle in the same lane as the vehicle, and a truck in the lane next to the vehicle. Within each trial, the incidents were randomly ordered. As soon as an incident was detected and the system set off the warning, the participant was given 5-8 seconds prior to the incident to take over (Eriksson & Stanton, 2017). The participant was required to maneuver clear of the hazards by changing lanes or slowing down.

Experimental design

The experiment had a 2×2 repeated-measures factorial design with two within-subjects factors: warning modality (visual-auditory and generic tones) and weather condition (rainy and sunny). It was hypothesized that the visual-auditory warning would promote proactive responses, with changes in physiological responses indicating greater cognitive engagement.

Data Processing

Eye tracker: Time spent fixating, fixation duration, time to first fixation (TTFF-F), and pupil diameter were calculated using Tobii Pro Studio and iMotions. AOI analysis was performed to identify the moments when the participant's gaze was fixed on the hazardous objects.

Electroencephalogram (EEG):

It has been shown that EEG signals can be used to measure task engagement and mental workload against benchmarks (Berka et al., 2007). Berka et al. showed that engagement can be classified into four levels of the engagement index: high engagement, low engagement, relaxed wakefulness/distraction, and sleep onset. Note that the drowsiness probability combines sleep onset and distraction. To obtain the metric benchmark (baseline) for the later cognitive-state algorithm for engagement and workload classification, each participant was required to perform a practice session followed by three distinct tasks, which together took 7-9 minutes (varying by participant) (Johnson et al., 2011). During the practice session, instructions were given, and incorrect answers informed participants of their mistakes. Note: subjects unable to stay within the normal range on the tasks did not continue the study, without penalty.

The benchmarking tasks are:

  1. 3-Choice Vigilance Task (3CVT)
  2. Auditory Psychomotor Vigilance Task (APVT)
  3. Visual Psychomotor Vigilance Task (VPVT)

A. 3CVT

In this task, participants were required to discriminate among three geometric shapes, with a 70% chance of appearance for the primary shape and 30% for the two secondary shapes. Each stimulus was presented for a 200 ms interval. Participants were instructed to respond to the stimuli and select the primary object as fast as possible, only by pressing the spacebar. This task tends to maximize engagement by requiring active vigilance and attention as well as decision-making.

B. APVT

An auditory tone played every 2 seconds prompted the participant to tap in time with the sound for 5 minutes (Marcotte et al., 2013). Subjects were required to close their eyes, to avoid artifacts, and listen to the tone; they were asked to tap the spacebar as soon as they heard it. This task aims to elicit distraction, based on its sensitivity to failures in passive attention.

C.  VPVT

The visual psychomotor vigilance task presented a red 10 cm circular target in the center of the computer monitor every 2 seconds, for about 5 minutes. The subject was asked to tap the spacebar in time with the target image. The task essentially analyzes the participant's reaction time (RT) with respect to situation awareness and level of alertness (Baulk et al., 2008; Dorrian et al., 2005). Baulk (2007) in particular used this method with an interactive driving simulation to analyze simple reaction time by comparing performance under two conditions of increasing fatigue/sleepiness.

Heart Rate (HR). Heart rate data were transferred from the Shimmer device to the data acquisition system (iMotions). Inter-beat interval (IBI) data were preserved at 128 Hz resolution, along with the equivalent beats-per-minute (bpm) heart rate values. For each TOR, HRV parameters were computed at the baseline point in time and over a 5 s interval while receiving warnings. The collected data were then fed into Kubios for processing of the HRV parameters. Kubios is a widely used software package developed by the Biosignal Analysis and Medical Imaging Group of the University of Kuopio, Finland, for analysis of HRV (Tarvainen et al., 2014; Kubios, 2017). This software allows the analysis of HRV and all heart rate time-domain measures over discrete time.

Galvanic Skin Responses (GSR):

Preprocessing of the GSR data was performed using MATLAB (The MathWorks, Massachusetts, USA). Data were filtered using a low-pass Butterworth filter with a cut-off frequency of 10 Hz. Significant steering-wheel events were detected when the derivative of the steering-wheel orientation signal was higher than the mean of the signal plus two standard deviations (i.e., a sudden change of direction). A 20 s data window, with 5 s pre-stimulus and 10 s post-stimulus, ensured a data set large enough to compute the amplitude of the event-related EDR.
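The event-detection rule can be sketched as follows. The study performed this in MATLAB after the 10 Hz Butterworth low-pass (in Python one would typically use scipy.signal.butter/filtfilt); the filtering step is omitted here, the threshold is interpreted as mean + 2 SD of the derivative, and the function name and example signal are illustrative:

```python
# Sketch of steering-event detection: flag samples where the (absolute)
# derivative of the steering-angle signal exceeds mean + 2 * SD of the
# derivative, i.e. a sudden change of direction. Filtering step omitted.
import statistics

def detect_events(steering_angle):
    deriv = [abs(b - a) for a, b in zip(steering_angle, steering_angle[1:])]
    threshold = statistics.mean(deriv) + 2 * statistics.stdev(deriv)
    # indices (into the original signal) where a sudden direction change occurs
    return [i + 1 for i, d in enumerate(deriv) if d > threshold]

# Example: a mostly steady wheel with one abrupt swerve.
angle = [0.0, 0.1, 0.0, 0.1, 0.2, 5.0, 5.1, 5.0, 4.9, 5.0]
print(detect_events(angle))  # the jump from 0.2 to 5.0 is flagged -> [5]
```

Each flagged index would then anchor the 5 s pre-stimulus / 10 s post-stimulus window described above.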

 

Table 1. Mean (SD) of GSR and HRV measures by weather condition and warning modality.

| Condition       | GSR: number of peaks | HR: RMSSD     | HR: SDNN     | HR: pNN50     | HR: NN50       |
|-----------------|----------------------|---------------|--------------|---------------|----------------|
| Sunny           | 14.28 (0.45)         | 33.1 (13.8)** | 49.3 (17.4)* | 16.2 (18.3)** | 24.2 (18.7)*** |
| Rainy           | 15.6 (1.3)           | 23.2 (9.8)    | 44.8 (13.6)  | 8.3 (10.8)    | 12.5 (14.3)    |
| Visual-Auditory | 11.25 (2.88)**       | 28.6 (14.5)   | 48.4 (10.9)  | 13.0 (15.7)   | 19.1 (16.7)    |
| Auditory        | 9.18 (1.24)          | 29.4 (10.1)   | 49.1 (18.3)  | 13.85 (11.2)* | 20.3 (15.3)    |

RESULTS

Performance before and after receiving warnings in the two weather conditions, and the impact of the associated cognitive demand on heart rate, skin conductance, electroencephalography signals, and driving performance, were analyzed. Table 1 presents mean and standard deviation values for skin conductance and heart rate (commonly used HRV measures) for each TOR modality. A two-way ANOVA (weather condition × warning modality) at the 0.05 significance level was computed on each of the physiological measures, and EEG was examined as an exploratory factor.


Eye tracker. To obtain better insight into the eye movements that fell within the visual cue, areas of interest (AOI) were defined over the boundaries inside the display (Fig. 1). Paired t-tests for the visual cue of the visual-auditory warning on selected eye-tracking features were as follows: 1) the time spent on the visual cue in the two weather conditions, sunny (mean = 895 ms, SD = 491) and rainy (mean = 1749 ms, SD = 547), yielded a significant difference (t = 3.29, df = 5, p = 0.001); 2) the average fixation duration in sunny weather (mean = 177.0, SD = 24.9) and rainy weather (mean = 421, SD = 169) revealed a significant difference (t = 2.94, df = 5, p = 0.021).



EEG. Given the single-subject nature of this initial report, confirmatory analyses using mass univariate ERP approaches could not be conducted with this data set. However, visual inspection of average ERP patterns provides some indication of possible differences between TOR modalities. Specifically, the time period between 200 ms and 600 ms was analyzed (Fig. 3), given the presence of theoretically relevant ERPs in this window. Visual inspection of the difference in ERP waveforms between the auditory and visual-auditory TOR modalities (assessed from local peaks in raw voltage and in global field power) indicated peak differences in the negative direction at approximately 250 ms, 375 ms, and 500 ms, and in the positive direction at approximately 330 ms and 400 ms. These local peaks were then examined separately for the auditory and visual-auditory TOR signals to assess the likelihood that the differences occur at known, theoretically relevant ERPs. Three areas of peak difference were seen to correspond to known ERPs: the P2 (~250 ms), N2 (~330 ms), and P3b (~375 ms). In each case, the direction of the peak difference indicated that the ERPs were likely stronger in the visual-auditory TOR modality than in the auditory one.

GSR. The number of peaks was obtained from the phasic GSR data (frequency range: 0.16 Hz and above), as it correlates linearly with arousal, which reflects both emotional and cognitive responses. The preliminary results are shown in Table 1.
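Counting peaks in the phasic signal can be sketched as a local-maximum search above a minimum amplitude (a simplification of what a tool like iMotions does; the threshold value and example trace are illustrative):

```python
# Sketch of peak counting on a phasic GSR trace: a peak is a local maximum
# whose amplitude exceeds a minimum threshold. Threshold value is illustrative.

def count_peaks(phasic, min_amplitude=0.05):
    peaks = 0
    for prev, cur, nxt in zip(phasic, phasic[1:], phasic[2:]):
        if cur > prev and cur > nxt and cur >= min_amplitude:
            peaks += 1
    return peaks

# Example: two clear skin-conductance responses.
trace = [0.00, 0.02, 0.10, 0.04, 0.01, 0.03, 0.20, 0.06, 0.02]
print(count_peaks(trace))  # 2
```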

Heart Rate. For each TOR, HRV parameters were computed at baseline and over a 5 s interval while receiving warnings, as described in Data Processing; the resulting time-domain measures are reported in Table 1.

Reaction Time. To analyze the effect of TOR modality on reaction times in the two weather conditions, a two-way ANOVA was carried out. Reaction time was calculated as the time difference between warning onset and the control switch. The analysis showed that neither TOR modality (F(1,15) = 0.158, p = 0.699) nor weather condition (F(1,15) = 0.81, p = 0.781) had a significant effect on the participant's reaction time. Fig. 4 shows the reaction times in the two weather conditions for each modality.

CONCLUSIONS

This study examined a set of neurophysiological responses and driving performance measures across warning cues of two different modalities, in a high-fidelity driving simulator with randomly occurring roadway hazards under varying weather conditions. It was hypothesized that color cues augmented with a single-tone hazard warning (i.e., the visual-auditory modality) would be superior to the auditory-only warning, possibly due to a higher level of cognitive engagement and enhanced situation awareness. The results around this hypothesis were mixed: although no significant difference was observed in driving performance (mean reaction time), visual-auditory cues produced enhanced physiological responses (in terms of GSR and HR pNN50) as well as higher event-related potentials close to 300 milliseconds. This observation is not conclusive at this preliminary stage of data analysis. Despite no visible increase in driving performance, such modality-induced benefits in cognitive activation warrant further investigation and can be exploited for the optimal design of warnings in conditionally automated vehicles.

REFERENCES

  1. Banks, V. A., Plant, K. L., & Stanton, N. A. (2017). Driver error or designer error: Using the Perceptual Cycle Model to explore the circumstances surrounding the fatal Tesla crash on 7th May 2016. Safety Science, 108, 278–285.
  2. Bueno, M., Dogan, E., Selem, F. H., Monacelli, E., Boverie, S., & Guillaume, A. (2016, November). How different mental workload levels affect the take-over control after automated driving.19th International Conference on Intelligent Transportation Systems (pp. 2040-2045).
  3. Claybrook, J., & Kildare, S. (2018). Autonomous vehicles: No driver… no regulation? Science, 361(6397), 36-37.
  4. Damböck, D., Weißgerber, T., Kienle, M., & Bengler, K. (2013, October). Requirements for cooperative vehicle guidance. In 16th international IEEE conference on intelligent transportation systems (ITSC 2013) (pp. 1656-1661). IEEE.
  5. De Winter, J. C., Happee, R., Martens, M. H., & Stanton, N. A. (2014). Effects of adaptive cruise control and highly automated driving on workload and situation awareness: A review of the empirical evidence. Transportation research part F: traffic psychology and behaviour, 27, 196-217.
  6. Endsley, M. R., & Kiris, E. O. (1995). The out-of-the-loop performance problem and level of control in automation. Human Factors: The Journal of the Human Factors and Ergonomics Society, 37(2), 381–394. https://doi.org/10.1518/001872095779064555
  7. Endsley, M. R. (2017). Autonomous driving systems: A preliminary naturalistic study of the Tesla Model S. Journal of Cognitive Engineering and Decision Making, 11(3), 225–238.
  8. Eriksson, A., & Stanton, N. A. (2017). Takeover time in highly automated vehicles: noncritical transitions to and from manual control. Human factors, 59(4), 689-705.
  9. Eriksson, A., Banks, V. A., & Stanton, N. A. (2017). Transition to manual: comparing simulator with on-road control transitions. Accident Analysis & Prevention, 102, 227-234.
  10. Gold, C., Damböck, D., Lorenz, L., & Bengler, K. (2013). “‘Take over!’ How long does it take to get the driver back into the loop?” In Proceedings of the Human Factors and Ergonomics Society 57th Annual Meeting (pp. 1938–1942). Santa Monica, CA: Human Factors and Ergonomics Society
  11. Gold, C., Naujoks, F., Radlmayr, J., Bellem, H., & Jarosch, O. (2017). Testing scenarios for human factors research in level 3 automated vehicles. In International conference on applied human factors and ergonomics (pp. 551-559). Springer, Cham.
  12. Griggs, T., & Wakabayashi, D. (2018). How a self-driving Uber killed a pedestrian in Arizona. New York Times.
  13. Lanata, A., Valenza, G., Greco, A., Gentili, C., Bartolozzi, R., Bucchi, F., Bucchi, F., & Scilingo, E. P. (2014). How the autonomic nervous system and driving style change with incremental stressing conditions during simulated driving. IEEE Transactions on Intelligent Transportation Systems, 16, 1505–1517.
  14. Lee, J. D. (2018). Perspectives on automotive automation and autonomy. Journal of Cognitive Engineering and Decision Making, 12(1), 53–57.
  15. Levin, S., & Woolf, N. (2016). Tesla driver killed while using autopilot was watching Harry Potter, witness says. The Guardian, 1.
  16. Melcher, V., Rauh, S., Diederichs, F., Widlroither, H., & Bauer, W. (2015). Take-over requests for automated driving. Procedia Manufacturing, 3, 2867-2873.
  17. Paxion, J., Galy, E., & Berthelon, C. (2014). Mental workload and driving. Frontiers in psychology, 5, 1344.
  18. SAE On-Road Automated Vehicle Standards Committee. (2014). Taxonomy and definitions for terms related to on-road motor vehicle automated driving systems. SAE International.
  19. Strand, N., Nilsson, J., Karlsson, I. M., & Nilsson, L. (2014). Semi-automated versus highly automated driving in critical situations caused by automation failures. Transportation research part F: traffic psychology and behaviour, 27, 218-228.
  20. Winter, J. C., Happee, R., Martens, M. H., & Stanton, N. A. (2014, 11). Effects of adaptive cruise control and highly automated driving on workload and situation awareness: A review of the empirical evidence. Transportation Research Part F: Traffic Psychology and Behaviour, 27, 196-217.
  21. Young, M. S., & Stanton, N. A. (2002). Attention and automation: new perspectives on mental underload and performance. Theoretical issues in ergonomics science, 3(2), 178-194.
  22. Zeeb, K., Buchner, A., & Schrauf, M. (2015). What determines the take-over time? An integrated model approach of driver take- over after automated driving. Accident Analysis & Prevention, 78, 212-221.
  23. Tarvainen, M.P.; Niskanen, J.P.; Lipponen, J.A.; Ranta-aho, P.O.; Karjalainen, P.A. Kubios HRV—Heart rate variability analysis software. Comput. Methods Programs Biomed. 2014, 113, 210–220.
  24. Kubios—Kubios HRV | Heart Rate Variability Analysis Software. Available online: http://www.kubios.com/
  25. Dekker, S. W. (2002). Reconstructing human contributions to accidents: the new view on error and performance. Journal of Safety Research, 33(3), 371-385.
  26. Bliss, J. P., & Acton, S. A. (2003). Alarm mistrust in automobiles: how collision alarm reliability affects driving. Applied Ergonomics, 34(6), 499-509.
  27. Chuang, C.-H., C.-S. Huang, L.-W. Ko, and C.-T. Lin. An EEG-Based Perceptual Function Integration Network for Application to Drowsy Driving. Knowledge-Based Systems, Vol. 80, 2015, pp. 143–152
  28. Jap, B. T., S. Lal, P. Fischer, and E. Bekiaris. Using EEG Spectral Components to Assess Algorithms for Detecting Fatigue. Expert Systems with Applications, Vol. 36, 2009, pp. 2352–2359.

 
