Traversing the minefield of employees’ rights and AI
JANICE GEEL and THEMBELIHLE TSHABALALA ✼ Geel is an associate and Tshabalala is a candidate attorney at Werksmans Attorneys
Arena Holdings PTY
Business | Opinion
The meteoric rise of artificial intelligence (AI) is generating infinite possibilities, particularly when combined with other innovations such as neurotechnology. Unesco describes “neurotechnology” as the use of any electronic device to read or modify the activity of neurons — the specialised cells that transmit nerve impulses throughout the body.

Electronic devices used in neurotechnology range from invasive technology, such as microelectrodes placed directly onto the brain, to non-invasive technology such as magnetic resonance imaging, which maps brain activity to identify brain tumours and developmental problems. Neurotechnology also includes wearables such as smartwatches and virtual reality headsets that monitor the user’s heart rate. Combined with AI, these devices can collect neurological, physiological and cognitive information from which personal information can be inferred.

So what does this mean for the user’s mental privacy and human rights?

Unesco states that neurotechnology has immense potential to improve learning and cognition, facilitated by voice-to-text creation and virtual and augmented reality supported by brain control, to name a few. But it warns that these advancements present a novel ethical dilemma: the need to introduce specific human rights to prevent impairments of the user’s cognitive liberty and mental privacy.

It adds that advances in neurotechnological applications must consider the potential consequences for the user’s autonomy, privacy, consent and dignity, as users may not always be aware that their neurological information is being processed and used in conjunction with AI to draw inferences about their behaviour, emotions, cognitive abilities and productivity, and to predict their decisions.
Some companies have integrated neurotechnology into the workplace from as early as 2014, and it is unclear whether their employees were aware that their employers were monitoring, among other data, their neurological information. These companies have deployed and continue to use neurotechnologies such as:

● Wireless sensors in the hats worn by workers on production lines to conduct emotional surveillance, combined with AI algorithms that aim to detect workplace rage, anxiety and sadness; and

● SmartCaps that monitor employees’ brainwaves for fatigue and relay the neurological information to the employee in real time, potentially preventing injuries.

Employers may use the neurological information gleaned from employees for other purposes, such as to inform decisions on promotions, retrenchments and dismissals — and also to differentiate and discriminate against applicants. The application of neurotechnology at work may therefore be problematic given the South African labour legislative framework.

Section 7 of the Employment Equity Act (EEA) prohibits medical testing by employers unless legislation permits or requires the test to be conducted, or unless it is justifiable in light of medical facts or the employee’s employment conditions, among other factors. In addition, section 8 of the EEA prohibits psychological testing unless the test has been scientifically shown to be valid and reliable, is applied fairly to all employees and has been certified by the Health Professions Council of SA or another regulatory body.

In workplaces governed by this legislation, and depending on the health and safety risks at a specific workplace, employers are permitted to conduct breathalyser tests. Parallels could be drawn between breathalyser tests and tests conducted using AI and neurotechnologies to measure rage, anxiety, sadness and fatigue; consequently, these types of tests are more likely to constitute medical tests than psychological tests.
In circumstances in which an employer cannot rely on legislation to justify the use of a neurotechnology or AI medical test, the employer could nevertheless conduct such a test where it is justifiable in light of medical facts, employment conditions, social policy or the inherent requirements of the job. However, other than in respect of the inherent requirements of a job, these justification criteria in terms of the EEA remain largely untested by the labour court. Consequently, an employer who relies on them to validate the use of AI tests will not be able to draw on the support of established authorities.

But assuming an employer can justify the use of the test in terms of the EEA, it would need to ensure the test measures only what the employer requires and does not extend beyond this scope by collecting information the employee or applicant has either not consented to or for which there is no work-related justification. The employer would also need to ensure the data acquired are applied consistently to all staff and do not discriminate against any group of employees.

An additional challenge is that the processing of personal information would invoke the Protection of Personal Information Act, which imposes specific requirements that must be complied with.

Given the speed with which neurotechnology and AI are taking over the daily activities of personal and work life, the South African legal landscape will need to evolve quickly to keep up with social and technological changes.