What is Artificial Intelligence (AI)?
Artificial intelligence (AI) refers to the ability of machines to mimic human cognitive functions such as problem-solving, decision-making and learning.
AI encompasses various fields of computer science that aim to develop intelligent systems capable of performing tasks normally requiring human intelligence. Although AI does not yet replicate the full spectrum of human thought, it is making a significant difference in areas such as image analysis, speech recognition and strategic game playing.
AI - A Double-edged Sword
Advantages of AI
AI automates tasks, streamlines processes and rapidly analyses data. It can significantly advance workplace safety and even take over numerous hazardous tasks. By analysing accident and near-miss data, AI can uncover patterns and predict potential hazards, allowing proactive measures to be put in place. In addition, AI-powered wearables can monitor employees’ vital signs and warn them of potential health and safety risks such as fatigue or stress. AI undoubtedly contributes to scientific breakthroughs, improved efficiency and faster decision-making.
Disadvantages of AI
Disadvantages of AI include the absence of human judgement and empathy in decision-making, potential job displacement and the ethical implications of AI bias arising from training data. Surveillance through AI-powered cameras raises privacy concerns. In addition, over-reliance on AI can lead to complacency and reduced human vigilance, which may negate other intended benefits such as improved safety.
The EU AI Act
The use of artificial intelligence in the EU will soon be regulated by the AI Act, the world’s first comprehensive AI legislation. As part of its digital strategy, the EU will regulate AI to ensure transparency, fairness and accountability, protecting fundamental rights such as non-discrimination and privacy. The EU AI Act also aims to foster responsible development and deployment of AI, promoting trust and confidence in this transformative technology.
In April 2021, the European Commission proposed the first EU regulatory framework for AI. It requires that AI systems used in different applications be classified according to the risk they pose to users.
Unacceptable risk
Unacceptable-risk AI systems will include those that use cognitive behavioural manipulation of people or of specific vulnerable groups, for example voice-activated toys that encourage dangerous behaviour in children. Categorising people based on behaviour, socio-economic status or personal characteristics will be deemed unacceptable, as will biometric identification, although exceptions will be allowed for law enforcement purposes.
High risk
AI systems that negatively affect safety or fundamental rights will be considered high risk. They will be assessed before being placed on the market and throughout their lifecycle. These include systems used in toys, aviation, cars, medical devices and lifts. AI systems in specific areas that must be registered in an EU database, such as law enforcement, education and migration control, will also be categorised as high risk.
Limited Risk / General purpose
Limited-risk AI systems, such as those generating text, image, audio or video content, will need to comply with transparency requirements, including disclosing that the content was generated by AI. After initial interaction, the user should be able to decide whether to continue using the specific AI system.
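For readers who think in code, the tiered classification described above can be sketched as a simple lookup. This is purely illustrative: the tier names follow the Act's categories, but the example systems and the classify helper are hypothetical simplifications, not the Act's actual legal criteria.

```python
# Illustrative sketch of the EU AI Act's risk tiers as a lookup table.
# The example systems below are hypothetical, not drawn from the legal text.
RISK_TIERS = {
    "unacceptable": [
        "social scoring system",
        "behaviour-manipulating voice-activated toy",
    ],
    "high": [
        "medical device software",
        "exam-scoring system",
        "migration-control screening tool",
    ],
    "limited": [
        "text generator",
        "image generator",
        "chatbot",
    ],
}

def classify(system: str) -> str:
    """Return the risk tier for a known example system, else 'minimal'."""
    for tier, examples in RISK_TIERS.items():
        if system in examples:
            return tier
    # Anything not explicitly listed falls into the lowest tier by default.
    return "minimal"

print(classify("chatbot"))                # limited
print(classify("social scoring system"))  # unacceptable
print(classify("spam filter"))            # minimal
```

The default return value mirrors the Act's structure: systems not caught by a higher tier face only minimal obligations.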
Navigating AI potential and addressing its challenges is crucial
AI holds immense potential to improve all our lives, in the workplace and beyond, but its implementation requires careful consideration of legal, practical and ethical implications. Maintaining a balance between responsible use and technological advancement is essential to ensuring a safer and more efficient future for everyone.