IT is said that we stand on the cusp of the “fourth industrial revolution”. The emergence and proliferation of artificial intelligence (AI) and machine learning is set to usher in a new era of productivity, efficiency and governance across public and private sectors.
In recognition of such potential, the Ministry of Information Technology and Telecommunication laid out a draft National Artificial Intelligence Policy last year, which reportedly was to be presented before the cabinet this month. The policy sets out ambitious targets for the use, scaling and proliferation of AI, while recognising — though almost in passing — the need to ensure its ethical and responsible use, which upholds the fundamental rights and privacy of users. Given the state’s existing performance on cyber safety, particularly of women, these assurances appear unrealistic.
AI is a broad field that encompasses the development of computer systems capable of simulating human learning, comprehension, problem-solving, decision-making and creativity. It employs tools such as machine learning and deep learning to analyse data (including text and images) to identify patterns, make predictions and generate human-like speech.
ChatGPT may be the most commonly encountered and used generative AI tool. Generative AI produces text and images in response to commands or prompts submitted by a user. The technology has been put to beneficial use in business settings to, for instance, improve efficiency through the automation of simple tasks, synthesise and analyse data to enhance business decision-making, and assist with creative undertakings.
However, there is a flip side. Deepfake technology — a form of generative AI — can create realistic but entirely fake images and videos of persons by analysing existing audiovisual data.
Many of us have come across doctored video clips of prominent personalities online — some comical and others more pernicious in intent and nature. In fact, the latter are predominant on the web: 98 per cent of deepfake videos are pornographic in nature; 99pc target women or girls. These statistics are not surprising. Cases involving deepfake videos (often sexual in nature) targeting women journalists and women politicians have made headlines in Pakistan in the recent past.
Technology-facilitated gender-based violence (TFGBV), such as deepfake content, also manifests as image-based abuse and blackmail, misinformation and defamation, impersonation, cyberstalking and violent threats. While men, too, are subject to cyber violence, it is a gendered phenomenon worldwide. In Pakistan, women make up the largest share of victims of online harassment. According to a Pew Research Centre study, 26pc of women aged 18-24 years experience cyberstalking, compared with only 7pc of men in the same age range. Cyberspace is a more unsafe place for women across the world.
AI has now altered the arena where TFGBV plays out, greatly enhancing the apparent authenticity and believability of misinformation and fake news propagated on the internet. This is not just on account of deepfake technology. Generative AI can be used not only to create cyber-harassment templates, but also to generate and modify false, yet convincing personal histories of women, perpetuating the cycle of misinformation and fake news.
TFGBV violates women’s right to dignity, privacy and non-discrimination. Many times, it culminates in physical violence. Issues of consent and intellectual property rights also arise with the (often) non-consensual use of copyrighted data by AI technologies — a matter that gained prominence last year in the Hollywood strike against the unlicensed use of actors’ AI replicas by motion picture studios.
AI regulation is an evolving field. Given the risks that have arisen with the increased deployment of and access to AI technology, there have been efforts worldwide to regulate its use.
Taking the lead in a human rights-centric approach to AI regulation, the EU Artificial Intelligence Act, 2024, positions safety and compatibility with fundamental rights and freedoms as the guiding principles of AI regulation. It bars the use of AI for biometric surveillance and the compilation of facial recognition databases (Article 5) and provides that where video, audio or image content is created with deepfake technologies, disclosure regarding such artificial generation or manipulation be provided (Article 52).
Unesco’s Recommendations on the Ethics of Artificial Intelligence Use (2023) also stipulate the protection of human rights and freedoms as the first guiding “value” in AI regulation. The US Blueprint for an AI Bill of Rights, a white paper published by the White House, articulates certain principles for the protection of civil rights and democratic values in the building, deployment and governance of automated systems.
Pakistan’s AI policy also recognises the particular dangers of AI to create “fake content such as text, images and videos”, and envisions that the AI Regulatory Directorate (ARD) will issue guidelines to address “possible spread of disinformation, data privacy breaches and fake news”. The exact mechanism of such regulation may be more minutely spelt out in any AI legislation that is eventually passed.
For now, the existing mechanism, under the Cybercrime Wing of the FIA, has largely been ineffective in addressing the countless complaints of TFGBV made by women under the Prevention of Electronic Crimes Act, 2016, which criminalises the transmission of false and defamatory information through an electronic system, the distortion of a person’s pictures to show her or him in a sexually explicit position, and cyberstalking. Indeed, as the Sindh High Court scathingly observed last year, the Cybercrime Wing does not have the “competency to effectively investigate cybercrime, let alone combating th[ese] [offences]”.
Blocking the flow of information and traffic on the internet will not serve as a solution. The state must ensure that any future regulation of AI-led TFGBV — to be laid out by the ARD or enforced by the newly formed National Cybercrime Investigation Agency — is effective and upholds the standards of ethics and human rights that its AI policy espouses.
The writer is a lawyer.
Published in Dawn, August 30th, 2024