

Unmasking the AI Paradox: From Friend to Foe, Navigating the Deep Waters of Deep Fakes


Judith Mariya

Published on February 10, 2024, 10:32:29


deep fake, ai, artificial intelligence, technology

As the capabilities of artificial intelligence (AI) continue to advance rapidly, concerns and potential threats associated with its deployment are also mounting, necessitating robust regulations to ward off misuse or unintended consequences stemming from AI technologies.

“By implementing robust frameworks, governments and organisations can mitigate these risks, ensuring that AI is developed and utilised in a responsible and ethical manner,” a senior legal expert said.

According to Shulka Chavan, Legal Associate at Dubai-based NYK Law Firm, “regulations are intended not to stifle innovation but to safeguard against the negative impacts of unchecked AI development, fostering sustainable advancement of this transformative technology”.

Commenting on the recent controversy surrounding the spread of AI-generated images of celebrity singer Taylor Swift, she said the issue “highlights the need for comprehensive regulations in artificial intelligence. As technology progresses, the risk of misuse becomes more prominent, raising concerns about privacy, intellectual property and the weaponisation of AI-generated content”.

“Adapting legal frameworks to keep pace with technology is crucial for balancing innovation and responsible AI usage, protecting celebrities from exploitation, and ensuring public confidence in ethical AI deployment,” she continued.

Friend and Foe

The year 2023 was the year of artificial intelligence. From helping with daily tasks to landing a dream job, AI has become an integral part of life. But what is shocking is that this friend can, at times, become a foe capable of pushing you into a deep pit.

Recently, deep fake images of singer Taylor Swift spread all over the Internet, causing havoc among the public. These images were so realistic that it was difficult to distinguish them from the originals. What frightens the public even more is that there is no proper law to regulate such content.

The Big Problem

As women are most often the victims of these creations, immediate action is necessary to prevent the repercussions associated with them. In most cases, once such a video is published, it is widely circulated without the consent of the person depicted.

"One of the most disturbing trends I see on forums of people making this content is that they think it's a joke or they don't think it's serious because the results aren't hyper-realistic, not understanding that for victims, this is still really, really painful and traumatic," Euronews.next said, quoting Henry Ajder, an expert on generative AI.

According to studies, these crimes often go undetected, as there are no proper rules regulating AI and there is little software on the market capable of detecting such content.

While talking about the guidelines and rights of individuals, Chavan said: “Clear guidelines and ethical standards are crucial to safeguard individual rights, prevent unauthorised content creation and preserve intellectual property.

“To combat the alarming trend of women being targeted as victims, it is crucial to acknowledge and act. This involves being aware of warning signs and responding promptly to discrimination or harassment. To protect women, it is necessary to identify these red flags and address them through legal measures, education and a supportive societal environment,” she added.

Deep Fake Detectors

Recently, new applications capable of detecting deep fake videos have emerged.

A team from the American University of Sharjah (AUS) achieved remarkable success with its deep fake detection application, ‘Fake It’, at the esteemed Arab IoT & AI Challenge 2023. The application secured second place in a tough competition, surpassing entries from more than 1,000 participants representing 13 countries.

This innovative application uses a sophisticated deep neural network trained on a comprehensive online dataset for enhanced scalability. It analyses facial features within videos and generates a ‘fakeness’ score, aiding the identification of authentic or counterfeit videos. Notably, ‘Fake It’ stands out for its ability to detect deep fakes tailored specifically to the diverse population of the UAE.
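The article does not describe the internals of ‘Fake It’, but a minimal sketch of a frame-level detector of this general kind might look like the following. This is a hypothetical illustration, not the AUS team's code: the TinyFakeNet model, the untrained placeholder weights and the sample_clip.mp4 file are all assumptions, and a real system would use a far larger network trained on labelled real and fake footage.

```python
# Hypothetical sketch of a video "fakeness" scorer (not the AUS 'Fake It' implementation).
# Assumes OpenCV for face detection and PyTorch for a small binary classifier.
import cv2
import torch
import torch.nn as nn


class TinyFakeNet(nn.Module):
    """Placeholder CNN mapping a 128x128 face crop to a single fake-probability logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


def fakeness_score(video_path, model, every_n=10):
    """Sample frames, crop detected faces, and average the per-face fake probabilities."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    scores, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % every_n == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
                crop = cv2.resize(frame[y:y + h, x:x + w], (128, 128))
                tensor = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255.0
                with torch.no_grad():
                    scores.append(torch.sigmoid(model(tensor)).item())
        frame_idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else None


model = TinyFakeNet().eval()  # in practice, trained weights would be loaded here
print(fakeness_score("sample_clip.mp4", model))  # hypothetical input video
```

The sketch only shows the overall flow a detector of this kind typically follows: sampling frames from a video, isolating faces, scoring each face with a classifier and aggregating the results into a single ‘fakeness’ score.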

For any enquiries or information, contact ask@tlr.ae or call us on +971 52 644 3004. Follow The Law Reporters on WhatsApp Channels.
