The Oxford Word of the Year for 2024, “Brain Rot,” reflects a growing global concern about the deteriorating intellectual and mental health caused by the overconsumption of trivial online content. Defined as “the supposed deterioration of a person’s mental or intellectual state, especially due to overexposure to trivial online material,” the term encapsulates the challenges of the digital age. From a legal perspective, this phenomenon raises critical questions about social media regulation, content moderation, and user protections across jurisdictions.
The Legal Implications of Social Media Overuse
As social media platforms play an increasingly central role in our lives, governments worldwide are grappling with how to regulate these platforms without stifling innovation or infringing on free speech. The concept of "Brain Rot" underscores the urgency of addressing the unchecked spread of trivial and harmful content online, which can lead to significant societal and individual repercussions.
Key legal and regulatory challenges include:
Content Moderation and Liability:
Social media platforms often rely on algorithms to prioritize engagement over meaningful content, amplifying trivial or harmful material.
In the United States, Section 230 of the Communications Decency Act provides platforms immunity from liability for user-generated content but has faced criticism for allowing the spread of harmful material.
The European Union’s Digital Services Act (DSA), by contrast, imposes stricter accountability, requiring platforms to remove illegal content and assess the systemic risks of their algorithms.
Protection of Vulnerable Users:
Children and adolescents are particularly susceptible to the effects of "Brain Rot," as trivial or harmful content can impair cognitive development and mental health.
The UK’s Online Safety Act, which received Royal Assent in 2023, aims to hold platforms accountable for protecting underage users by enforcing age verification and restricting access to harmful content.
In Australia, the Online Safety Act empowers the eSafety Commissioner to mandate the removal of harmful material, particularly targeting content affecting minors.
Data Privacy and Manipulation:
The overconsumption of trivial content is often fueled by targeted algorithms that exploit user data to maximize engagement.
The General Data Protection Regulation (GDPR) in the EU enforces strict controls over data collection and user profiling, offering users greater transparency and control.
Similar measures are emerging globally, such as India’s Digital Personal Data Protection Act, which aims to curb the misuse of personal data for manipulative content delivery.
Mental Health and Legal Accountability:
The rise of digital overconsumption has led to increased mental health issues, including anxiety, depression, and cognitive decline. While legal accountability remains limited, some jurisdictions are taking action:
In France, the government has introduced laws requiring platforms to warn users about the dangers of excessive screen time.
In South Korea, the government promotes “digital detox” campaigns and provides mental health resources for those affected by excessive online usage.
International Efforts to Combat "Brain Rot"
Addressing the global impact of digital overconsumption requires international collaboration. Initiatives such as the UNESCO Guidelines on Regulating Digital Platforms aim to harmonize efforts to combat misinformation, promote digital literacy, and foster a healthier online environment.
Furthermore, the G20 Digital Economy Working Group has emphasized the need for ethical AI usage and algorithmic transparency to mitigate the harmful effects of trivial or manipulative content.
Balancing Freedom of Expression and Regulation
A major challenge in addressing "Brain Rot" is balancing the right to free expression with the need for content regulation. Overregulation could suppress creativity and legitimate discourse, while under-regulation risks allowing harmful content to proliferate unchecked.
The First Amendment in the U.S. protects free speech but limits the government’s ability to regulate content, creating a reliance on platform self-regulation.
Conversely, Germany’s Network Enforcement Act (NetzDG) mandates swift action against illegal content, demonstrating a more proactive approach.
Conclusion
The rise of "Brain Rot" is a wake-up call for governments, social media platforms, and society at large. While legal frameworks around the world are evolving to address the challenges of digital overconsumption, more cohesive and comprehensive strategies are needed to protect users from the intellectual and mental decline associated with trivial online content.
Regulations must focus on promoting algorithmic transparency, protecting vulnerable users, and fostering digital literacy, all while respecting fundamental freedoms. By tackling these challenges, the global community can ensure that the digital age enriches rather than diminishes our collective intellect and well-being.
For any enquiries or information, contact info@thelawreporters.com or call us on +971 52 644 3004. Follow The Law Reporters on WhatsApp Channels
In a significant development, whistleblowers from OpenAI have filed a formal complaint with the U.S. Securities and Exchange Commission (SEC), alleging that the AI company employs restrictive non-disclosure agreements (NDAs) that could suppress employees' rights and violate U.S. securities laws. The move comes amid growing concerns over transparency, ethical conduct, and whistleblower protections in the rapidly evolving artificial intelligence industry.
According to sources familiar with the case, the whistleblowers claim OpenAI’s NDAs are overly stringent and may discourage employees from reporting misconduct or raising concerns about corporate practices, including issues that could be of material interest to regulators, shareholders, and the public.
The complaint argues that OpenAI's agreements violate whistleblower protection rules under the Dodd-Frank Act and SEC guidelines. These regulations explicitly protect individuals who report potential wrongdoing, ensuring they are not subjected to retaliation or silencing through contractual obligations.
Legal experts suggest that if the allegations are substantiated, OpenAI could face scrutiny over whether it effectively obstructed the whistleblowing process, potentially undermining regulatory oversight.
OpenAI, the prominent artificial intelligence research company known for pioneering tools such as GPT-4, ChatGPT, and other advanced models, has been at the center of multiple discussions regarding AI ethics, safety, and corporate governance. The whistleblowers allege that OpenAI’s NDAs limit former and current employees’ ability to disclose concerns about:
Potential risks associated with AI technologies, including safety issues.
Ethical concerns around AI development and deployment.
Financial and operational transparency.
The SEC complaint argues that employees face severe consequences, such as legal action or financial penalties, for speaking out or sharing information with external parties, even when such disclosures pertain to regulatory concerns.
Under the Dodd-Frank Wall Street Reform and Consumer Protection Act, companies cannot enforce agreements that interfere with whistleblower protections. Section 21F of the Securities Exchange Act explicitly empowers employees to communicate directly with the SEC about potential violations, even if bound by non-disclosure or confidentiality agreements.
Attorney statements indicate that NDAs deemed too restrictive may violate Rule 21F-17, which prohibits companies from impeding whistleblowers' ability to report misconduct to the SEC. Companies found guilty of such practices can face fines, penalties, and reputational damage.
A whistleblower representative familiar with the complaint emphasized the need for accountability:
“OpenAI has a responsibility to foster an environment of ethical transparency, especially given the impact of its AI technologies. Employees must feel free to report potential misconduct or ethical risks without fear of retaliation or legal repercussions. These NDAs may create a chilling effect, discouraging disclosures that are critical for public and investor interest.”
The complaint underscores a broader concern about tech companies suppressing dissent and criticism through contractual agreements, particularly in industries with profound societal and economic impacts.
OpenAI has not yet issued a formal response to the allegations. However, the company has historically emphasized its commitment to ethical AI development and transparency. In previous instances, OpenAI leadership has acknowledged the importance of accountability in the AI sector, given its global implications.
The SEC, meanwhile, has been actively investigating the use of restrictive NDAs in corporate America. In recent years, companies in industries ranging from technology to finance have faced regulatory scrutiny for employing contracts that impede employees' ability to report concerns to authorities.
The whistleblower complaint comes at a time of heightened scrutiny for the tech industry as regulators, policymakers, and the public demand greater accountability. Artificial intelligence, in particular, has raised significant concerns about safety, fairness, and transparency.
Recent developments, including OpenAI’s internal challenges and leadership changes, have amplified calls for AI companies to prioritize ethical standards and openness. The whistleblowers argue that restrictive NDAs directly conflict with these principles, potentially obscuring issues that warrant public attention and regulatory oversight.
If the SEC determines that OpenAI’s NDAs violate whistleblower protection laws, the company could face significant penalties, including fines and mandated reforms to its contractual practices. The investigation may also set a precedent for other tech companies, reinforcing the importance of whistleblower protections across industries.
For OpenAI, this development raises questions about its governance practices, transparency commitments, and its role as a leader in shaping ethical AI.
The death of Suchir Balaji, a former OpenAI researcher, has reignited questions surrounding workplace pressure, whistleblower protections, and ethical concerns in the tech sector. Friends and colleagues described him as a dedicated and talented individual, deeply invested in ensuring AI development adhered to ethical principles and transparency.
The San Francisco police are investigating the cause of Balaji’s death, though no foul play is suspected so far.
The whistleblower complaint, originally filed by Balaji and his colleagues, prompted the SEC to review OpenAI’s policies regarding whistleblower protections and restrictive NDAs. While OpenAI has publicly maintained its commitment to ethical conduct, the investigation remains ongoing. The SEC is assessing whether the company’s agreements violated federal whistleblower laws and interfered with employees’ rights to report misconduct or ethical breaches.
Balaji’s case underscores the immense pressures faced by employees in the tech industry, particularly in companies developing transformative technologies like artificial intelligence. His tragic passing has led to renewed calls for:
Stronger whistleblower protections to ensure employees can speak up without fear of retaliation or undue pressure.
Increased oversight of workplace culture and employee well-being in major tech companies.
Greater transparency around ethical and safety practices in AI development.
Balaji’s death serves as a sombre reminder of the challenges whistleblowers face when advocating for accountability within their organizations.
Conclusion
The whistleblower complaint against OpenAI highlights the tension between corporate confidentiality and employees’ rights to report wrongdoing. As regulators take a closer look at these allegations, the outcome could have far-reaching implications for OpenAI and the broader tech industry.
With AI continuing to transform industries and societies, the balance between innovation, transparency, and accountability remains more critical than ever.
The UAE Cybersecurity Council has issued a security advisory urging Google Chrome users to update their browsers immediately to protect against multiple vulnerabilities.
The Council recommends installing the latest security updates and sharing this critical information with subsidiaries and partners to ensure comprehensive protection.
In a statement, Google confirmed the release of a security update addressing a serious vulnerability that could allow attackers to execute remote code on affected systems or access sensitive data.
Google announced that the Chrome stable channel has been updated to 131.0.6778.139/.140 for Windows and Mac and 131.0.6778.139 for Linux. The updates will be rolled out gradually over the coming days and weeks.
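For readers who want to confirm they are already on a patched build, the installed version can be compared against the minimum fixed release. The snippet below is a minimal sketch for Linux: the `google-chrome` binary name is a Linux assumption (paths and commands differ on Windows and macOS), and the dotted-version comparison relies on GNU `sort -V`.

```shell
# Minimum patched build from the advisory (Linux stable channel).
PATCHED="131.0.6778.139"

# Returns success if $1 >= $2 when compared as dotted version numbers,
# using GNU sort's version-sort mode.
version_at_least() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Extract the installed version, e.g. "131.0.6778.139" (empty if Chrome
# is not installed or not on PATH).
INSTALLED="$(google-chrome --version 2>/dev/null | grep -oE '[0-9]+(\.[0-9]+)+')"

if [ -z "$INSTALLED" ]; then
  echo "Chrome not found on PATH"
elif version_at_least "$INSTALLED" "$PATCHED"; then
  echo "Chrome $INSTALLED is at or above the patched build"
else
  echo "Chrome $INSTALLED needs updating (patched build: $PATCHED)"
fi
```

On systems without GNU coreutils, the same comparison can be done in Chrome's own "About" dialog, which also triggers the update check.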
Access to detailed bug reports and links will remain restricted until a significant number of users have applied the update. Google also noted that restrictions might continue if the vulnerabilities are tied to third-party libraries used by other projects that have not yet implemented a fix.
The UAE Cybersecurity Council reiterated the urgency of applying these updates to safeguard systems and advised users to stay vigilant about browser security.
For more details on the vulnerabilities, users can refer to the official Cybersecurity Council advisory or Google’s blog announcement.
The UAE education system is undergoing a significant shift as the Emirates Standardized Test (EmSAT) for high school graduates is being cancelled. This move, announced on November 3rd, 2024, eliminates the standardized test requirement for admission to government universities in the country.
The decision, approved by the Education, Human Development and Community Development Council, reflects a revised approach to university admissions. The Ministry of Education and the Ministry of Higher Education and Scientific Research have jointly announced the cancellation and implementation of new criteria.
Universities Gain Flexibility
Universities will now have more autonomy in setting their own admission criteria. This allows them to tailor their selection process to specific programs and identify students who possess the necessary skills and strengths for success in each field.
Science Subjects Take Center Stage
For medical and engineering programs specifically, the focus will shift towards a student's performance in science subjects. Admission decisions will prioritize these subject grades over the overall percentage score achieved in high school graduation. This targeted approach aims to ensure that students with a strong foundation in science are well-positioned to excel in these demanding disciplines.
A Modernized Admissions Landscape
The cancellation of the EmSAT and the emphasis on subject-specific excellence mark a step towards a more individualized approach to university admissions in the UAE. This shift empowers universities to create diverse student bodies and fosters a learning environment that caters to students' strengths and aspirations.
Unforeseen Impacts
While the long-term effects of this policy change remain to be seen, it is anticipated that universities will implement a variety of measures in their revised admission criteria. These may include increased emphasis on high school transcripts, standardized tests specific to certain disciplines, and potentially even portfolio reviews or entrance interviews.
The move signals a commitment to a more holistic evaluation of student potential, potentially leading to a more diversified and well-rounded student body within UAE universities.
News Corp's Dow Jones and New York Post have filed a lawsuit against AI start-up Perplexity, accusing the company of "massive illegal copying" of copyrighted content.
The legal action, filed on Monday, alleges that Perplexity has been using copyrighted content from News Corp publications, including The Wall Street Journal and the New York Post, to train its AI models and generate search results. This practice, according to the lawsuit, infringes on the companies' intellectual property rights and undermines their business models.
Perplexity, an AI-powered search engine, provides users with concise and informative answers to their queries, often citing sources to support its responses. However, the lawsuit alleges that the company has been using copyrighted content without proper authorization to train its AI models.
This legal battle highlights the growing tension between traditional media companies and AI startups. As AI technology advances, concerns about copyright infringement and fair use are becoming increasingly prominent. The outcome of this case could have significant implications for the future of AI and the media industry.
Perplexity has responded to the lawsuit, denying the allegations and asserting that it respects copyright laws. The company maintains that it uses a combination of techniques to generate responses, including accessing and processing publicly available information.
The legal dispute between News Corp and Perplexity is likely to be closely watched by industry observers. It raises important questions about the boundaries of fair use, the value of copyrighted content in the age of AI, and the potential liability of AI companies that use copyrighted material without proper authorization.
The digital age has transformed the entertainment and media industry in unprecedented ways, fundamentally altering the way content is created, distributed, and consumed. While digital disruption offers vast opportunities for innovation, it also presents unique legal and business challenges. From navigating intellectual property rights in a digital landscape to addressing issues of data privacy, cybersecurity, and regulatory compliance, entertainment and media companies must proactively adapt to safeguard their assets and uphold compliance standards. This article delves into the most pressing business and legal concerns facing the entertainment and media sector in today's digital era.
1. Intellectual Property Rights and Content Piracy
The proliferation of digital content has made it easier than ever for users to access media, but it has also amplified the risk of intellectual property (IP) infringement and content piracy. Content, from music and movies to digital art, can be copied and shared without authorization, affecting revenue for creators and media companies alike. In response:
2. Data Privacy and Cybersecurity Concerns
With digital entertainment services gathering vast amounts of consumer data to personalize user experiences, issues surrounding data privacy and cybersecurity have become paramount. Companies in the media and entertainment sector must address:
3. The Evolving Regulatory Landscape
The regulatory environment for entertainment and media is evolving rapidly to address issues that are unique to digital content. Media companies face several regulatory hurdles, including:
4. Contracts and Licensing in a Digital World
In the entertainment industry, traditional licensing models are being disrupted as digital platforms seek global distribution rights for content. This shift has introduced complexities in contract structuring and royalty distribution:
5. Monetization and Emerging Technologies
As audiences shift to digital platforms, the entertainment industry must explore innovative monetization models while navigating the legal and business challenges these models entail:
6. Content Diversity and Inclusion
Digital platforms provide an opportunity for creators from diverse backgrounds to share their work with a global audience. However, companies in the entertainment industry must address:
Conclusion
The entertainment and media industry stands at the crossroads of opportunity and challenge in the digital age. While digital transformation enables more dynamic and innovative content delivery, it also demands heightened vigilance in protecting intellectual property, ensuring data privacy, navigating evolving regulations, and securing equitable compensation for creators. As companies continue to adapt, establishing robust legal frameworks and business practices will be essential to sustaining growth and fostering a more inclusive, secure, and legally compliant entertainment landscape.
As the UAE continues to prioritize road safety and adapt to rapid advancements in transportation, a new traffic law has been introduced, setting stricter penalties and new regulations for motorists and pedestrians alike. This law replaces the previous traffic law and accommodates changes in vehicle technology, including electric and self-driving vehicles. Here’s a comprehensive look at the key changes and penalties under this new legislation.
Key Changes in the UAE’s Traffic Law
1. Hit-and-Run Penalties: Up to Dh100,000 Fine and Two Years of Jail Time
One of the most significant updates in the new traffic law pertains to hit-and-run cases. Drivers involved in a hit-and-run incident that results in injury face stricter penalties. The law stipulates:
The new law aims to ensure accountability, encouraging drivers to assist injured parties and report accidents immediately.
2. Stricter Penalties for Jaywalking
The new law also places increased responsibility on pedestrians to follow road safety rules. Jaywalking or crossing roads outside designated pedestrian crossings can result in fines or other penalties. These changes reflect the UAE’s commitment to pedestrian safety and are in line with the government’s goal to reduce pedestrian accidents.
3. Lower Minimum Driving Age
In an effort to expand mobility options for young people, the new law has lowered the minimum age required for driving. While specifics on the age adjustment have not been publicly confirmed, the change aims to provide younger individuals with more flexibility in terms of commuting and transportation.
4. Regulations for Self-Driving and Electric Vehicles
In a nod to the evolving transportation landscape, the law now includes provisions for electric and autonomous vehicles. This makes the UAE one of the leading countries to incorporate such considerations into its legal framework. Specific guidelines for self-driving vehicles, including rules for operation and maintenance, are expected to ensure the safety of all road users as these technologies become more prevalent.
5. Enhanced Rules for Cyclists and E-Scooter Riders
The law also addresses the increased use of bicycles and e-scooters on UAE roads. New rules include:
These updates are in line with the UAE’s commitment to supporting eco-friendly transportation options while maintaining road safety.
6. Comprehensive Road Safety Measures for Pedestrians and Motorists
The new law imposes additional responsibilities on both drivers and pedestrians to prevent road incidents. Drivers are now required to exercise heightened vigilance in areas with heavy pedestrian traffic. Conversely, pedestrians must adhere to designated crossing areas and avoid actions that could disrupt traffic flow or compromise their own safety.
Applying the New Law: What Motorists and Pedestrians Should Know
The UAE government’s official social media post on X (formerly Twitter) outlines that the new law aims to keep up with transportation advancements while ensuring safety. This is particularly relevant as the UAE pushes to become a leader in smart city technology and sustainable transport. For residents and visitors, adhering to these regulations will be crucial, as penalties for violations are set to become more stringent.
Penalties and Enforcement
The new traffic law is backed by an updated enforcement framework designed to deter violations and enhance public safety. Some key penalties include:
In addition to these penalties, law enforcement will use enhanced surveillance, including road cameras and AI-based monitoring, to ensure compliance.
Emphasis on Road Safety Education
The UAE’s traffic authority has also outlined plans to launch extensive public awareness campaigns to educate residents on the new law. The campaigns will emphasize the importance of safety for all road users, the responsibilities of pedestrians, and the need for motorists to comply with the latest regulations. Special training and informational resources may be available for younger drivers, e-scooter riders, and cyclists to reinforce safe practices.
How the New Traffic Law Supports the UAE’s Vision
The UAE’s commitment to modernizing its traffic laws aligns with the nation’s vision for a safer, more sustainable future. By incorporating rules for electric and autonomous vehicles and ensuring safety measures for alternative modes of transport, the law supports the UAE’s goals to reduce carbon emissions and traffic-related injuries. Furthermore, it positions the UAE as a global leader in adopting transportation solutions that meet the demands of modern urban life.
Final Thoughts
As the UAE’s new traffic law comes into effect, motorists, pedestrians, and cyclists are encouraged to familiarize themselves with the updated regulations. This comprehensive approach to road safety reflects the UAE’s dedication to ensuring a secure and progressive environment for all. Residents and visitors are advised to keep track of any official announcements and ensure they follow these new guidelines to avoid penalties and contribute to safer roads.
For more information on the new law or updates, individuals can refer to the UAE government’s official social media channels or visit the local traffic authority’s website for complete details.
News Corp, the media giant behind publications such as The Wall Street Journal and the New York Post, has filed a lawsuit against Perplexity AI, accusing the startup of copyright infringement. The legal action centers on allegations that Perplexity AI is unlawfully using content from News Corp’s publications without proper authorization, effectively stealing both intellectual property and revenue.
The lawsuit claims that Perplexity AI, an AI-powered search engine and content aggregator, has been scraping and reproducing articles from News Corp titles to provide answers to user queries. This practice, News Corp argues, violates copyright protections and undermines the revenue models of the affected publications. By offering snippets of content and answers derived from copyrighted material, Perplexity AI is allegedly diverting traffic away from News Corp’s websites, which rely heavily on subscription fees and advertising revenue.
This case highlights the tension between traditional media companies and emerging AI technologies, particularly in the realm of content aggregation and dissemination. Media companies have long been concerned about how AI tools like chatbots and search engines could bypass paywalls and licensing agreements, thus diminishing the value of their content.
News Corp’s lawsuit against Perplexity AI is part of a broader trend where major media organizations are taking legal action against AI companies for copyright infringement. As AI becomes increasingly integrated into everyday internet use, content creators and publishers are grappling with the challenge of protecting their intellectual property in an evolving digital landscape.
If News Corp succeeds in its lawsuit, it could set a significant precedent for how AI tools interact with copyrighted content, potentially leading to stricter regulations on content scraping and increased accountability for AI-driven platforms. This case underscores the ongoing battle over control of digital content and the balance between innovation and intellectual property rights in the age of artificial intelligence.
Perplexity AI has yet to issue a formal response to the lawsuit, but the case will likely have far-reaching implications for both the media industry and AI startups.
The UAE Ministry of Justice has announced a groundbreaking project—the virtual lawyer—aimed at streamlining legal proceedings, particularly in simple cases. Set to be the first of its kind in the UAE and the region, this initiative will enhance the speed and efficiency of litigation processes while improving the overall experience for litigants.
Key Features and Launch Details:
The project will operate using the Unified National Legislative Texts Database, developed by the Ministry of Justice. Law firms interested in utilizing the system will need to register and contribute to the database.
Impact on the Justice System:
The virtual lawyer is part of the UAE's broader efforts to modernize the judicial system and embrace artificial intelligence (AI). By integrating advanced technology, the project is expected to:
This initiative is part of the “Emirates Future Mission” and aligns with the UAE’s vision to create proactive government models that are future-ready. The project is being developed in partnership with the Office of Government Development and the Future and the Office of Artificial Intelligence, Digital Economy, and Remote Work Applications.
Government and Industry Support:
Abdullah Sultan bin Awad Al Nuaimi, UAE Minister of Justice, emphasized that this project opens new possibilities for the judicial system, enabling greater efficiency in legal procedures. Similarly, Ohood bint Khalfan Al Roumi, Minister of State for Government Development and the Future, highlighted the role of the virtual lawyer in transforming government services through AI.
The project is also supported by Omar Sultan Al Olama, Minister of State for Artificial Intelligence, Digital Economy, and Remote Work Applications, who stressed the importance of incorporating AI solutions in government work.
Ensuring Data Privacy:
The virtual lawyer will operate within the UAE government’s cloud environment, ensuring cybersecurity and the protection of client data. The Ministry is also working on drafting legislation to regulate new legal professions and ensure compliance with the highest digital security standards.
This initiative represents a significant step forward in the UAE’s mission to embrace AI and digital transformation, with the goal of reshaping the future of legal and government services.
If you're a resident of Sharjah, you're likely familiar with the 'Digital Sharjah' app. But now, with its newly launched version, the app has expanded to allow residents to manage almost all essential tasks in one convenient place.
Lamia Obaid Al Shamsi, Director of the Sharjah Digital Department, highlighted the new features of the upgraded app at Gitex Global 2024. She explained how the second version was designed to enhance user experience with a refined interface, innovative elements, and the integration of artificial intelligence.
"We’ve recently introduced the second version of the platform, featuring a redesigned user interface that aligns with our strategy to improve user experience," said Al Shamsi. "This includes AI-powered live chat for quick government information access, service evaluations, digital payments, and an enhanced services guide. We've also improved existing services like Sharjah Electricity and Water Authority (SEWA) bill payments and public parking fee payments."
A Unified Digital Platform
The app serves as a unified channel for accessing services from local and federal government entities in Sharjah. With the latest technology and flexible features, users can complete processes within minutes, while enjoying top-level security. "This platform represents a significant step towards achieving Sharjah’s vision for digital transformation," Al Shamsi added.
New Dashboard for Personalized Services
One of the major additions in the new version is a personalized dashboard, where users can store and access important documents, such as their Emirates ID, driver’s license, car registration, and more. The dashboard also provides real-time updates on vehicle registration renewals and allows for easy payment of various services.
Digital Documents at Your Fingertips
Through the Digital Sharjah app, residents can access digital versions of key documents, such as the Emirates ID, driver’s license, and vehicle registration.
Services Available Through the App
The new app allows Sharjah residents to easily manage a wide range of government services from a single interface.
Upcoming Projects to Transform Life in Sharjah
Looking ahead, Al Shamsi discussed several upcoming projects designed to simplify starting a business, buying or renting property, and accessing useful data in Sharjah. These initiatives are the result of collaborations between various government departments, many of which were showcased at the Sharjah government’s stand at Gitex Global 2024.
Expressing her gratitude, Al Shamsi said, “I want to thank all the government entities that participated in the Sharjah Government Platform at Gitex Global 2024. Their cooperation has been key in showcasing Sharjah’s digital innovations to improve government services and enhance the quality of life for citizens, residents, visitors, and investors alike.”
With the upgraded Digital Sharjah app, handling everyday tasks has never been easier for residents, ensuring a more connected and efficient living experience.
As the digital world continues to evolve, Dubai Police have flagged significant concerns over future cyber threats, with biometric data theft and cyberterrorism looming large. Major Tarek Belhoul, head of the virtual assets crime section at Dubai Police, highlighted the growing risks posed by digital crimes during the National Summit on Financial Crime Compliance in Abu Dhabi.
Belhoul emphasized that as economies transition towards digitization, new forms of cybercrime are emerging. These include the poisoning of data and increasing criminal activities in the metaverse. He warned that tampering with data, especially through artificial intelligence (AI), could fuel misinformation and propaganda warfare, a tactic already observed in today's digital landscape. “We see a huge projection of crime in the metaverse and digital space as our economies are transforming into digital economies," Belhoul stated.
One area of concern is the misuse of biometric data, such as fingerprints, iris scans, and facial recognition. Criminals are leveraging these identifiers to impersonate individuals and gain unauthorized access. Additionally, malware, ransomware, and vulnerabilities in IoT (Internet of Things) devices have been exploited repeatedly for financial gain.
Belhoul stressed that while investments in infrastructure are essential, the focus must also shift to empowering individuals and strengthening legislation to combat these evolving threats. He praised the UAE’s proactive stance, becoming the first Arab country with a dedicated unit to combat virtual asset-related crimes.
Protecting Children in the Digital Age
Addressing the growing digital risks children face, Major Belhoul advised parents to spend at least one hour daily with their children to monitor their online activities. He recommended engaging in conversations about their digital interactions rather than restricting device use. "It's crucial for parents to understand who their children are interacting with online, especially when it comes to gaming," he said.
Belhoul revealed that Dubai Police had established a dedicated section to tackle digital crimes involving children, reflecting the increasing dangers within the gaming industry. He urged parents to remain vigilant, as seemingly innocent online activities can sometimes conceal more harmful realities.
The National Summit on Financial Crime Compliance, attended by officials from the UAE, US, Europe, and the GCC, focused on the pressing challenges posed by financial and cybercrimes. Experts discussed strategies to combat these global threats as digital technology continues to evolve.
In a bold legal move, SpaceX, the aerospace giant led by Elon Musk, has filed a lawsuit against the California Coastal Commission (CCC), accusing the state panel of imposing politically motivated restrictions that could hinder the company’s rocket launch operations. The lawsuit claims that the Commission’s actions reflect bias and could stymie SpaceX’s efforts to expand its facilities in the state, potentially jeopardizing future rocket launches and other key operations.
Background of the Lawsuit
The conflict stems from the CCC’s regulatory oversight of coastal land use, which includes SpaceX’s rocket launch sites and testing facilities. As SpaceX looks to expand its footprint in California, home to its headquarters and a major hub for its launch activities, the company argues that the Commission’s permitting process has become overly restrictive, with decisions influenced by political considerations rather than legal and environmental factors.
In the lawsuit, filed in federal court, SpaceX alleges that the CCC’s decisions are impeding its ability to secure necessary permits for expanding launch facilities and infrastructure along the California coast. The company contends that the Commission's actions have become unpredictable and inconsistent with previous decisions, pointing to delays and increased regulatory hurdles that could threaten its ambitious space exploration goals.
Accusations of Political Bias
At the heart of the lawsuit is the claim that the CCC has shown political bias against SpaceX, driven by concerns over the environmental impact of rocket launches and other activities. The company argues that the Commission's focus on the environmental risks associated with its operations, particularly in sensitive coastal areas, is disproportionately severe compared to how other industries are treated.
SpaceX’s legal team asserts that the Commission's regulatory stance has evolved into an obstructionist approach, with its members influenced by political pressures from various environmental advocacy groups. These groups have raised alarms about the potential long-term environmental effects of increased rocket launches, including noise pollution, habitat destruction, and the carbon footprint of the space industry.
In its complaint, SpaceX suggests that the Commission's alleged bias is not just environmental but also ideological. Some environmental and political groups have criticized Musk and his companies for their large-scale industrial projects and their sometimes controversial methods of bypassing traditional regulatory hurdles. According to SpaceX, these factors have contributed to a politicized atmosphere that impacts the Commission's decision-making.
Impact on SpaceX Operations
The stakes for SpaceX in this lawsuit are high. The company is in the midst of ramping up its launch activities as it continues to develop its Starship rocket system, a massive spacecraft designed for missions to the Moon, Mars, and beyond. SpaceX has ambitious plans to increase the frequency of its launches and expand its testing facilities, some of which are located on the California coast. Any delays or restrictions on these operations could have significant financial and strategic consequences.
While SpaceX has other launch sites, including its prominent facility in Boca Chica, Texas, its California operations are integral to its overall business model. The company uses its West Coast sites for launching satellites, carrying out military missions, and testing new technology. If the California Coastal Commission continues to restrict or delay permit approvals, SpaceX could face significant operational challenges in meeting its goals for the coming years.
California Coastal Commission's Stance
The California Coastal Commission, established to regulate the state’s coastlines and protect its natural resources, has not yet responded in detail to the lawsuit. However, the panel has historically taken a cautious approach when approving permits for industrial projects along California’s fragile coastline, citing concerns over environmental protection, coastal access, and the long-term sustainability of such developments.
In the past, the CCC has clashed with large corporations seeking to develop or expand facilities in coastal areas, insisting on rigorous environmental reviews and demanding mitigation measures to minimize impact. SpaceX’s rapid expansion and the environmental concerns associated with frequent rocket launches have undoubtedly drawn the Commission's attention.
While the CCC may argue that its decisions are based on lawful environmental considerations, SpaceX insists that the delays and added conditions placed on its permits are not consistent with the level of scrutiny applied to other industries.
Broader Implications
SpaceX’s lawsuit against the California Coastal Commission raises questions about the balance between economic development and environmental stewardship. As one of the most influential players in the rapidly growing space industry, SpaceX’s battle with state regulators could set a precedent for how space companies navigate complex regulatory landscapes in the U.S.
This lawsuit also reflects the broader tensions between Musk’s business empire and regulatory authorities. In recent years, Musk has publicly criticized various government agencies for what he sees as excessive bureaucracy slowing down innovation, particularly in sectors like electric vehicles, space exploration, and tunnelling technology.
For the space industry as a whole, the outcome of this lawsuit could have far-reaching consequences. If SpaceX succeeds in its legal challenge, it may prompt other aerospace companies to push back against regulatory bodies they perceive as barriers to innovation. Conversely, if the California Coastal Commission prevails, it could embolden regulators to enforce stricter environmental oversight on high-tech industries operating near sensitive ecosystems.
Conclusion
As SpaceX embarks on its legal battle with the California Coastal Commission, the case highlights the complexities of balancing ambitious technological advancement with environmental protection and public policy. The outcome will not only shape the future of SpaceX’s operations in California but could also influence how the aerospace industry as a whole interacts with regulatory authorities in the coming years.
For now, SpaceX continues to push forward with its space exploration missions, while also fighting to ensure that its operations in California can expand without what it claims are undue regulatory obstacles. Whether the courts will agree with SpaceX's accusations of political bias remains to be seen, but this case will undoubtedly be watched closely by industry leaders, environmental groups, and regulators alike.
The European Union's ground-breaking legislation, the AI Act, is poised to reshape the global landscape of artificial intelligence regulation. With the recent unveiling of the AI Act Checker, a compliance tool designed to help companies navigate the complexities of the new law, it has become clear that Big Tech firms—such as Google, Amazon, Meta, and Microsoft—are facing significant challenges. The pitfalls it has exposed highlight both the complexities of the AI Act and the difficulty of implementing compliance strategies for AI systems that power the world's largest tech ecosystems.
Understanding the EU AI Act
The EU AI Act, formally approved in 2024, is one of the first comprehensive legal frameworks regulating artificial intelligence. It is designed to address the risks associated with AI, setting stringent requirements for AI systems based on their potential for harm. The Act divides AI systems into four risk categories: unacceptable, high, limited, and minimal. High-risk systems, in particular, are subject to strict regulations, including transparency, security, and accountability standards.
This new legal regime covers a broad array of AI applications, from biometric identification and critical infrastructure to healthcare and law enforcement. It mandates thorough documentation, testing, and governance of AI systems to ensure that they are safe, fair, and transparent.
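The tiered structure described above can be illustrated with a minimal sketch. The use-case mapping below is purely illustrative — the Act's annexes define the legal categories in far more detail, and these example classifications are assumptions for demonstration only:

```python
# Illustrative sketch of the AI Act's four-tier risk model.
# The mappings below are simplified examples, not legal definitions.

RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

# Hypothetical example use cases mapped to tiers (illustrative only).
EXAMPLE_USES = {
    "social scoring by public authorities": "unacceptable",
    "biometric identification": "high",
    "credit scoring": "high",
    "chatbot with disclosure duty": "limited",
    "spam filtering": "minimal",
}

def risk_tier(use_case: str) -> str:
    """Return the illustrative risk tier for a named use case."""
    return EXAMPLE_USES.get(use_case, "unclassified")

print(risk_tier("credit scoring"))  # high
```

Under the Act, systems landing in the "high" tier trigger the documentation, testing, and governance duties discussed below, while "unacceptable" uses are banned outright.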
The Role of the AI Act Checker
In response to the growing complexity of compliance, the AI Act Checker was introduced as a regulatory tool to assist companies in evaluating whether their AI systems meet the EU’s stringent requirements. Developed as part of a broader EU initiative to support businesses in complying with the law, this checker allows companies to classify their AI technologies according to risk levels and provides guidance on how to bring their systems into compliance.
The AI Act Checker works by analyzing the functionality and deployment of AI systems within an organization, highlighting areas where the system might fall short of the EU’s standards. For Big Tech firms, whose AI systems are often multi-layered, cross-border, and integrated into billions of users’ daily lives, the checker has revealed significant compliance hurdles.
Big Tech’s Compliance Challenges
1. Managing High-Risk AI Systems
A key challenge for Big Tech companies is the deployment of AI systems that fall into the "high-risk" category. These include facial recognition, credit scoring, and AI used in healthcare or autonomous driving. Under the AI Act, these systems must undergo stringent testing for bias, accuracy, and security. Many of these technologies are integral to Big Tech’s operations, from ad targeting algorithms to AI-powered virtual assistants.
The AI Act Checker has shown that companies like Google and Amazon have multiple high-risk AI applications that may not yet meet the necessary transparency or documentation requirements. For example, AI systems used for biometric identification in facial recognition or automated decision-making tools in recruitment are now subject to rigorous oversight. Companies will need to significantly increase their investments in testing, monitoring, and documenting these systems to avoid heavy fines.
2. Bias and Transparency in AI Algorithms
Another major pitfall for Big Tech is ensuring that their AI systems are free from bias, a core principle of the AI Act. The regulation mandates that companies demonstrate their algorithms are transparent and non-discriminatory, which has been a notorious issue for AI-powered systems in recent years. From facial recognition software that misidentifies individuals based on race to job recruitment algorithms that reinforce gender or racial biases, Big Tech has often been at the center of these controversies.
The AI Act Checker has flagged many of these concerns, indicating that companies may struggle to meet the standards for algorithmic fairness and transparency. Ensuring that AI algorithms are explainable—meaning users and regulators can understand how decisions are made—will require a significant overhaul of how these systems are built and managed.
3. Data Privacy and User Consent
One of the central tenets of the AI Act is its focus on protecting data privacy and ensuring users provide explicit consent for the use of their data in AI systems. Big Tech firms, which process enormous volumes of personal data, will now need to prove that they have obtained proper consent for AI applications that use sensitive data, such as location tracking, health data, or biometric information.
The AI Act Checker has highlighted compliance issues around data usage and user consent. Many AI-driven services, like voice assistants and personalized ad services, rely on massive amounts of personal data, often collected without the level of transparency or user consent now required under the AI Act. Meta, for instance, may face challenges with its AI-powered ad algorithms, which rely heavily on personal data to optimize targeting.
4. Compliance Across Multiple Jurisdictions
For global companies, one of the more complex challenges of the EU AI Act is ensuring compliance across different jurisdictions. While the Act applies to companies offering AI products or services in the EU, it also affects their operations worldwide. Ensuring compliance in the EU, while maintaining operations that may have different standards in the U.S., China, or other regions, will require a delicate balancing act.
Big Tech firms may need to adopt a more global approach to compliance, which could mean adopting EU standards as the default for their AI systems worldwide. This presents logistical and financial challenges, as different regions have varying regulations, and harmonizing AI governance across borders is no small feat.
The Financial and Reputational Impact
Non-compliance with the EU AI Act comes with steep penalties. Companies that fail to meet the regulatory requirements could face fines of up to €30 million or 6% of their annual global revenue, whichever is higher. For Big Tech firms like Google, Meta, and Amazon, this could amount to billions of dollars. Beyond the financial impact, non-compliance could severely damage their reputations, especially given the increasing scrutiny of AI ethics and corporate responsibility.
The EU has positioned itself as a global leader in AI regulation, and other regions, including the United States and Canada, are closely watching how these regulations unfold. Big Tech’s ability to navigate the EU AI Act will likely influence future AI legislation globally, with many countries potentially adopting similar frameworks.
Looking Ahead: What Big Tech Needs to Do
In response to these challenges, Big Tech firms must take proactive steps to address the compliance gaps identified by the AI Act Checker. This will likely include:
Enhanced Governance and Oversight: Companies will need to strengthen their internal AI governance, ensuring that systems are regularly tested for compliance, fairness, and transparency.
Increased Investment in AI Ethics: Addressing bias, algorithmic transparency, and ethical considerations will require Big Tech to invest heavily in AI research and development, particularly in areas like explainable AI and unbiased decision-making.
Cross-Border Coordination: With the global nature of AI, Big Tech firms will need to adopt a cohesive compliance strategy that spans multiple regions, balancing EU requirements with other regulatory frameworks around the world.
Public Accountability: To maintain public trust, companies must be more transparent about how they use AI, including clearer disclosures about data usage and the decision-making processes of their AI systems.
Conclusion
As the EU AI Act Checker begins revealing the compliance pitfalls faced by Big Tech, it underscores the complexities of integrating AI into business operations while adhering to new and stricter regulations. The road to full compliance will be a challenging one, but for companies that succeed, it presents an opportunity to lead in ethical AI development. For Big Tech, navigating this new regulatory landscape will not only determine their future in Europe but could also set the standard for AI governance globally.
Imagine completing your daily transactions without needing your wallet or bank cards. Thanks to the new "Your Vein is Your Identity" project by the Federal Authority for Identity, Citizenship, Customs, and Port Security (ICP), this will soon become a reality in the UAE. The ground-breaking palm vein technology showcased at Gitex Global 2024 allows citizens, residents, and visitors to use their palm for identity verification, making tasks like opening a bank account or withdrawing cash more secure and convenient.
Just like fingerprints, the veins in your palm are unique, and this new system will use that uniqueness to verify your identity. Once implemented, the technology will eliminate the need for physical cards or mobile apps for transactions, providing a higher level of security as no visible bank data can be shared or stolen.
How Will the Palm Vein System Work?
To start using this innovative technology, users will need to register their palm vein through the ICP’s system, which will link it to their Emirates ID. The registration process is quick and easy. Once linked, users can access a variety of services across government, semi-government, and private sector entities, thanks to the integration of databases across these departments through ICP’s enterprise system.
Key Uses of Palm Vein Technology
This palm vein project marks a significant step in making everyday transactions faster, safer, and more convenient in the UAE.
Dubai is preparing to introduce a ground-breaking travel experience that will allow passengers to move through its airports without the need for physical documents. According to Lieutenant Colonel Khaled bin Madia Al Falasi, Deputy Assistant Director for Smart Services at the General Directorate of Residency and Foreigners Affairs in Dubai (GDRFA Dubai), the new initiative, titled ‘Travel Without Borders’, will make use of cutting-edge artificial intelligence (AI) technology.
Facial recognition cameras will scan travelers' faces as they walk through the airport, eliminating the need for traditional passport control checks or smart gates. The system will verify passengers' biometric data on the move, confirming their identity and officially registering their arrival or departure without requiring any stops.
This innovation marks a significant advancement in Dubai’s efforts to enhance efficiency and provide a seamless, document-free experience for travelers passing through its airports. The initiative is part of the emirate's ongoing commitment to integrating smart technology into everyday processes, further positioning Dubai as a global leader in modern travel infrastructure.
In an exciting development unveiled at the Gitex Global 2024 tech event in Dubai, the UAE introduced the 'UAE Fast Track' app, which will allow visitors to register their data before arriving in the country, significantly simplifying the immigration process. This cutting-edge initiative will enable non-resident arrivals to pass through smart gates, bypassing the traditional queues at immigration and passport control, thus offering a faster and more efficient entry experience.
Revolutionizing the Entry Process
Major General Suhail Saeed Al Khaili, Director-General of the Federal Authority for Identity, Citizenship, Customs, and Port Security (ICP), made the announcement at the tech show. He emphasized that the app is designed to eliminate the need for physical registration upon arrival, providing visitors with a seamless entry into the UAE.
Instead of the current procedure that involves lining up at immigration counters, the app will allow users to pre-register their personal information and travel details, enabling them to pass through the smart gates upon landing. The process is designed to be hassle-free and reduce waiting times, offering a smoother and faster experience for visitors.
Enhancing UAE's Digital Leadership
The 'UAE Fast Track' app is part of a broader effort by the UAE to solidify its position as a global leader in digital transformation and innovation. Major General Al Khaili highlighted that this initiative reflects the country's ongoing commitment to enhancing visitor experiences by reducing bureaucracy, streamlining procedures, and leveraging cutting-edge technology to facilitate ease of travel.
As the UAE continues to prioritize digital innovation, the app not only aligns with the country's forward-thinking approach but also enhances its global standing in terms of technological advancements in travel and tourism. The initiative is expected to benefit millions of visitors each year, whether for business or leisure, by making their entry into the country more convenient.
A Smooth and Comfortable Travel Experience
In addition to shortening wait times, the 'UAE Fast Track' app offers several other advantages. Visitors will no longer need to physically register at immigration stations upon arrival. Instead, by using the app to submit the necessary data beforehand, they will experience a much smoother and faster transition from landing to exiting the airport.
This innovation aims to make travel to the UAE more attractive by offering a high level of convenience for tourists and business travelers. By enhancing the overall travel experience, the UAE is set to maintain its appeal as a leading global destination for visitors from all over the world.
Conclusion
With the launch of the 'UAE Fast Track' app, the UAE has taken another significant step toward revolutionizing its travel and tourism infrastructure. By enabling visitors to pre-register their details and use smart gates, the app will streamline entry procedures and improve efficiency at airports across the country.
As part of the UAE's commitment to innovation and digital transformation, this new development not only enhances convenience for travelers but also reinforces the country's leadership in embracing technology to improve essential services. The 'UAE Fast Track' project promises to make visits to the UAE easier, more efficient, and more comfortable than ever before.
Elon Musk has expressed his delight after SpaceX successfully returned its fifth Starship test flight to its Texas launch pad, marking another significant step toward revolutionizing space travel.
The test flight on Sunday was particularly noteworthy as it marked the first time the rocket’s towering first-stage booster, known as the "Super Heavy," returned to the launch tower, where giant metal arms secured its landing. This engineering milestone is part of SpaceX’s broader mission to develop fully reusable spacecraft capable of undertaking missions to the Moon, Mars, and beyond.
Liftoff occurred at 7:25 AM CT from SpaceX's Boca Chica facilities, where the Super Heavy booster propelled the Starship second stage rocket towards space. After reaching an altitude of approximately 70 kilometers, the booster separated and began its controlled descent back to Earth. In a carefully orchestrated maneuver, the booster reignited three of its 33 Raptor engines to slow its descent, guiding itself back to the launch site.
The towering 71-meter Super Heavy booster steered itself through the air with four forward grid fins before settling into the launch tower’s arms. This is the first time SpaceX has successfully caught the massive rocket with the tower's metal arms, a feat that will play a crucial role in making future missions more efficient and cost-effective.
“This landing brings us one step closer to Mars,” Musk shared on social media, celebrating the achievement and the continued progress in SpaceX’s goal of creating reusable rockets that will make space travel more accessible and sustainable.
The successful landing marks a critical moment for SpaceX as the company continues pushing the boundaries of rocket reusability, a key factor in making deep-space exploration more affordable. With the development of the Starship system, SpaceX is positioning itself as a leader in space exploration, with plans to use the vehicle for crewed missions to the Moon and Mars in the coming years.
As SpaceX continues to achieve engineering breakthroughs, the dream of sending humans to other planets is moving closer to reality.
The Authors Guild has teamed up with the online platform Created by Humans to launch a partnership aimed at enabling authors to license their works to AI developers, ensuring they maintain control over how their content is used.
As the largest professional organization for writers in the US, the Authors Guild is working to protect and promote authors' rights in the face of rapid AI advancements. The partnership is designed to put authors "in the driver's seat" when it comes to AI licensing, allowing them to decide if, when, and how AI companies use their works. This move comes amid legal battles involving AI companies, such as OpenAI, which have faced lawsuits from authors and media organizations for allegedly using copyrighted material without permission to train large language models (LLMs).
The platform will offer authors a clear path to control, manage, and monetize their content, providing AI developers with access to high-quality, curated written works—fully authorized by the rightsholders. Mary Rasenberger, CEO of the Authors Guild, emphasized that this initiative offers authors a way to engage with AI platforms on their own terms, ensuring they are fairly compensated for the use of their works.
As generative AI technology becomes increasingly prevalent, Rasenberger highlighted the urgency of returning control to authors and their publishers, stating that licensing is the key to achieving this. She pointed out that, while licensing deals are already being made between publishers and AI companies, authors themselves have often been left out of these discussions.
Created by Humans co-founder and CEO, Trip Adler, described the collaboration as a way to build ethical AI systems that respect creators' rights while advancing technology. The platform will open for author and publisher registration later this year, with plans to offer licenses to AI companies by early 2025, providing a new revenue stream for authors and enabling AI developers to access authorized, accurate content.
Thesis, a cryptocurrency venture studio backed by Andreessen Horowitz, has appointed Katherine Snow as its new general counsel. Snow transitions from her role as chief legal officer at crypto data and research platform Messari, where she led the company’s global legal strategy and policy initiatives.
In her new position at Thesis, Snow will guide the legal team and navigate the regulatory challenges that impact the cryptocurrency and blockchain sectors. Thesis, known for building Bitcoin-related brands, includes platforms like Fold, a payments solution allowing users to spend Bitcoin in everyday transactions. The company is supported by prominent investors, including Fenbushi Capital and Polychain Capital.
Matt Luongo, CEO of Thesis, praised Snow's expertise, stating, "Katherine’s deep understanding of fintech and blockchain regulations is crucial as we continue expanding our ecosystem. Her strategic insight will ensure Thesis remains innovative while effectively managing the global regulatory environment."
Snow brings a wealth of experience to Thesis, with nearly three years at Messari and prior roles as associate general counsel at Binance.US and a stint in Cooley’s blockchain and tokenization group. She began her legal career at Sherman & Howard before transitioning into the blockchain space.
Expressing her enthusiasm for the new role, Snow commented, "I’m thrilled to join Thesis at such a critical time for both the company and the blockchain industry. I look forward to helping the team tackle regulatory challenges while pushing forward innovative solutions in decentralized finance."
This move follows other notable appointments in the crypto sector. In August, crypto exchange Bitget appointed former Binance general counsel Hon Ng as its first chief legal officer. After Ng's departure from Binance in July 2023, Eleanor Hughes, a former Skadden Arps lawyer, was promoted to Binance's general counsel, overseeing legal operations in the Asia Pacific, Middle East, and North Africa regions.
In collaboration with First Abu Dhabi Bank (FAB), the Securities and Commodities Authority (SCA) has introduced a new e-service for shareholders to claim unclaimed dividends from locally listed public joint stock companies dating back to before March 2015.
This initiative is part of the UAE government's efforts to enhance public service quality, making processes more efficient and convenient for users. The e-service offers multiple channels, allowing shareholders to submit and track their dividend claims through the FAB website. Once the necessary documents are submitted, FAB will review the request and transfer the dividends to the shareholder's account within ten business days.
SCA remains committed to simplifying the process for investors to retrieve their unclaimed dividends, reflecting the government's dedication to meeting public needs and maintaining its reputation for delivering world-class services.
In a significant legal development, the UK Court of Appeal has overturned a High Court ruling, granting Xiaomi the right to an interim licence to use Panasonic’s standard essential patents (SEPs) pending the determination of a global fair, reasonable, and non-discriminatory (FRAND) licence. The judgment, delivered by Lord Justice Richard Arnold on 3 October, was hailed as "groundbreaking" by Xiaomi’s legal team, led by Kirkland & Ellis.
The Dispute
The legal battle between Chinese tech giant Xiaomi and Japanese multinational Panasonic centers on licensing terms for Panasonic’s 3G and 4G patents. Proceedings began in July 2023, with Panasonic seeking an injunction and a declaration of infringement. Unable to agree on FRAND terms, the matter escalated to the UK courts, with parallel infringement cases also underway in the Unified Patent Court (UPC) and German courts in Munich and Mannheim.
Xiaomi had proposed taking an interim licence and paying royalties to Panasonic while waiting for the final decision from the Patents Court. However, Panasonic refused, prompting Xiaomi to seek court intervention.
Court of Appeal's Decision
The Court of Appeal, led by Lord Justice Arnold and supported by Lord Justice Moylan, found Panasonic’s refusal to negotiate an interim licence "indefensible." The court ruled that a willing licensor in Panasonic’s position would have entered into such an agreement, especially since both companies had agreed to follow the English court’s determination of FRAND terms.
The court criticized Panasonic for attempting to coerce Xiaomi into accepting more favorable terms through the threat of injunctions in foreign courts. It stated that Panasonic’s conduct violated its obligation under the European Telecommunications Standards Institute (ETSI) rules to negotiate in good faith and avoid pressuring Xiaomi through exclusionary measures.
Key Points of the Judgment
Lord Justice Phillips, while agreeing that Panasonic's conduct was "indefensible," expressed doubt that Panasonic was obligated to enter into an interim licence on terms not yet proven to be FRAND.
Legal Representation
Xiaomi was represented by Kirkland & Ellis, with partners Nicola Dagg, Jin Ooi, and Steve Baldwin leading the case. Panasonic’s legal team included Blackstone Chambers' Andrew Scott KC and 8 New Square’s Isabel Jamal, instructed by Bristows.
Conclusion
The decision marks a significant moment in SEP litigation, with the UK courts stepping in to protect Xiaomi from undue pressure by granting an interim licence. This ruling sets a precedent for future FRAND disputes, emphasizing the importance of good faith negotiations and fair treatment of licensees in the global tech landscape. The FRAND trial is scheduled to begin on 31 October 2024, presided over by Mr Justice Meade.
The global LegalTech market is set to experience significant growth in the coming decade, driven by advancements in artificial intelligence (AI), automation, and increased demand for efficient legal solutions. According to Future Market Insights, the market, valued at USD 29.60 billion in 2024, is projected to reach USD 68.04 billion by 2034, registering a robust compound annual growth rate (CAGR) of 8.7%.
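As a rough sanity check (an illustration, not a figure from the report itself), the quoted growth rate is consistent with the two market-size estimates:

```python
# Verify that the quoted 8.7% CAGR matches the 2024 and 2034 market sizes.
start, end, years = 29.60, 68.04, 10  # USD billions, 2024 -> 2034

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # -> Implied CAGR: 8.7%
```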
Key Drivers of Growth
The rapid digital transformation within the legal industry, the need for cost-effective operations, and regulatory changes are some of the primary factors contributing to the expansion of the LegalTech market. Automation and AI-driven technologies are reshaping legal services, allowing law firms, corporate legal departments, and government organizations to streamline operations and enhance decision-making processes.
The adoption of AI and machine learning is revolutionizing various legal tasks such as document review, contract drafting, case research, and litigation support. These tools are enabling faster, more accurate legal processes while reducing manual workloads. Additionally, the need for regulatory compliance in industries such as finance, healthcare, and corporate governance is driving the demand for LegalTech solutions that help businesses stay compliant with increasingly complex regulations.
UAE Perspective
The UAE is emerging as a significant player in the global LegalTech market due to its commitment to innovation and digital transformation across industries. As part of its broader economic vision, the UAE is integrating advanced technologies into its legal framework, making it easier for legal entities to adopt digital solutions.
AI-driven legal platforms and blockchain technology are gaining traction in the UAE’s legal industry, as they offer greater transparency, efficiency, and cost reduction. The growing interest in cybersecurity solutions for legal platforms is another key trend, given the country's emphasis on protecting digital infrastructure.
With the UAE’s focus on becoming a global hub for technology and business, the LegalTech market in the region is expected to witness increased adoption among law firms, corporate legal departments, and government bodies. The country's drive for regulatory compliance, coupled with its ambitions for innovation in legal processes, makes it a crucial player in the global LegalTech landscape.
Global Trends and Opportunities
Globally, the LegalTech sector is evolving rapidly, with AI, blockchain, and machine learning becoming essential components of modern legal services. The market is seeing a surge in demand for solutions like contract lifecycle management, e-discovery, legal analytics, and compliance platforms.
One of the major trends shaping the market is the increasing interest in blockchain for legal contracts and documentation, offering secure and transparent ways to manage legal agreements. The integration of AI in legal research and compliance management is also transforming how law firms and businesses handle legal tasks, making them more efficient and accurate.
The LegalTech market presents significant growth opportunities in emerging markets, particularly in regions such as Latin America, Asia-Pacific, and the Middle East. Small and medium enterprises (SMEs) are also expected to contribute to the market’s expansion, as they adopt technology-driven solutions to reduce costs and improve legal operations.
Market Leaders and Competitive Landscape
Thomson Reuters continues to lead the global LegalTech market with its comprehensive suite of legal software solutions and AI-driven platforms. Other major players include RELX Group, Clio, Litera, Wolters Kluwer, and iManage, all of which are expanding their portfolios through innovative technologies and strategic acquisitions.
These companies are driving the adoption of AI, automation, and cloud-based solutions, which are increasingly favored for their scalability, enhanced security features, and ability to support remote working—a trend amplified by the COVID-19 pandemic.
Market Segmentation and Regional Outlook
The LegalTech market is segmented into various solutions, including cloud-based and on-premises platforms, case management, document management, contract lifecycle management, and billing and accounting systems. Law firms and corporate legal departments are the primary end-users, with a growing demand for integration and consulting services to implement these technologies effectively.
Geographically, North America remains the largest market for LegalTech solutions, followed by Europe. However, regions like East Asia, South Asia, and the Middle East & Africa (MEA) are expected to witness significant growth in the coming years, as legal entities in these regions increasingly adopt technology-driven solutions to meet rising regulatory demands and improve operational efficiency.
In conclusion, the global LegalTech market is on a path of rapid expansion, fueled by the increasing adoption of AI, automation, and digital transformation in legal services. The UAE, with its focus on innovation and regulatory compliance, is well-positioned to play a key role in the sector’s growth, both regionally and globally.
Bitcoin, long known for its dramatic price swings, has surprised investors in 2024 by exhibiting a level of stability previously unseen in its history. As the largest cryptocurrency by market capitalization remains range-bound between $65,000 (Dh238,745) and $70,000 (Dh257,110), this newfound stability is causing many to reconsider whether Bitcoin can now be considered a safer, long-term investment.
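For reference, the dirham figures follow directly from the dollar amounts under the dirham's dollar peg; a quick conversion at the 3.673 AED/USD rate implied by the Dh238,745 figure (the official peg is approximately 3.6725):

```python
# Convert the quoted USD price band to UAE dirhams at ~3.673 AED per USD.
RATE = 3.673  # approximate AED/USD pegged rate

for usd in (65_000, 70_000):
    print(f"${usd:,} is about Dh{usd * RATE:,.0f}")
```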
Bitcoin's History of Volatility
Since its inception in 2009, Bitcoin has been synonymous with extreme volatility. Early adopters witnessed the cryptocurrency skyrocket from mere cents to thousands of dollars in just a few years, while skeptics watched the value plummet just as quickly during market corrections. These wild price fluctuations were often linked to regulatory concerns, security breaches on exchanges, or broader economic factors affecting investor sentiment. As a result, Bitcoin has been viewed as a high-risk asset class, appealing mostly to speculative investors seeking quick profits.
However, in 2024, Bitcoin has largely traded within a narrow price range, maintaining a level of consistency that has surprised many market analysts. This newfound stability raises a critical question: Has Bitcoin matured to the point where it is now a viable long-term investment?
What’s Behind the Stabilization?
Several factors contribute to the current stability in Bitcoin’s price. Firstly, broader adoption of the cryptocurrency, both by institutional investors and by major corporations, has lent Bitcoin a degree of legitimacy and reduced the speculative swings that once defined it. In addition, as more financial products linked to Bitcoin—such as exchange-traded funds (ETFs) and futures contracts—become available, investors now have more sophisticated tools to manage their exposure, leading to a less volatile market.
Moreover, regulatory clarity in key markets like the U.S. and the European Union has eased concerns about government crackdowns, which have historically caused panic selling among investors. As global financial institutions increasingly view Bitcoin as a store of value or a hedge against inflation, the asset class is experiencing more widespread acceptance, stabilizing its price.
Is Bitcoin Becoming a Safe Investment?
The reduction in Bitcoin's volatility has left fewer analysts taking polarized stances on its viability as an investment. In past years, financial experts were often divided into two camps: those who believed Bitcoin was a bubble destined to burst, and those who viewed it as the future of money and a hedge against inflation. Now, the middle ground is becoming more populated, as even former skeptics acknowledge the cryptocurrency's growing resilience.
“Bitcoin has shown a remarkable ability to weather market turbulence and maintain a strong value proposition as a decentralized asset,” said one cryptocurrency analyst. “With price fluctuations becoming more subdued, Bitcoin is transitioning from being a speculative asset to a more stable form of digital gold.”
That said, experts caution that while Bitcoin's volatility has decreased, it is far from a "risk-free" investment. Cryptocurrencies remain vulnerable to external forces such as changes in regulation, technological disruptions, or macroeconomic trends. Nonetheless, the improved stability has made Bitcoin a more attractive option for investors who were previously deterred by its unpredictability.
Institutional Investors on Board
One of the key drivers of Bitcoin’s recent stability is the growing participation of institutional investors. Large financial firms, hedge funds, and even pension funds are increasingly allocating a portion of their portfolios to Bitcoin. This influx of capital has contributed to less erratic price movements, as institutional players are generally more focused on long-term gains rather than short-term speculation.
Major corporations are also adding Bitcoin to their balance sheets, seeing it as a hedge against inflation and currency devaluation. This corporate interest further strengthens Bitcoin’s position as a mainstream financial asset, fostering confidence among individual investors who may have once viewed it as a fringe investment.
Regulatory Developments
Regulation has long been a significant factor in Bitcoin’s price movements. In the early years, the threat of government crackdowns or the outright banning of cryptocurrency transactions could send prices into a tailspin. However, 2024 has seen increased regulatory clarity in many major markets. Governments are now implementing clear frameworks that allow for the responsible use of cryptocurrencies, reducing the uncertainty that once caused panic in the market.
This regulatory transparency has encouraged more investors to enter the market, knowing that their investments are safeguarded by legal protections. As countries continue to develop and refine their cryptocurrency regulations, Bitcoin could become even more stable, potentially cementing its status as a long-term investment vehicle.
The Future of Bitcoin: Is It Truly Risk-Free?
While Bitcoin’s recent stability is a promising development, experts urge caution. Cryptocurrency markets are still relatively young, and Bitcoin’s price could remain susceptible to factors like regulatory changes, technological advancements, or shifts in investor sentiment. Although Bitcoin may no longer be the wild rollercoaster it once was, it remains an asset with inherent risks.
Investors considering Bitcoin as part of their portfolio should carefully weigh its potential for growth against the risks that come with investing in a digital currency. The current trend of reduced volatility may continue, but Bitcoin’s future trajectory is far from guaranteed.
Conclusion
Bitcoin's transformation in 2024, from a notoriously volatile asset to one with more consistent price movements, has marked a significant turning point in its evolution as a financial asset. While the cryptocurrency is not entirely risk-free, its stability is making it a more attractive option for long-term investors. With increased institutional adoption, regulatory clarity, and reduced price swings, Bitcoin may finally be shedding its reputation as a speculative asset and evolving into a reliable investment option.
As always, investors should remain vigilant and stay informed about potential risks, but Bitcoin’s recent performance suggests that it is indeed moving toward becoming a more stable asset class in the global financial landscape.
The UAE's Ministry of Economy unveiled the National Economic Register (Growth) on September 30, a platform that serves as the largest economic database in the country. The platform offers a unified, real-time source of information on all commercial licenses for companies operating across the UAE.
The "Growth" platform allows businesses and investors to explore economic activities, as well as commercial and investment opportunities in various sectors. It also supports government efforts to reduce bureaucracy and improve the efficiency of public services using advanced AI technologies.
Abdulla bin Touq Al Marri, Minister of Economy, emphasized that the platform is the first of its kind in the UAE, providing a reliable, comprehensive database on business licenses for over 4,000 economic activities across the seven emirates. It enables users to verify company information, analyze market trends, and access accurate statistics for informed decision-making and policy development.
In addition to aiding businesses, the platform allows government entities to manage economic activities digitally, further enhancing the competitiveness of the UAE’s economic landscape.
Germany’s Federal Cartel Office (Bundeskartellamt) has announced plans to intensify its oversight of Microsoft, utilizing its expanded powers to regulate large tech firms. This move comes in response to concerns over Microsoft's market dominance in cloud computing, operating systems, and software, which could potentially stifle competition. The Cartel Office aims to ensure fair competition by monitoring whether Microsoft is using its influential market position unfairly.
This regulatory focus on Microsoft follows similar actions against other tech giants, including Amazon, Google, and Meta. These measures are part of broader efforts in the European Union to regulate major digital platforms and ensure the digital economy remains competitive and innovation-friendly. The German watchdog has a history of investigating anti-competitive practices, and its decision to scrutinize Microsoft is seen as part of its broader goal of curbing the power of major tech players.
The Importance of Microsoft's Cooperation
The success of this heightened scrutiny largely depends on Microsoft’s cooperation with the Federal Cartel Office. Microsoft has stated its willingness to engage with regulators and uphold competition laws. However, the tech giant’s ongoing regulatory compliance will be critical in determining the outcome of these investigations.
Microsoft’s expanding role in cloud computing and software solutions raises concerns among regulators over the possibility of market abuses. The Cartel Office will be evaluating whether Microsoft’s market practices are giving it undue advantages over competitors, particularly smaller firms that may be disadvantaged in a market dominated by a few key players.
Broader Implications for Big Tech in Europe
Germany’s actions against Microsoft are consistent with the European Union’s broader push to regulate Big Tech companies. The Digital Markets Act (DMA), passed in 2022, introduced significant obligations for large online platforms, aiming to curb monopolistic practices. With Microsoft now under similar scrutiny, the landscape for tech companies in Europe could see further shifts as competition authorities implement stricter oversight.
The Federal Cartel Office’s decision to prioritize Microsoft's case also reflects growing awareness of the need to foster innovation by preventing dominant companies from leveraging their position to block competitors. As Germany and the EU continue to refine their competition policies, tech firms like Microsoft will likely face ongoing regulatory pressure.
Looking Forward: Microsoft’s Future in the German Market
As the scrutiny continues, the question remains whether Microsoft will need to alter its business practices to comply with the new regulatory environment. German regulators will continue to assess the company’s influence over key market sectors, and their findings could lead to further actions, including fines or operational changes for Microsoft.
Overall, this latest development signals Germany’s commitment to ensuring a level playing field in the tech industry, where companies of all sizes can thrive without undue influence from dominant players like Microsoft.
The Brazilian Supreme Court has ruled that X, formerly known as Twitter, must pay fines for failing to comply with a court order requiring the appointment of a legal representative in Brazil. This decision comes after a prolonged legal dispute between the social media platform and Brazilian authorities, who have been pushing for compliance with local laws governing foreign companies operating in the country.
The Supreme Court’s ruling effectively means that X will be unable to resume its full activities in Brazil until the fines are paid and the company adheres to the legal requirement of having a local representative. This representative would serve as the company’s point of contact with the Brazilian government, ensuring that X complies with national regulations.
Brazil has been strict in enforcing its digital laws, particularly with foreign tech companies, as part of its efforts to regulate online content and hold platforms accountable for any legal issues that arise. The Brazilian authorities have expressed concerns over the role of social media in spreading misinformation, hate speech, and other harmful content, which has led to increased pressure on companies like X to conform to local laws.
X's parent company, now under the leadership of Elon Musk, has faced various legal challenges worldwide as it rebrands and restructures its operations. In Brazil, this non-compliance has resulted in fines, and the company must now act quickly to appoint a representative and settle the fines to restore its standing in the country.
It remains to be seen how X will navigate this legal hurdle, but the ruling sends a strong message that Brazil is serious about enforcing its regulations on international companies operating within its borders.
Microsoft Corporation is setting up an engineering development center in Abu Dhabi, marking the first of its kind in the Arab world. The center will focus on innovations in artificial intelligence (AI), cloud technologies, and advanced cybersecurity solutions. This initiative follows Microsoft’s earlier strategic investment of $1.5 billion in the UAE government-backed AI firm, G42.
The new development center aims to bring cutting-edge technologies to the region, positioning Abu Dhabi as a leader in digital innovation. Sheikh Khaled bin Mohamed bin Zayed Al Nahyan, Crown Prince of Abu Dhabi and Chairman of the Abu Dhabi Executive Council, highlighted the significance of the center, stating, “Abu Dhabi’s advanced digital and physical infrastructure, combined with the UAE’s strategic location at the heart of the world, enables us to drive positive, far-reaching impacts across industries and societies alike.”
Microsoft Chairman and CEO, Satya Nadella, emphasized that the Abu Dhabi center will attract new talent to the region and help spur innovation, driving economic growth and creating jobs for both the UAE and global markets.
In addition to this development, Microsoft and G42 recently announced plans to open two centers in Abu Dhabi dedicated to "responsible" AI. These centers will focus on ensuring the safe development, deployment, and use of generative AI models and applications.
Meanwhile, MGX, a technology investment company based in Abu Dhabi and founded by Mubadala and G42, along with Microsoft, BlackRock, and Global Infrastructure Partners, launched an AI infrastructure investment partnership. This partnership aims to mobilize up to $100 billion to advance the future of AI.
According to a PwC Middle East report, AI could contribute up to $96 billion to the UAE economy by 2030, accounting for 13.6% of the nation's GDP. This initiative by Microsoft is expected to play a significant role in achieving that growth.
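As a back-of-the-envelope check (an illustration, not a figure from the PwC report), those two numbers together imply a projected 2030 UAE GDP of roughly $706 billion:

```python
# Implied 2030 UAE GDP if AI's $96bn contribution represents 13.6% of it.
ai_contribution_bn = 96.0  # USD billions (PwC estimate cited above)
gdp_share = 0.136

implied_gdp_bn = ai_contribution_bn / gdp_share
print(f"Implied 2030 GDP: about ${implied_gdp_bn:,.0f} billion")
```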
In the digital age, where technology is rapidly evolving, the convenience of Artificial Intelligence (AI) tools, particularly chatbots like ChatGPT, has become indispensable for many users. These tools are being used for a wide variety of tasks such as preparing research, drafting emails, and even writing articles. However, a top Dubai Police official has issued a strong warning against the growing trend of sharing personal and sensitive information with these platforms.
The Risk of Oversharing
In an exclusive interview with Gulf News, Major Abdullah Al Sheihi, Acting Director of the Cyber Crime Department at Dubai Police, emphasized the potential dangers of oversharing information on AI-powered platforms. He stressed that while AI chatbots are increasingly being used for various purposes, users often fail to recognize the inherent risks of divulging personal data to these applications.
“AI applications have become very important to a huge number of users,” Major Al Sheihi noted. “They are relied upon for research, writing, email responses, and even managing everyday tasks. However, there is a downside that users must be aware of. These AI platforms, though designed to assist, can pose a significant threat to privacy and security if misused.”
The Danger of Trusting AI
The official pointed out that chatbots, such as ChatGPT, may appear to be harmless and trustworthy but are designed to analyze large amounts of data, including potentially sensitive or personal information provided by users. While these tools are intended to provide accurate responses based on user queries, they can inadvertently collect and store personal data, putting users at risk of cybercrime, identity theft, and data breaches.
“There is a misconception among users that these tools are completely secure,” Al Sheihi explained. “In reality, AI chatbots could store data that might be accessed or exploited by cybercriminals, especially if proper security protocols are not in place by the developers. It’s essential that people avoid sharing personal information such as addresses, phone numbers, or financial details with these platforms.”
Dubai Police's Cybercrime Warnings
Dubai Police have been at the forefront of raising awareness about the threats posed by cybercrime and how technological advancements can be exploited by malicious actors. Major Al Sheihi emphasized that the cybercrime landscape is constantly evolving, and criminals are increasingly leveraging AI tools to target unsuspecting individuals. Chatbots and AI platforms can become valuable assets in their toolkit, capable of gathering sensitive data through seemingly innocent interactions.
To protect users from these emerging threats, Dubai Police have launched various campaigns to educate the public on the risks associated with online platforms, including AI tools. The police urge individuals to exercise caution and avoid disclosing personal or sensitive information in interactions with AI applications.
Practical Tips for Users
To mitigate the risks of data misuse and cybercrime, Dubai Police have outlined several precautionary measures that users should adopt when engaging with AI tools like ChatGPT:
Limit the Sharing of Personal Information: Avoid sharing personal identifiers such as your full name, address, phone number, or banking details when using AI chatbots.
Verify the Security of Platforms: Before using an AI tool, research its developer and ensure the platform follows robust security measures to protect user data.
Use AI Responsibly: While AI tools can be incredibly useful, they should be used with caution. Rely on them for general tasks but refrain from using them for confidential or sensitive matters.
Stay Informed: Keep up with updates and alerts from cybersecurity experts and local authorities about the latest online threats, especially those related to AI tools.
Report Suspicious Activity: If you suspect your data has been compromised through an AI platform, report it immediately to the appropriate authorities, such as Dubai Police’s Cyber Crime Department.
A Global Concern
The concerns raised by Dubai Police are not isolated. Globally, cybersecurity experts have highlighted the potential risks associated with AI tools, which have grown in popularity but are still in the process of being fully regulated. As the use of AI continues to expand across industries, governments, and law enforcement agencies worldwide are grappling with how best to protect users’ privacy while encouraging the responsible use of these powerful technologies.
Conclusion
As AI technology continues to permeate daily life, its advantages are undeniable, but so are its risks. Dubai Police’s warnings highlight the importance of being vigilant and responsible while using AI applications. Users must recognize that their personal data, once shared, may be vulnerable to misuse. By following precautionary measures and staying informed about the latest cybersecurity threats, individuals can better protect themselves in an increasingly AI-driven world.
Dubai Police remains committed to ensuring the safety of its citizens and residents, encouraging everyone to be cautious when interacting with AI tools and reminding the public that the convenience of technology should never come at the expense of security.
Amazon.com Inc. has unveiled a new artificial intelligence (AI) assistant, codenamed Project Amelia, designed to help online merchants navigate and enhance their business operations on the platform. The announcement came during Amazon's annual Accelerate conference, marking another major step in the company's efforts to stay ahead in the competitive AI landscape alongside tech giants Microsoft, Google, and OpenAI.
A New AI-Driven Era for Amazon Sellers
The introduction of Amelia aims to simplify and streamline the often-complex process of selling on Amazon, particularly for smaller merchants who may not have the resources to manage every detail of their online business. Amelia is capable of answering a wide range of questions, from how sellers should prepare for peak shopping periods like the holiday season to offering optimized product listing suggestions.
Amelia’s design focuses on practical, real-time assistance for merchants. It can generate product descriptions, create or modify images, and even help sellers develop product videos—an increasingly important tool for engaging customers in the e-commerce space. The AI assistant was introduced as part of a suite of tools Amazon is rolling out for its marketplace sellers, who account for the majority of sales on the platform.
Building on Amazon’s AI Foundations
Amazon’s push to integrate AI across its platform reflects the broader competitive landscape in the tech industry. The company has been increasingly relying on AI to enhance both customer and seller experiences. Amelia is built atop Bedrock, a software platform that simplifies access to large language models from third parties as well as Amazon's own proprietary models.
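The Bedrock layer described above can be pictured with a short, hypothetical sketch. This is not Amazon's actual Amelia code; the model ID, payload shape, and question text are illustrative assumptions based on Bedrock's public boto3 interface (the `bedrock-runtime` client and its `invoke_model` call):

```python
import json

# Illustrative sketch only (assumption: not Amazon's actual Amelia implementation).
# Bedrock puts third-party and Amazon-built models behind one API; a seller
# assistant built on it would assemble a prompt and invoke a chosen model.

def build_request(question: str, model_id: str = "amazon.titan-text-express-v1") -> dict:
    """Build the payload an invoke_model call would send (model ID is an assumption)."""
    return {
        "modelId": model_id,
        "body": json.dumps({"inputText": question}),
        "contentType": "application/json",
    }

req = build_request("How should I prepare my listings for the holiday season?")

# In a real deployment (assumption: valid AWS credentials and region configured):
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(**req)
```

Because Bedrock abstracts the underlying model, a tool like Amelia could in principle swap or combine models without changing the seller-facing interface.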
During a demonstration at the Accelerate conference, Amelia was shown helping sellers generate bullet points about their product lines and providing recommendations. Over time, the AI assistant is expected to become more personalized and anticipatory, adapting to the needs of individual merchants. Dharmesh Mehta, Amazon’s vice president of Worldwide Selling Partner Services, also stated that Amelia will eventually be able to take certain actions autonomously on behalf of sellers.
Amazon’s Larger AI Ambitions
Amazon has been heavily investing in AI-powered solutions across its platform. Recently, it introduced Amazon Q, a workplace chatbot designed to assist corporate clients with searching for information, writing code, and reviewing business metrics. Meanwhile, Rufus helps consumers with product comparisons on the Amazon website.
For marketplace sellers, in particular, Amazon has rolled out various AI-driven tools aimed at optimizing product listings and improving business operations. These include software that helps sellers enhance their listings, create more compelling imagery, and, as of Thursday’s announcement, tools for creating product videos. Amelia is currently available in beta for a select group of sellers and is set to roll out across the U.S. in the coming month, with plans for international availability by the end of the year.
AI and Seller Autonomy
While Amelia is seen as a tool to enhance merchant autonomy, Amazon’s relationship with its third-party sellers has often been criticized for being overly reliant on algorithms. Many sellers have expressed frustration over account suspensions due to algorithmic errors, which they say can occur without explanation or proper recourse.
In response to such concerns, Amazon demonstrated at the conference how Amelia would handle common seller issues. For instance, if a product shipment is missing from Amazon’s records, Amelia would attempt to solve the problem. If unable to do so, it can escalate the issue to Amazon’s support team, showing a clear recognition of the need for more human interaction in the automated seller relationship.
Legal Opinion: Implications of AI Assistance for Sellers
While AI tools like Amelia promise to provide substantial benefits to online merchants, there are several legal and operational concerns that sellers should be aware of. One major issue is liability—if Amelia provides incorrect guidance or if an automated action taken on behalf of a seller leads to a negative outcome, such as a financial loss or breach of contract, who would bear the responsibility? Amazon’s terms of service likely include limitations on liability for these AI-driven tools, but merchants should thoroughly review these terms to understand the legal risks.
Furthermore, the increasing use of AI in Amazon's marketplace raises privacy and data protection concerns. Sellers should be vigilant about what data is shared with Amazon’s AI systems, especially sensitive business information that could potentially be exposed to unauthorized parties. Additionally, sellers should ensure compliance with international data protection regulations, such as the General Data Protection Regulation (GDPR), if they are operating in or serving European customers.
Finally, there is the broader question of fair competition. As Amazon continues to automate and streamline seller processes through AI, smaller businesses could be at a disadvantage if they lack the technological literacy or resources to fully utilize these tools. Regulatory authorities may need to examine whether the increasing reliance on AI in marketplaces like Amazon creates barriers to entry or unfairly benefits larger, more tech-savvy sellers.
Conclusion
Amazon’s launch of Amelia, an AI assistant designed to simplify the selling experience for merchants, represents a significant leap in the company's use of artificial intelligence. By offering personalized support for managing product listings, preparing for key sales seasons, and troubleshooting issues, Amelia has the potential to make selling on Amazon easier and more efficient. However, merchants should be aware of the potential legal and operational risks associated with relying on AI-powered tools and take the necessary precautions to safeguard their businesses.
As Amazon continues to expand its suite of AI solutions, the dynamics of online selling are likely to evolve further, making it essential for merchants to stay informed about both the benefits and risks that these new technologies present.
In today’s rapidly digitizing world, it isn’t merely boardroom pressures that keep the chief executives of global financial institutions awake at night. Instead, it’s the growing concern over cybersecurity risks that threatens the very core of their operations. For banks managing trillions of dollars in assets, the rise of digital technologies has also meant an increasing number of cyber threats that traditional measures are struggling to contain.
Jane Fraser, the CEO of Citigroup, succinctly captures this anxiety, stating that cybersecurity risks are the ones “you can't really control.” Despite significant investments aimed at mitigating these risks, Fraser and many of her counterparts across the financial services industry acknowledge that cyber threats remain a top concern.
This sentiment is echoed in the UAE, where Ahmed Abdelaal, CEO of Mashreq Bank, highlights cybersecurity as the number one threat facing financial institutions today. "If I am not paying equal attention to this important front, then I am not doing my job," he asserts, emphasizing that while innovation and business expansion are vital, neglecting cybersecurity can undermine an institution’s entire operation.
The increasing interconnectedness of global finance, coupled with the introduction of technologies like the Internet of Things (IoT), machine learning, and artificial intelligence, has exposed financial institutions to vulnerabilities they never faced before. For banks in the UAE and beyond, the stakes are higher than ever as cybercriminals become more sophisticated.
The Financial Sector as a Prime Target
Financial institutions are especially attractive to cybercriminals due to their vast monetary resources and the immense amounts of personal data they store. James Maude, CTO of BeyondTrust, notes, “When it comes to cyber threats, they follow the money, making banks and financial institutions a big target.” Indeed, the consequences of such attacks are not limited to individual victims but have the potential to disrupt entire economies.
In 2024, cyber threats ranked as the second most concerning issue for global banks, just behind inflation and rising interest rates, according to research firm GlobalData. However, there is a growing disconnect between the magnitude of these threats and the resources allocated to combat them. Many institutions face cuts in cybersecurity budgets, which could have serious long-term implications.
Despite these challenges, spending on cybersecurity continues to rise. Banks are expected to spend more than $8.5 billion globally on cybersecurity in 2024, nearly double the $4.29 billion spent in 2019. Institutions like JPMorgan and Bank of America have ramped up their efforts significantly, with annual expenditures reaching hundreds of millions of dollars to ward off attacks.
UAE’s Regulatory Landscape and Initiatives
In the UAE, the regulatory environment is evolving in response to these risks. Mohammed Al Kuwaiti, Chairman of the UAE Cybersecurity Council, has announced that the executive regulations for a new encryption law, aimed at establishing key standards for data transmission security, are expected to be finalized by the end of the year. This move aligns the UAE’s cybersecurity infrastructure with the rapidly advancing global technological landscape, particularly in preparation for the challenges posed by quantum computing.
Quantum computing, while still in its nascent stages, poses a serious threat to the financial services industry. Experts warn that as quantum computing advances, current encryption methods could become obsolete. David Boast, managing director at Endava, points out that quantum computers will be capable of dismantling the secure firewalls and encryption banks use today.
The UAE’s proactive approach to regulating and preparing for these emerging technologies reflects a deep understanding of the cybersecurity challenges ahead. As quantum computing inches closer to becoming a reality, post-quantum cryptography algorithms, which are resistant to the power of quantum computing, will be essential for protecting financial data.
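As a rough illustration of why current methods are at risk (a simplified sketch based on well-known cryptographic rules of thumb, not drawn from the article): Shor's algorithm would break public-key schemes such as RSA and elliptic-curve cryptography outright, while Grover's search algorithm halves the effective strength of symmetric keys:

```python
# Simplified sketch (not from the article): widely cited rules of thumb for
# how a large-scale quantum computer would affect today's cryptography.
# Grover's algorithm gives a quadratic speedup on brute-force key search,
# so an n-bit symmetric key retains roughly n/2 bits of effective security.
# Shor's algorithm breaks RSA/ECC entirely, regardless of key size.

def grover_effective_bits(key_bits: int) -> int:
    """Approximate effective security of a symmetric key against a quantum attacker."""
    return key_bits // 2

for key_bits in (128, 192, 256):
    print(f"AES-{key_bits}: ~{grover_effective_bits(key_bits)}-bit post-quantum security")
```

This asymmetry is why post-quantum planning emphasizes larger symmetric keys but entirely new public-key algorithms, such as the lattice-based schemes being standardized internationally.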
The Cost of Cybersecurity Breaches
The financial costs of a data breach in the financial sector are significant. According to IBM’s 2024 report, the average data breach cost in the financial sector exceeds $6 million, making it the second-most expensive industry after healthcare. In the UAE, the financial sector's heavy reliance on digital banking makes it particularly vulnerable, as cyber attackers target institutions integral to the economy.
For banks in the UAE, the focus on cybersecurity must not only address technological solutions but also ensure that clients are educated on the risks. Abdelaal of Mashreq Bank underscores the importance of client-side security, warning that even the most robust firewalls can be breached by simple user errors such as clicking on phishing links.
In conclusion, the financial sector’s cybersecurity battle is far from over. For UAE banks and financial institutions, the stakes are high, and the cost of inaction could be devastating. As cybercriminals continue to evolve, so too must the strategies employed by banks to defend against them. Investing in cutting-edge technologies, regulatory preparedness, and client education will be key to mitigating these risks and securing the future of finance in the UAE.
In recent months, the AI industry has been under scrutiny, especially with advancements in artificial intelligence capabilities that utilize large language models. One of the notable players in this field, OpenAI, has found itself at the center of a legal storm, facing allegations of copyright infringement from prominent authors. These authors, including Pulitzer Prize winners and other bestselling writers, claim that OpenAI's models, like ChatGPT, have been trained on their copyrighted works without proper authorization or compensation. However, OpenAI has firmly denied these allegations, arguing that its practices comply with fair use principles and are integral to the technological innovation that drives the AI industry forward.
Background of the Allegations
The core of the controversy lies in how OpenAI trains its large language models. These models require vast amounts of text data to learn language patterns, syntax, semantics, and the ability to generate coherent and contextually relevant responses. According to the complaints filed, these data sets allegedly include books, articles, and other written works protected by copyright laws. Notable authors, including George R.R. Martin and John Grisham, have filed lawsuits, arguing that OpenAI's use of their literary works constitutes a direct infringement of their exclusive rights to reproduce and distribute their content.
OpenAI's Defense: Fair Use and Technological Innovation
In response to these allegations, OpenAI has mounted a robust defense, citing the doctrine of fair use as a legal shield. The company argues that the use of copyrighted texts in training its models constitutes a transformative use, which is a key factor in fair use analysis. OpenAI claims that the AI does not replicate or replace the original works but instead uses them to learn general language principles, which can then be applied to a wide range of tasks, from answering questions to creative writing prompts.
OpenAI’s spokesperson highlighted that the AI-generated outputs are not simple reproductions of the original texts. Rather, they are new creations that may be inspired by or reflect patterns learned from the training data. This transformative nature, OpenAI argues, places their use within the bounds of fair use, a concept embedded in U.S. copyright law to allow for new and innovative works that benefit society.
Moreover, OpenAI underscores the importance of AI development and innovation. The company believes that restrictive interpretations of copyright law that hamper the development of AI technologies could stifle creativity and technological progress. They argue that the benefits of AI, which include applications in healthcare, education, and other critical sectors, far outweigh the concerns posed by these lawsuits.
The Authors' Concerns: Protecting Creative Rights
On the other side of the argument, authors express concern about the potential erosion of their intellectual property rights. They argue that if companies can freely use their copyrighted works to train AI models without compensation or authorization, it could undermine the incentive structure that underpins the creative industry. Authors emphasize the need for a legal framework that protects their rights while balancing the interests of technological innovation.
The lawsuits filed against OpenAI not only seek monetary damages but also call for greater transparency in how AI companies use copyrighted materials. They advocate for mechanisms that would ensure authors are compensated for the use of their works in training AI systems, akin to the royalties they receive for other types of usage.
Legal Landscape and Potential Implications
The outcome of these lawsuits could have far-reaching implications for the AI industry and copyright law. If the courts rule in favor of the authors, it could set a precedent requiring AI companies to obtain licenses or permissions before using copyrighted works for training purposes. This could increase costs and regulatory requirements for developing AI technologies. Conversely, a ruling in favor of OpenAI could affirm the applicability of fair use in AI training, providing a legal framework that supports the continued growth and innovation of AI technologies.
The cases also raise broader questions about the balance between protecting intellectual property rights and promoting technological advancement. As AI continues to evolve and integrate more deeply into various sectors, the need for clear legal guidelines becomes more pressing. The decisions made in these cases could influence future legislation and policies, not only in the United States but globally, as other countries grapple with similar issues.
Conclusion
As the legal battle unfolds, both sides present compelling arguments. OpenAI's defense hinges on the transformative nature of AI and the broader societal benefits of technological progress, while authors focus on protecting their creative rights and ensuring fair compensation. The outcome of these cases will likely shape the future of AI development and the rights of content creators in the digital age.
Regardless of the verdict, it is evident that the legal, ethical, and societal implications of AI technologies require thoughtful consideration. Finding a balance that respects both the innovation brought by AI and the rights of creators is essential to fostering a future where technology and creativity can thrive together. As courts and policymakers navigate these uncharted waters, the decisions made will undoubtedly play a pivotal role in shaping the evolving landscape of intellectual property and artificial intelligence.
(The writer is an Associate specializing in Intellectual Property and Copyright Law at The Law Reporters.)
In a groundbreaking decision, a Dubai court has recognized cryptocurrency as a legitimate form of salary payment. This move marks a significant shift towards modernizing financial systems in the UAE, reinforcing the country's reputation as a global leader in embracing cutting-edge financial technologies.
Current Payment Systems in the UAE
Traditionally, salaries in the UAE are paid in the local currency, AED, through conventional banking systems under the Wage Protection System (WPS). This ensures timely payment and compliance with local labour laws. However, with the rise of fintech solutions, digital and contactless payments are becoming more popular, and blockchain technology is increasingly adopted by government entities for enhanced security and transparency.
Benefits of Cryptocurrency for Salary Payments
Regulatory Compliance
The UAE has already established a regulatory framework for crypto activities through the Dubai Virtual Asset Regulatory Authority (VARA). Businesses paying salaries in crypto must adhere to local regulations, including anti-money laundering and reporting standards, ensuring a secure and compliant financial environment.
The Future of Crypto in the UAE
This decision positions the UAE as a leader in blockchain and fintech innovation, paving the way for increased crypto adoption across various sectors. It also enhances the UAE's appeal as a destination for tech talent and global fintech companies. The official recognition of crypto salaries can drive new financial services, such as crypto-backed loans and investment options, contributing to a more diverse and robust financial ecosystem.
The recognition of cryptocurrency as a valid salary payment method is a major milestone for the UAE's financial landscape. It underscores the country's commitment to innovation and sets a precedent for other nations considering similar moves. As the UAE continues to embrace blockchain and digital currencies, it is poised to lead the way in the global fintech revolution.
(The writer is an Associate specializing in Crypto and Employment Law at The Law Reporters)
The Federal Law No. 34 of 2021 (“Cybercrimes Law”) introduces significant changes to the UAE's legal framework regarding cybercrimes, replacing the previous legislation, Federal Law No. 5 of 2012. One of the key updates in the law includes the explicit use of the term "hacking," a common term in the cyber world, to describe unauthorized access to websites and electronic platforms, offering clearer provisions and stronger penalties.
Key Changes and Provisions
Article 4: IT Offences – Damage to Information Systems
Basic Penalty: Imprisonment for at least one year and/or a fine ranging from AED 500,000 to AED 3,000,000 shall apply to anyone who deliberately damages, disables, suspends, or causes harm to an electronic system, website, or information network, as defined in the Cybercrimes Law.
If the damage or disruption affects a banking, medical, media, or scientific institution, the penalty increases to imprisonment for a minimum of 3 years and a maximum of 15 years.
Article 11: Fabrication of Mail, Websites, and False Electronic Accounts
Creating a false email, website, or electronic account that is falsely attributed to a natural or legal person will result in imprisonment and/or a fine ranging from AED 50,000 to AED 200,000.
Imprisonment of at least 2 years applies if the fabricated account, email, or website is used to harm the victim.
If a fabricated account, email, or website is falsely attributed to a state institution, the penalty is imprisonment for up to 5 years and a fine of AED 200,000 to AED 2 million.
Article 48: Consumer Protection and Misleading Promotion
Imprisonment and/or a fine of AED 20,000 to AED 500,000 for promoting or advertising misleading information, including incorrect data regarding a commodity or service.
A fine of AED 20,000 to AED 500,000 for advertising, promoting, or dealing with virtual or digital currencies not recognized by the UAE without a proper license from the competent authorities.
Article 49: Promotion of Medical Products Without Authorization
Any promotion or sale of unauthorized or counterfeit medical products online can lead to imprisonment and/or a fine, depending on the nature and extent of the violation.
Article 55: Bribery for Spreading Illegal Content or False Statements
Anyone who accepts or offers gifts or benefits in exchange for publishing illegal or false content faces imprisonment and fines of up to AED 2 million. If they supervise or manage an abusive account or website, they may face the same penalty. Additionally, authorities may designate websites as offensive if they repeatedly publish false data or illegal content.
Saudi Justice Minister Dr Walid Al-Samaani has instructed the expansion of mobile services through the Najiz app, which will now provide 90 judicial services encompassing all judicial sectors.
The Ministry of Justice aims to improve user experience and decrease the time and effort needed to access judicial services.
The Najiz app delivers a comprehensive range of services, including judiciary, enforcement, documentation, and support functions.
The digital platform enables users to complete their transactions without needing to visit court buildings.
X Corp. and owner Elon Musk defeated one of the lawsuits filed over the firing of thousands of employees after the billionaire’s takeover of the social media platform in October 2022.
The suit alleged that X, formerly known as Twitter, and Musk owed at least $500 million in severance pay to about 6,000 laid-off employees under provisions of the federal Employee Retirement Income Security Act (ERISA), which sets rules for benefit plans.
The two plaintiffs, the company’s former global head of compensation and benefits and another ex-manager, said workers got severance equal to only one month’s pay.
But US District Judge Trina Thompson in San Francisco ruled that the employees’ claims weren’t covered by ERISA because the company had told employees after Musk’s takeover that anyone let go would receive only cash payouts.
Several similar cases filed by former Twitter employees and executives are moving through the courts.
The case is McMillian v. Musk, 23-cv-03461, US District Court, Northern District of California (San Francisco).
The Indian government under Prime Minister Narendra Modi has intervened following a Reuters report that exposed Foxconn's practice of excluding married women from iPhone assembly jobs at its main plant in Tamil Nadu.
The Ministry of Labour and Employment has invoked the Equal Remuneration Act of 1976, which explicitly prohibits discrimination in hiring based on gender. In a statement, the ministry called for a detailed report from the labour department of Tamil Nadu, the state where the iPhone factory in question is located.
Additionally, the ministry directed the Regional Chief Labour Commissioner to provide a factual report on the situation. Neither Apple nor Foxconn immediately responded to requests for comment on the government's statement. The Tamil Nadu state government also did not respond to Reuters' request for comment outside of regular office hours.
A Reuters investigation published earlier revealed that Foxconn systematically avoided hiring married women, citing reasons such as family responsibilities, pregnancy, and higher absenteeism compared to unmarried women. The Ministry of Labour noted these reports and emphasized the legal framework prohibiting such discriminatory practices.
In response to questions raised in the Reuters report, Apple and Foxconn acknowledged previous lapses in hiring practices in 2022 and stated that corrective actions had been taken. However, the discriminatory practices documented at the Sriperumbudur plant occurred in 2023 and 2024, for which Apple and Foxconn did not provide specific responses.
Apple clarified that they took immediate action in 2022 upon learning of concerns about hiring practices and had implemented monthly audits to ensure compliance with their standards across all suppliers, including Foxconn. Foxconn, on the other hand, strongly denied allegations of discrimination based on marital status, gender, religion, or any other grounds.
Legal experts cited by Reuters pointed out that while Indian law does not explicitly prohibit companies from discriminating in hiring based on marital status, both Apple and Foxconn have policies against such practices within their supply chains.
The UAE Ministry of Economy (MoE) has signed a memorandum of understanding (MoU) with the Spanish National Professional Football League ‘La Liga’ to establish a laboratory aimed at combating piracy and protecting intellectual property rights in the UAE.
The initiative will focus on detecting and addressing the illegal use of audio and visual content across digital platforms. The project, executed in collaboration with the Telecommunications Regulatory Authority and the Digital Government (TDRA), will be established in Dubai Media City.
The MoU was signed by Abdullah bin Ahmed Al Saleh, Undersecretary of the Ministry of Economy, and Javier Tebas, President of La Liga, in the presence of Major General Dr Abdul Quddus Abdul Razzaq Al Obaidly, Assistant Commander-in-Chief for Excellence and Pioneering at Dubai Police and Chairman of the Emirates Intellectual Property Association; Abdullah Balhoul, CEO of TECOM Group; and Majid Al Suwaidi, Senior Vice President of TECOM Group - Dubai Media City.
Al Saleh emphasised the UAE's commitment to building a robust intellectual property system aligned with the best global practices.
“The UAE has established a legislative framework that is highly adaptable and competitive on both regional and international levels, enhancing its role as a premier global centre for creativity and innovation. This aligns with the 'We the UAE 2031' vision to position the country as a global hub for the new economy and a thriving society by the next decade,” he said.
“The MoU marks a significant milestone in our efforts to strengthen the comprehensive protection of intellectual property applications and creative works in the UAE. Through our collaboration with La Liga, we aim to establish frameworks for blocking websites that infringe upon intellectual property rights in the country, aligning with the best global practices.
“It also focuses on strengthening the UAE’s collaborations in combating intellectual property infringements and supporting global initiatives in this field. Additionally, this new project will bolster the Ministry’s ‘InstaBlock’ initiative, which was launched in February as part of its new intellectual property system initiatives,” he added.
Javier Tebas, President of La Liga, said: "It is a historic act because we are at a moment where intellectual property in the sports industry is completely threatened. We have more than 10 years of experience in this fight around the world, which is why we know that this agreement is unique.
"This agreement is an example of how public and private authorities can understand each other and create collaborative spaces against audiovisual fraud. We are seeing with the latest resolutions that we can fight piracy with technology. The Emirates is an example to follow, a pioneer in the world and unique in this activity.
"We know that we will not only defend La Liga but also many other sports and audiovisual properties. We must defend this industry that belongs to everyone."
Abdullah Balhoul, CEO of TECOM Group, said: “Protecting intellectual property is one of the key pillars in advancing a knowledge-based economy. The UAE and Dubai have been pivotal in this effort, utilising their status as global hubs for creativity.
“Through specialised business districts like Dubai Media City, the TECOM Group has created integrated business environments that attract top talent from around the world. The Group has succeeded in attracting global companies and top talent in six strategic sectors, thanks to the UAE and Dubai's state-of-the-art infrastructure, supported by legislative and regulatory frameworks that prioritise innovation and growth.
“TECOM Group’s media sector includes over 3,500 clients working within Dubai Media City, Dubai Studio City, and Dubai Production City. Our goals align with forward-looking government strategies such as ‘We the UAE 2031’ and the Dubai Economic Agenda D33. We are pleased to welcome La Liga in Dubai Media City, affirming our steadfast commitment to supporting the Ministry of Economy’s efforts to cement the UAE’s position as a leading global destination for creativity and innovation.”
Through this project, the MoE seeks to encourage investment in advanced technology and digital innovations, along with the various services offered by the laboratory.
The primary objective is to enhance the protection of intellectual and creative rights within the country, in line with the Ministry’s strategic goals of fostering leadership and competitiveness in innovation and intellectual property rights.
This initiative also aims to empower national creative talents to utilise intellectual property applications, thereby contributing to the development of a knowledge and innovation-driven national economy.
The MoE has outlined plans to complete the project within three years in collaboration with its partners. The Anti-Piracy Lab, which will be established in Dubai Media City, will be similar to La Liga’s Anti-Piracy Lab in Madrid. The lab will utilise cutting-edge technological and digital tools to detect, analyse and remove illegally used audiovisual content, adhering to industry best practices.
Dubai was chosen due to its collaborative efforts with relevant government bodies to formulate policies promoting creative industries and safeguarding intellectual property rights. The city also contributes to the development of a legal and regulatory framework that supports innovation and creativity in the media industry.
The MoE, through its ‘InstaBlock’ initiative, successfully blocked 1,117 websites that infringed upon the copyright of creative content on digital platforms during the holy month of Ramadan 2024, compared to 62 sites in Ramadan 2023.
WikiLeaks founder Julian Assange walked free on Wednesday from a court on the US Pacific island territory of Saipan after pleading guilty to violating US espionage law, in a deal that will see him return home to Australia.
His release ends a 14-year legal saga in which Assange spent more than five years in a British high-security jail and seven years in asylum at the Ecuadorean embassy in London, battling extradition to the US, where he faced 18 criminal charges.
During the three-hour hearing, Assange pleaded guilty to one criminal count of conspiring to obtain and disclose classified national defence documents but said he had believed the US Constitution's First Amendment, which protects free speech, shielded his activities.
"Working as a journalist I encouraged my source to provide information that was said to be classified in order to publish that information," he told the court. "I believed the First Amendment protected that activity but I accept that it was ... a violation of the espionage statute."
Chief US District Judge Ramona V. Manglona accepted his guilty plea and released him due to time already served in a British jail. "We firmly believe that Mr Assange never should have been charged under the Espionage Act and engaged in (an) exercise that journalists engage in every day," his US lawyer, Barry Pollack, told reporters outside the court.
WikiLeaks' work would continue, he said. His UK and Australian lawyer, Jennifer Robinson, thanked the Australian government for its years of diplomacy in securing Assange's release.
"It is a huge relief to Julian Assange, to his family, to his friends, to his supporters and to us and to everyone who believes in free speech around the world that he can now return home to Australia and be reunited with his family," she said.
Assange, 52, left the court through a throng of TV cameras and photographers without answering questions, then waved as he got into a white SUV. He is set to leave Saipan on a private jet accompanied by Australia’s ambassadors to the US and UK, heading to the Australian capital Canberra, where they are expected to land around 7 pm (0900 GMT), according to flight logs.
Assange had agreed to plead guilty to a single criminal count, according to filings in the US District Court for the Northern Mariana Islands. The US territory in the western Pacific was chosen due to his opposition to travelling to the mainland US and for its proximity to Australia, prosecutors said.
Dozens of media from around the world attended the hearing, with more gathered outside the courtroom to cover the proceedings. Media were not allowed inside the courtroom to film the hearing.
"I watch this and think how overloaded his senses must be, walking through the press scrum after years of sensory deprivation and the four walls of his high-security Belmarsh prison cell," Stella Assange, the WikiLeaks founder's wife, said on social media platform X.
Long Saga
Australian-born Assange spent more than five years in a British high-security jail and seven holed up in the Ecuadorean embassy in London as he fought accusations of sex crimes in Sweden and battled extradition to the US.
Assange's supporters view him as a victim because he exposed US wrongdoing and potential crimes, including in conflicts in Afghanistan and Iraq. Washington has said the release of the secret documents put lives in danger.
The Australian government has been advocating for his release and has raised the issue with the United States several times. "This isn't something that has happened in the last 24 hours," Prime Minister Anthony Albanese told a news conference on Wednesday.
"This is something that has been considered, patient, worked through in a calibrated way, which is how Australia conducts ourselves."
Online fraud is a collective term for various types of malicious activities, such as phishing, identity theft, data breaches and ransomware attacks. Cybercriminals use diverse attack vectors, including malicious software, spoofed websites and elaborate phishing schemes, to trick victims into revealing personal information, financial information, or access to secure networks.
In the ever-changing digital economy of Dubai, online fraud has become a major menace for both companies and clients. The financial and operational impacts are substantial, with 42 per cent of UAE organisations reporting increased fraud within just one year.
Firms incur an average cost of Dh4.19 per dirham lost to fraud, which includes direct financial losses as well as other costs related to internal labour, external fees, interest paid and replacement costs for goods obtained through theft or loss.
Digital payments have transformed the payment landscape with improved convenience and ease in transactions, but they also expose users to new threats from cyber criminals who often target digital channels.
Across the EMEA (Europe, the Middle East and Africa) region, digital channels now account for 52 per cent of fraud losses, surpassing physical fraud for the first time.
The anonymity and speed of digital, cross-border transactions enable cybercriminals to execute untraceable fraud with alarming ease.
Moreover, the sophistication of cyber-attacks is escalating, driven by advancements in technology such as artificial intelligence (AI), which enhances the ability of criminals to exploit both consumers and businesses.
Legal Implications and Preventive Strategies
As one of the UAE's seven emirates, Dubai has recognised the urgent need to safeguard its rapidly expanding digital economy and operates within a strong federal regulatory framework built to combat cybercrime.
Federal Decree-Law No. 5 of 2012 on Combatting Cybercrimes, commonly known as the UAE Cybercrimes Law, is the cornerstone of this framework. It provides comprehensive measures to prevent and penalise various forms of cybercrime, including online fraud. Key aspects of the UAE Cybercrimes Law include:
Article 2: Criminalises unauthorised access to electronic websites, systems, or information networks, with harsh consequences for causing damage, interference, or altering information.
Article 3: Covers crimes involving communication interception, such as hacking and eavesdropping.
Article 4: Addresses cyber forgery and prohibits the unauthorised use, alteration, or copying of data, documents, or electronic records.
Article 11: Targets internet fraud specifically, punishing offenders who unlawfully obtain property, advantages, or rights by deceit, impersonation, or fraudulent schemes with harsh fines and/or imprisonment.
The UAE has implemented specific regulations to tackle online fraud in addition to the general provisions of the Cybercrimes Law. These provisions are designed to address the unique challenges posed by digital transactions and cyber threats:
The Electronic Commerce Act (Federal Law No. 1 of 2006): Governs electronic commerce in the UAE, ensuring that digital contracts and transactions are valid and enforceable, and setting out security requirements that help prevent fraud and hacking.
Data Protection Legislation: Safeguards personal and sensitive information, thereby reducing the risks of identity theft and data breaches.
Payment Systems Regulations: Issued by the Central Bank of the UAE, these rules ensure the security and integrity of electronic payment systems, minimising opportunities for financial fraud.
Local authorities play a critical role in enforcing these laws. The Dubai Police Cyber Crime Unit uses forensic tools to investigate and fight cybercrime; the Dubai Electronic Security Centre (DESC) is mandated to enhance cybersecurity across the emirate; and the Telecommunications and Digital Government Regulatory Authority (TDRA) promotes cybersecurity awareness and initiatives.
Moreover, the UAE Cybercrimes Law provides for strict punishments, including fines ranging from Dh50,000 to Dh3 million, imprisonment and asset forfeiture.
The Dubai government has implemented several measures aimed at curbing online fraud, including awareness campaigns targeting public online risks and promoting secure internet behaviours.
Organisations like DESC prioritise technological advancements, utilising AI and blockchain technology in fraud detection and prevention efforts. AI analyses big data to identify patterns indicative of fraudulent activity, while blockchain technology offers a secure way of maintaining transaction records, guaranteeing data integrity. Imposing tough penalties on offenders helps enforce stringent cybercrime laws, thereby providing a safer internet environment.
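The pattern-detection idea described above can be illustrated with a minimal, self-contained sketch. This is not how DESC's actual systems work (those details are not public); it only shows the simplest form of statistical fraud flagging, a robust outlier score built on the median absolute deviation (MAD):

```python
# Illustrative sketch only: DESC's real systems are not public. This shows
# pattern-based fraud flagging in its simplest form, using a MAD-based
# modified z-score that stays stable even when the data contains the very
# outliers we want to detect.
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Return indices of transactions whose modified z-score exceeds `threshold`."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:
        return []  # all values (nearly) identical: nothing to flag
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

history = [120, 95, 130, 110, 105, 98, 25000, 115]  # hypothetical Dh amounts
print(flag_anomalies(history))  # [6]: the Dh25,000 transaction
```

Production systems combine many more signals (merchant, location, device, timing) and trained models, but the principle is the same: score each transaction against an expected pattern and flag the deviations.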
By employing strong passwords, recognising phishing attempts, keeping software updated, and enabling two-factor authentication, individuals can protect themselves from online fraud. Businesses can mitigate risks by adopting robust cybersecurity measures, conducting regular employee training, performing security audits, and maintaining comprehensive incident response plans.
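Of the practices listed, two-factor authentication most commonly relies on time-based one-time passwords (TOTP, RFC 6238), the six-digit codes generated by authenticator apps. A minimal illustrative implementation, not a substitute for a vetted library:

```python
# Minimal TOTP (RFC 6238) sketch: the time-based one-time codes produced by
# common authenticator apps. Illustrative only; production systems should use
# a vetted library and protect the shared secret.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, digits: int = 6, step: int = 30) -> str:
    """Derive a one-time code from a shared secret and a Unix timestamp."""
    if for_time is None:
        for_time = int(time.time())
    counter = struct.pack(">Q", for_time // step)       # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector (SHA-1, 8 digits):
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

Because the code is derived from the current 30-second window, a stolen password alone is useless to an attacker without the device holding the secret.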
Combating online fraud is a joint responsibility of the public and private sectors. Public-private partnerships facilitate knowledge sharing on emerging threats and the most effective counter-fraud mechanisms. Governments and enterprises collaborate to provide cybersecurity training programmes, run public awareness campaigns and develop new technologies.
International collaboration is essential since cybercrime is borderless. Cross-border cooperation encompasses intelligence-sharing, harmonisation of legal frameworks, and joint operations.
The adoption of international cybersecurity standards helps ensure global safeguards against online fraud, with organisations such as INTERPOL promoting collaboration between nations and norms being set under the United Nations.
Dubai's growing digital economy faces a serious threat from online fraud, prompting proactive and responsive moves by regulatory authorities. The emirate benefits from a strong legal framework anchored in Federal Decree-Law No. 5 of 2012 and dedicated agencies, such as the Dubai Police Cyber Crime Unit and DESC, capable of effectively pursuing offenders.
Preventative strategies involve public awareness campaigns, technological developments such as AI and blockchain, and international cooperation. Individual vigilance, combined with organisational measures such as employee training and robust cybersecurity systems, further strengthens the fight against fraudsters.
As Dubai maintains its position as a global digital hub, it emphasises the need to combat cybercrime, demonstrating its commitment to economic growth and global security standards.
The rapid rise of e-commerce has transformed how we shop, offering unparalleled convenience and access to a global marketplace. However, this digital revolution has also opened the door to a surge in online scams.
In response to these growing concerns, the UAE has introduced a new tool called CheckMyLink to help residents and businesses distinguish legitimate websites from potential frauds.
The Growing Threat of Online Scams
According to recent statistics from the Dubai Police, online fraud cases have increased by 30 per cent over the past year. Cybercriminals have become increasingly sophisticated, creating convincing replicas of legitimate websites to steal personal and financial information from unsuspecting shoppers.
"The growth of e-commerce has been a double-edged sword," says Major General Jamal Al Jallaf, Director of the Criminal Investigation Department (CID) at Dubai Police. "While it has revolutionised the way we shop, it has also provided fertile ground for cybercriminals."
Introducing CheckMyLink
To combat this menace, the UAE government has launched CheckMyLink, a free online tool designed to verify the authenticity of websites.
Developed in collaboration with cybersecurity experts and law enforcement agencies, CheckMyLink provides a simple, user-friendly platform for checking the legitimacy of any e-commerce site before making a purchase.
“CheckMyLink is part of our broader strategy to safeguard digital transactions and build consumer confidence in online shopping,” explains Mohammed Al Kuwaiti, Head of Cybersecurity for the UAE government. “By offering a reliable method to verify websites, we aim to reduce the incidence of online fraud and protect our residents from falling victim to scams.”
How Does CheckMyLink Work?
Using CheckMyLink is straightforward: before making a purchase or entering sensitive information on a website, users simply submit the site's address to the tool.
CheckMyLink then cross-references the entered URL against a database of known legitimate websites and flagged fraudulent sites. It also analyses factors such as the website's SSL certificate, domain registration details and user reviews.
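As a rough illustration of that cross-referencing step, here is a hypothetical sketch. CheckMyLink's actual data sources, heuristics and scoring are not public; the domain lists and labels below are placeholders:

```python
# Hypothetical sketch of URL cross-referencing in the spirit of what the
# article describes. The allow/block lists and result labels are invented
# for illustration; a real service maintains large, continuously updated
# databases and also inspects the TLS certificate itself.
from urllib.parse import urlparse

KNOWN_LEGITIMATE = {"amazon.ae", "noon.com"}      # placeholder allow-list
FLAGGED_FRAUDULENT = {"amaz0n-deals.example"}     # placeholder block-list

def check_link(url: str) -> str:
    """Classify a URL against local allow/block lists plus one basic heuristic."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if host in FLAGGED_FRAUDULENT:
        return "fraudulent"
    if host in KNOWN_LEGITIMATE:
        return "legitimate"
    if parsed.scheme != "https":
        return "suspicious: no TLS"
    return "unknown: verify before entering payment details"

print(check_link("http://amaz0n-deals.example/checkout"))  # fraudulent
```

Note the look-alike domain in the block-list: substituting characters (the zero in "amaz0n") is a common tactic in the spoofed shopfronts the tool is designed to catch.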
Tips for Safe Online Shopping
While tools like CheckMyLink provide a valuable layer of protection, it's crucial for consumers to remain vigilant when shopping online.
Legal Protections in the UAE
The UAE has robust legal frameworks in place to combat cybercrime. Under the UAE Cybercrimes Law (Federal Decree-Law No. 5 of 2012), those found guilty of online fraud can face severe penalties, including imprisonment and hefty fines.
"In the UAE, we take cybercrime very seriously," says Dr Ahmad bin Saif Al Awadhi, a legal expert specialising in cyber law. "Victims of online fraud have legal avenues for recourse, and our law enforcement agencies are equipped to handle such cases efficiently."
Conclusion
As online shopping continues to grow, so does the risk of encountering fraudulent websites. By leveraging tools like CheckMyLink and following best practices for online safety, consumers can enjoy the benefits of e-commerce with peace of mind.
For more information on how to protect yourself from online scams, visit the Dubai Police’s Cybersecurity Awareness page or the UAE Cybersecurity Council’s website.
The US Supreme Court agreed to hear a bid by Nvidia to scuttle a securities fraud lawsuit accusing the artificial intelligence chipmaker of misleading investors about how much of its sales went to the volatile cryptocurrency industry.
The justices took up Nvidia's appeal after a lower court revived a proposed class action brought in California by shareholders against the company and its CEO, Jensen Huang.
The suit, led by the Stockholm, Sweden-based investment management firm E. Ohman J:or Fonder AB, seeks unspecified monetary damages.
Santa Clara, California-based Nvidia is a high-flying company that has become one of the biggest beneficiaries of the AI boom, and its market value has surged.
In 2018, Nvidia's chips became popular for cryptomining, the computationally intensive process of solving mathematical puzzles to secure cryptocurrencies such as bitcoin.
The plaintiffs in a 2018 lawsuit accused Nvidia and top company officials of violating a US law called the Securities Exchange Act of 1934 by making statements in 2017 and 2018 that falsely downplayed how much of Nvidia's revenue growth came from crypto-related purchases.
Those omissions misled investors and analysts who were interested in understanding the impact of cryptomining on Nvidia's business, the plaintiffs said.
US District Judge Haywood Gilliam Jr. dismissed the lawsuit in 2021 but the San Francisco-based 9th US Circuit Court of Appeals in a 2-1 ruling subsequently revived it.
The 9th Circuit found that the plaintiffs had adequately alleged that Huang made "false or misleading statements and did so knowingly or recklessly," allowing their case to proceed.
Nvidia urged the justices to take up its appeal, arguing that the 9th Circuit's ruling would open the door to "abusive and speculative litigation."
Nvidia in 2022 agreed to pay $5.5 million to US authorities to settle charges that it did not properly disclose the impact of cryptomining on its gaming business.
The justices agreed on June 10 to hear a similar bid by Meta's Facebook to dismiss a private securities fraud lawsuit accusing the social media platform of misleading investors in 2017 and 2018 about the misuse of its user data by the company and third parties. Facebook appealed after a lower court allowed a shareholder lawsuit led by Amalgamated Bank to proceed.
The Supreme Court will hear the Nvidia and Facebook cases in its next term, which begins in October.
For the first time in Dubai, 'Chief AI (Artificial Intelligence) Officers' have been appointed across 22 government entities. The initiative was approved by Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum, Crown Prince of Dubai and Chairman of The Executive Council.
In a post on his X account, Sheikh Hamdan stated that these appointments are "part of a future-driven vision focused on utilising AI in government work. This move will support Dubai’s journey and expertise, and transform its horizons in developing innovative solutions built on advanced technology".
He added: "The acceleration of AI, its tools and applications is a key pillar of the vision of His Highness Sheikh Mohammed bin Rashid Al Maktoum (Vice President and Prime Minister of the UAE and Ruler of Dubai) to position Dubai as a global hub in developing and deploying AI solutions".
Sheikh Hamdan concluded: "The appointment of the new Chief AI Officers in the Dubai government is a step towards achieving our vision for the future of government work, in line with the Dubai Universal Blueprint for AI (DUB.AI). We expect them to transform our vision into reality by accelerating the work, and doubling down on our efforts".
The Chief AI Officer position was established under DUB.AI, designed to enrich the quality of life and well-being of residents. Additionally, it supports Dubai's endeavour to become the most future-ready city, consolidating its leadership as a global hub for technology and innovation.
DUB.AI aims to cement the emirate’s position as a global hub for AI governance and legislation, while facilitating AI adoption across strategic sectors. Furthermore, the initiative bolsters Dubai's standing in the Global AI Readiness Index, where it presently holds a position in the top 10.
The newly appointed Chief AI Officers represent several government entities across Dubai, including: Community Development Authority in Dubai, Dubai Government Human Resources Department, Dubai Customs, Dubai Police, The Judicial Council, Dubai Civil Aviation Authority, Mohammed Bin Rashid Housing Establishment, Dubai Electricity and Water Authority, Digital Dubai Authority, General Directorate of Civil Defence in Dubai, Dubai Data and Statistics Establishment, Dubai Health Authority, Public Prosecution, Protocol Department in Dubai, Dubai’s Roads and Transport Authority, Dubai Culture & Arts Authority, Hamdan Bin Mohammed Smart University, Dubai Department of Economy and Tourism, Dubai Corporation for Ambulance Services, Department of Finance in Dubai, Endowments and Minors’ Trust Foundation (Awqaf Dubai), and Dubai Municipality.
The US Justice Department and the Federal Trade Commission (FTC) have reached an agreement that paves the way for potential antitrust investigations into Microsoft, OpenAI and Nvidia, according to a source familiar with the matter.
The move reflects growing regulatory scrutiny over the concentration of power in the artificial intelligence (AI) industry. Microsoft and Nvidia, both dominant players in their respective fields, are among the world's largest companies by market capitalisation, with Nvidia's market value recently surpassing $3 trillion.
Antitrust enforcers in the US have raised several concerns about AI, including the advantage Big Tech companies have due to their extensive access to data for training AI models, the impact of generative AI on the market for creative work and the potential for companies to use partnerships to bypass merger review processes.
This new division of responsibilities between the DOJ and FTC mirrors a 2019 agreement to split enforcement efforts against Big Tech, which led to the FTC pursuing cases against Meta and Amazon, and the DOJ suing Apple and Google. These cases are ongoing, and the companies have denied any wrongdoing.
Under the new agreement, the Justice Department will investigate Nvidia for potential antitrust violations, while the FTC will examine the conduct of OpenAI and Microsoft. Although OpenAI's parent organisation is a non-profit, Microsoft's $13 billion investment in its for-profit subsidiary gives it a significant stake. The arrangement, reached over the past week, is expected to be finalised in the coming days.
Nvidia holds approximately 80 per cent of the AI chip market, including custom AI processors made for cloud computing companies like Google, Microsoft, and Amazon. This market dominance allows Nvidia to report gross margins between 70 per cent and 80 per cent. Spokespersons for Nvidia and OpenAI declined to comment on the regulatory agreement, while Microsoft stated it takes its legal obligations seriously and is confident it has complied with them.
In January, the FTC ordered OpenAI, Microsoft, Alphabet, Amazon and Anthropic to provide information on recent investments and partnerships involving generative AI companies and cloud service providers. Additionally, the FTC launched an investigation into OpenAI in July last year over claims it had violated consumer protection laws by putting personal data and reputations at risk.
Last week, DOJ antitrust chief Jonathan Kanter expressed concerns at a Stanford University AI conference about the structures and trends in AI, highlighting that the technology's reliance on massive amounts of data and computing power can give dominant firms a substantial advantage. The DOJ and FTC, led by Chair Lina Khan, share jurisdiction over federal competition law and aim to avoid duplicative investigations.
Bill Baer, a former antitrust leader at both agencies, noted that each agency typically leads in areas where it has the most expertise, though occasionally, the heads of both agencies will decide on the division of responsibilities.
Additionally, the FTC is investigating Microsoft's $650 million deal with AI startup Inflection AI, scrutinising whether the deal was an attempt to circumvent merger disclosure requirements. The agreement, made in March, allowed Microsoft to use Inflection's models and hire most of the startup's staff, including its co-founders. Microsoft stated that the deal helped accelerate its work on Microsoft Copilot while allowing Inflection to continue pursuing its independent business goals as an AI studio.
In a significant legal victory for Google, the US District Court in Northern California has dismissed a class action lawsuit alleging that the tech giant improperly used personal data to train its artificial intelligence (AI) systems.
The ruling is seen as a reprieve for Google amid increasing scrutiny over its data practices and the potential implications for the broader tech industry. The class action lawsuit was initiated by a group of plaintiffs who claimed that Google violated privacy laws and infringed upon their personal rights by using their private information to enhance the capabilities of its AI algorithms.
The plaintiffs argued that Google's data usage practices were opaque and lacked consent, seeking compensation and stricter regulations to prevent such occurrences in the future. Google, however, maintained that its data practices were transparent and complied with existing legal frameworks. The company asserted that data anonymisation and aggregation techniques were employed to protect individual privacy.
Judge James Donato, presiding over the case, ruled in favour of Google, dismissing the class action on the grounds that the plaintiffs failed to demonstrate concrete harm or a direct violation of privacy laws. The court found that the plaintiffs lacked standing as they did not provide sufficient evidence to prove that Google's actions caused specific injuries to individuals.
Furthermore, the ruling emphasised that Google's data practices were consistent with its user agreements and privacy policies. The court also noted that the plaintiffs' claims were too broad and did not specify how individual plaintiffs were uniquely affected.
While the dismissal of the case is a significant win for Google, the plaintiffs have the option to amend their complaint and present a more detailed case. Additionally, the ruling does not preclude future lawsuits on similar grounds, particularly as public and regulatory scrutiny on data privacy continues to intensify.
For Google, the decision alleviates legal pressures but also underscores the need for clearer communication and stricter adherence to data privacy standards. The tech industry at large is watching closely, as this case sets a precedent for how AI training data cases might be handled in the future.
As AI technologies continue to advance, the balance between innovation and privacy remains a contentious issue. Governments and regulatory bodies worldwide are increasingly focused on updating and enforcing data protection laws. Companies like Google will need to navigate these regulations carefully to maintain public trust and avoid legal pitfalls.
For now, Google can claim a temporary victory, with the court having dismissed the lawsuit alleging illegal data collection by its AI chatbot Bard. However, this is unlikely to be the last legal challenge the tech industry faces in the realm of artificial intelligence and data privacy.
Google parent Alphabet must face a lawsuit worth up to 13.6 billion pounds ($17.4 billion) for allegedly abusing its dominance in the online advertising market, London’s Competition Appeal Tribunal (CAT) has ruled.
The lawsuit, which seeks damages on behalf of publishers of websites and apps based in the United Kingdom, is the latest case to focus on the search giant’s business practices.
Ad Tech Collective Action is bringing the claim on behalf of publishers who say they have suffered losses due to Google’s allegedly anti-competitive behaviour. Google last month urged the CAT to block the case, which it argued was incoherent. The company “strongly rejects the underlying allegations”, its lawyers said in court documents.
The CAT said in a written ruling that it would certify the case to proceed towards a trial, which is unlikely to take place before the end of 2025. The tribunal also emphasised that the test for certifying a case under the UK's collective proceedings regime, which is roughly equivalent to the US class action regime, is relatively low.
Ad Tech Collective Action’s case comes amid ongoing probes by regulators into Google’s adtech business, including by Britain’s Competition and Markets Authority and the European Commission. Google is also fighting two lawsuits in the United States, one brought by the Department of Justice and another by Texas and other states, accusing the company of anti-competitive conduct.
Google's lawyers said in documents for the CAT case that the company's "impact in the ad tech industry has been hugely pro-competitive". The decision is the latest case against a tech giant to be given the green light by the CAT, which already this year has certified a $3.8 billion case against Facebook parent Meta and a nearly $1 billion case against Apple.
McDonald's no longer has the exclusive right to use the "Big Mac" label for chicken burgers sold in the European Union after a ruling by the EU General Court. The American fast-food chain popularised the nickname for large burger sandwiches, registering it as an EU trademark in 1996.
But following a legal challenge from Supermac's, a rival chain in Ireland, other companies will now be free to use the name "Mac" to sell poultry products or in their chains' names. The court found that McDonald's could not show it had made genuine use of the trademark for a continuous period of five years.
"McDonald's loses the EU trade mark 'Big Mac' in respect of poultry products," the judges ruled. McDonald's noted in a statement that the court's decision did not affect its right to use the "Big Mac" trademark. But it does open the door for other chains to use the name, including Supermac's, the firm that brought the challenge.
Supermac's, founded in 1978 in Galway, sells beef and chicken burgers and chicken nuggets at 120 red-and-white branded outlets across Ireland. It has been embroiled in a seven-year legal battle with the US chain over the right to use brand terms including "Mac".
Supermac's managing director, Pat McDonagh, said the ruling displayed a "common-sense approach to the use of trademarks by large multinationals". Supermac's accuses McDonald's of "bullying" smaller firms through the defence of its trademarks, aiming to stifle competition.
The dispute goes back to 2017, when McDonald's blocked Mr McDonagh from registering the Supermac's name as an EU trademark, a step he had sought in order to expand outside Ireland. McDonagh countered that McDonald's was not using its "Mac" trademarks for restaurants, so other firms should not be blocked from using the term in their names.
"We knew when we took on this battle that it was a David versus Goliath scenario," McDonagh said. "We wholeheartedly welcome this judgement as a vindication of small businesses everywhere that stand up to powerful global entities."
McDonald's said: "Our iconic Big Mac is loved by customers all across Europe, and we’re excited to continue to proudly serve local communities, as we have done for decades." The chain did not say whether it planned to appeal against the decision.
The court's ruling revokes McDonald's trademark for restaurant services and for poultry products, retaining it only in reference to the meat burgers it originally covered.
Supermac’s remains in dispute with McDonald’s over the trademark in the UK, since post-Brexit EU trademark law no longer applies in the UK.
The Economic Integration Committee held its third meeting of 2024, chaired by Abdulla bin Touq Al Marri, Minister of Economy, and attended by Dr Thani bin Ahmed Al Zeyoudi, Minister of State for Foreign Trade, along with representatives from local economic development departments across all emirates.
The Committee reviewed the progress on the implementation of its previous meeting's agenda from March, discussing several crucial topics. A key focus was enhancing national efforts to improve trademark registration in the UAE in line with global best practices.
Abdulla bin Touq Al Marri stated: “In accordance with the directives of our wise leadership, the UAE has made significant strides towards fostering an exemplary legislative and economic framework, adhering to the highest global standards. This advancement is evident in the implementation and refinement of diverse policies and regulations across vital economic sectors, particularly those pertaining to emerging sectors like technology, innovation, intellectual property and trademarks.
Notably, the UAE has been named the premier global destination for initiating and conducting new economic ventures, according to the 2024 Global Entrepreneurship Monitor (GEM) report. This recognition aligns with the 'We the UAE 2031' vision, which aims to position the UAE as a compelling and influential economic hub within the next decade.”
Bin Touq emphasised the importance of the Economic Integration Committee and local economic development departments in supporting national efforts to enhance and update competitive and flexible economic laws and policies.
These efforts play a crucial role in supporting the UAE’s vision of transitioning to a knowledge-based and innovative new economic model. Additionally, they will attract foreign direct investment and instil confidence in investors, businessmen and capital owners within the national economy.
He highlighted the significant economic growth achieved by the UAE under the vision and guidance of the wise leadership in 2023. These accomplishments include GDP growth of 3.6 per cent at constant prices compared with 2022, reaching Dh1.68 trillion. Furthermore, the non-oil GDP at constant prices reached Dh1.25 trillion, growing by 6.2 per cent compared to 2022.
These figures solidify the UAE’s position as the fifth-largest economy globally in terms of real GDP growth. Additionally, the UAE has been ranked first in the region and 18th globally in the World Economic Forum's Travel and Tourism Development Index (TTDI) 2024, climbing seven places from its 25th global ranking in 2019.
Last week, the UAE signed an Economic Partnership Agreement with South Korea, marking the beginning of a new era of economic growth and promoting positive collaboration across various sectors such as trade, investment, and economy. This agreement aims to foster constructive cooperation with one of the world’s strongest economies.
The Committee reviewed the progress made in developing the National Economic Registry, utilising the latest technological solutions and artificial intelligence. The registry consists of two phases: the first links data from local licences issued by UAE emirates to companies and institutions and the second links data from licences issued by free zones to companies and institutions.
It will also integrate data of all licence types from all registration authorities in the UAE and free zones. Once complete, the registry will provide an integrated database of companies registered in the country, aligning with best practices and legislation and supporting the development of sectoral economic policies based on comprehensive, precise and continuous data.
The Committee further reviewed the UAE’s efforts to fortify the trademark registration and protection system, in light of the legislation implemented in alignment with best standards. These efforts play a pivotal role in enhancing the UAE's attractiveness for trademark-related investment and advancing the growth of its products in Emirati markets, ultimately strengthening the reputation of the national economy.
Notably, the total number of registered trademarks, owned by both local and international companies, has reached 216,937.
The UAE has taken significant strides in protecting intellectual property rights (IPR) by blocking over 1,000 illegal websites this year for violating cyber laws. These sites, which illegally broadcasted entertainment content owned by various media networks, were primarily blocked during Ramadan, a period marked by high demand for multimedia content.
According to Dr Abdulrahman Hassan Al Muaini, Assistant Undersecretary for the Intellectual Property Rights Sector at the Ministry of Economy (MoE), “since the implementation of the ‘InstaBlock’ initiative during the holy month of Ramadan, we have blocked a total of 1,117 websites that infringed upon intellectual property rights.”
This is a marked increase from 2023, when only 62 sites were blocked, underscoring the UAE's enhanced approach to IPR protection.
Types of Cybercrimes and Penalties in the UAE
Unauthorised Access: Gaining unauthorised access to computer systems or networks is a serious offense in the UAE. This includes hacking into systems to steal data or disrupt operations. Penalties can include imprisonment, fines, or both, depending on the severity of the offense.
Hacking: Hacking involves breaking into computer systems or networks without permission, often to steal or manipulate data. In the UAE, hacking is met with severe penalties, including long-term imprisonment and hefty fines, aimed at deterring such malicious activities.
Phishing: Phishing refers to fraudulent attempts to obtain sensitive information such as usernames, passwords, and credit card details by masquerading as a trustworthy entity in electronic communications. Phishing activities are punishable by imprisonment and significant fines, reflecting the serious nature of this cybercrime.
Cyber Fraud: Cyber fraud encompasses various deceptive practices carried out online, including identity theft, online scams and financial fraud. The penalties for cyber fraud in the UAE are stringent, including imprisonment and substantial fines, to protect individuals and businesses from financial losses and reputational damage.
Dissemination of Malicious Software: The creation, distribution, or use of malicious software (malware) to harm computer systems, steal data, or disrupt operations is strictly prohibited in the UAE. Offenders can face severe penalties, including long-term imprisonment and substantial fines, to curb the spread of malware and protect cybersecurity.
IPR in UAE: Cornerstone of Innovation and Economic Growth
IPR serves as a cornerstone in protecting creative expressions, technological advancements, and unique brands, fostering innovation and economic growth. In the UAE, the legal framework for IPR encompasses Copyrights, Trademarks, and Patents, each playing a crucial role in safeguarding the rights of creators, inventors, and businesses.
Copyrights in the UAE
Governed by Federal Decree-Law No. 38/2021, copyright in the UAE protects innovative literary, artistic, and scientific creations. Key aspects include:
Definition of Authorship and Joint Authorship: Recognises individuals who create copyrightable works and allows creators of all ages to register their works.
Authorisation for Use of the Work: Copyright owners have exclusive rights over their works and can delegate rights management to professional associations.
Copyright Registration Process: Overseen by the Ministry of Economy’s Department of Copyright, the process is efficient and user-friendly.
Scope of Copyrightable Works: Includes a wide range of creative works such as literary works, software, audio and video creations, and more.
Rights Enjoyed by Copyright Owners: Includes economic and moral rights, with economic rights lasting for the author's lifetime plus 50 years after death.
Penalties for Infringement: Strict penalties for violations, including imprisonment and fines.
Trademarks in the UAE
Trademark protection is governed by Federal Decree-Law No. 36/2021, providing a robust framework for the registration, protection, and enforcement of trademarks. Key aspects include:
Definition of Trademark: Includes signs, names, words, symbols, and more that distinguish goods or services.
Trademark Registration Process: Managed by the Ministry of Economy, the process is accessible and covers multiple categories.
Trademark Protection Period and Renewal: Trademarks are protected for ten years and can be renewed.
Cancellation and Disputes: These can be brought before the Competent Court or resolved through the Trademarks Grievances Committee.
Assignment, Transfer, and Licensing: Trademarks can be assigned, transferred, or licensed.
Patents in the UAE
Patent protection is governed by Federal Law No. 11/2021, ensuring the protection of intellectual property rights related to inventions. Key aspects include:
Patent Validity and Examination: Requires formal and substantive examinations for novelty, inventive steps, and industrial applicability.
Patentability Requirements: Inventions must meet specific criteria and certain categories are excluded.
Patent Registration Process: Involves application submission, fee payment, and compliance with regulations.
Rights and Duration: Patents are protected for twenty years from the application filing date.
Patent Licensing and Transfer: Can be licensed or transferred to others, subject to registration.
Enhancing IPR Protection: Initiatives and Technologies
The UAE's proactive stance on IPR protection, highlighted by the significant increase in blocked websites, is part of a broader strategy to foster a secure and fair digital environment.
The 'InstaBlock' initiative provides a specialised instant-response service for copyright infringement complaints, demonstrating the ministry's capability to act swiftly and decisively. Additionally, tools like 'LiveBan' are designed to handle infringements in live online broadcasting.
By leveraging advanced technologies and a robust legal framework, the UAE aims to safeguard the interests of content creators and media networks, ensuring their works are protected from unauthorised use and distribution. This approach helps preserve the economic value of creative works and promotes a culture of respect for intellectual property rights.
Conclusion
The UAE's efforts to block over 1,000 illegal websites this year, particularly during Ramadan, underscore the country's commitment to intellectual property protection. The significant increase in blocked sites compared to last year highlights the effectiveness of the comprehensive measures implemented by the Ministry of Economy.
These efforts are crucial in maintaining a fair and secure digital ecosystem, protecting the rights of content creators, and promoting the legal and ethical consumption of multimedia content.
The UAE’s robust legal framework for Copyrights, Trademarks and Patents continues to foster innovation, creativity and economic growth, reinforcing its position as a global hub for creativity and the knowledge economy.
Dubai has unveiled what it describes as the world’s first and most extensive Artificial Intelligence (AI) prompt engineering training initiative. Dubbed ‘One Million Prompters’, the programme sets out to train one million individuals in prompt engineering within the next three years.
The announcement of this initiative was made by His Highness Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum, Crown Prince of Dubai, Chairman of the Executive Council of Dubai, and Chairman of the Board of Trustees of the Dubai Future Foundation (DFF).
“With this global initiative, overseen by DFF, we aim to prepare, develop, and empower competencies with the skills needed to harness the potential of AI applications to advance innovation, progress, and economic growth,” His Highness said, stressing that keeping pace with technology trends is key to the success of governments and societies.
His Highness Sheikh Hamdan added: “We are experiencing a tremendous acceleration in technological progress, which requires new skills in labour markets. Coding was formerly in demand, but today, prompt engineering has become one of the most promising skills.”
His Highness said: “We want to be the most future-ready city and to continue preparing for the AI era by developing expertise and skills that support global technological transformation, placing Dubai at the forefront of innovation.”
‘One Million Prompters’ was launched in line with the Dubai Universal Blueprint for Artificial Intelligence, aiming to accelerate the adoption of AI applications.
It is the first initiative of its kind dedicated to developing expertise and competencies in AI prompt engineering: the craft of writing precise, effective instructions that steer AI systems towards desired outcomes in tasks ranging from generating creative content to solving complex challenges.
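The contrast between a vague request and a well-engineered prompt can be illustrated with a short, hypothetical sketch. The helper function and field names below are illustrative only, not drawn from any specific programme or API:

```python
# Hypothetical illustration of prompt engineering: the same request,
# phrased vaguely versus assembled from explicit, structured instructions.

def build_prompt(task, audience, fmt, constraints):
    """Assemble a structured prompt from its components."""
    lines = [
        f"Task: {task}",
        f"Audience: {audience}",
        f"Output format: {fmt}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

# A vague prompt leaves the model to guess scope, tone and format.
vague = "Write something about AI."

# A precise prompt states the task, audience, format and constraints.
precise = build_prompt(
    task="Summarise the benefits of AI adoption for small businesses",
    audience="non-technical business owners",
    fmt="three bullet points, under 20 words each",
    constraints=["avoid jargon", "use a neutral tone"],
)

print(precise)
```

The structured version gives an AI system far less room for misinterpretation, which is the skill such training programmes aim to teach.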
Accredited Certifications
The initiative will follow an extensive programme that includes training courses to upskill individuals in AI and prompt engineering, offering them accredited certifications to validate their expertise and help them stand out. Additionally, it will host various competitions and provide a platform for talents to network and collaborate with experts across the technology ecosystem.
The announcement was made as His Highness Sheikh Hamdan attended the final round of the Global Prompt Engineering Championship, the world’s biggest AI prompt engineering challenge. The championship concluded on Tuesday after two days of contests in which participants competed for total prizes of Dh1 million. It was organised by the Dubai Centre for Artificial Intelligence (DCAI) and overseen by DFF.
In the presence of Her Highness Sheikha Latifa bint Mohammed bin Rashid Al Maktoum, Chairperson of the Dubai Culture and Arts Authority (Dubai Culture) and Member of the Dubai Council, His Highness Sheikh Hamdan honoured the Global Prompt Engineering Championship's winners across three categories: Coding, Literature and Art.
The winner of the Coding category was Ajay Cyril from India. Megan Fowkes from Austria won the Art segment, while Aditya Nair from India was victorious in the Literature category.
His Highness Sheikh Hamdan directed that the second edition of the Global Prompt Engineering Championship, to be held next year, should be expanded to include more categories in areas such as software, videos and other key fields.
His Highness said: “This global championship witnessed outstanding performances from some of the world's most promising talents in prompt engineering. We look forward to attracting a greater number of competitors from around the world, across new categories and sectors. We aim for this competition to become an annual global platform that empowers talents and highlights the importance of cooperation among stakeholders to shape a better future for societies through technological progress.”
Also attending the event were Omar Sultan Al Olama, Minister of State for Artificial Intelligence, Digital Economy and Remote Work Applications; Hala Badri, Director-General of the Dubai Culture and Arts Authority (Dubai Culture); Mohammed Ali Rashid Lootah, President and CEO of Dubai Chambers; Khalfan Juma Belhoul, CEO of the Dubai Future Foundation; and Saeed Mohammed Al Gergawi, Vice President of the Dubai Chamber of Digital Economy.
OpenAI announced it would pull one of the ChatGPT voices named ‘Sky’ after it created controversy for its resemblance to the voice of actress Scarlett Johansson in ‘Her’, a movie about artificial intelligence.
“We’ve heard questions about how we chose the voices in ChatGPT, especially Sky,” the Microsoft-backed company posted on X. “We are working to pause the use of Sky while we address them.”
The 2013 sci-fi film ‘Her’ is about a man who falls in love with an artificial intelligence system named Samantha, voiced by Johansson.
The news comes one week after OpenAI debuted a range of audio voices for ChatGPT, its viral chatbot, a new AI model called GPT-4o, and a desktop version of ChatGPT.
Users watching the live demonstration of ChatGPT’s audio capabilities immediately began to post on social media that the ‘Sky’ voice sounded like Johansson in the movie. OpenAI CEO Sam Altman seemingly referenced the film in a post on X, simply writing “her.”
In a Sunday blog post, OpenAI wrote that the chatbot’s five voices (Breeze, Cove, Ember, Juniper and Sky) were selected through a casting and recording process that spanned five months. Casting professionals received about 400 submissions from voice and screen actors and whittled that number down to 14, according to the company. Then an internal team selected the final five.
“Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” the company wrote. “To protect their privacy, we cannot share the names of our voice talents.”
OpenAI plans to test Voice Mode in the coming weeks, with early access for paid subscribers to ChatGPT Plus, according to recent blog posts, and it also plans to add new voices.
OpenAI also said the new model can respond to users’ audio prompts “in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation.”
The company, founded in 2015, has been valued at more than $80 billion by investors. It’s under pressure to lead the generative AI market while finding ways to make money as it spends massive sums on processors and infrastructure to build and train its models.
OpenAI, Microsoft and Google are at the helm of a generative AI gold rush as companies in seemingly every industry race to add AI-powered chatbots and agents to avoid being left behind by competitors.
Earlier this month, OpenAI rival Anthropic announced its first enterprise offering and a free iPhone app.
A record $29.1 billion was invested across nearly 700 generative AI deals in 2023, an increase of more than 260 per cent from the prior year, according to PitchBook. The market is predicted to top $1 trillion in revenue within a decade.
In last week’s live presentation, OpenAI team members demonstrated ChatGPT’s audio capabilities. For example, the chatbot was asked to help calm someone before a public speech.
OpenAI researcher Mark Chen demonstrated the model’s ability to tell a bedtime story and asked it to change the tone of its voice to be more dramatic or robotic.
He even asked it to sing the story. The team also asked it to analyse a user’s facial expression to comment on the emotions the person may be experiencing.
“Hey there, what’s up? How can I brighten your day today?” ChatGPT’s audio mode said when a user greeted it.
WikiLeaks' founder Julian Assange's battle to avoid extradition to the United States received a huge boost on May 20 when London's High Court ruled that US assurances over his case were unsatisfactory and he would get a full appeal hearing.
In March, the High Court provisionally gave Assange, 52, permission to appeal on three grounds. But it gave the US the opportunity to provide satisfactory assurances that it would not seek the death penalty and would allow him to seek to rely on a First Amendment right to free speech in a trial.
In a short ruling, two senior judges said the US submissions were not sufficient and said they would allow the appeal to go ahead.
Assange has been indicted on 17 espionage charges and one charge of computer misuse over his website’s publication of a trove of classified US documents almost 15 years ago.
At Monday's hearing, the two judges formally granted permission to appeal, meaning Assange will now be able to bring a full appeal before the High Court in London.
Assange has been engaged in a 12-year legal battle to avoid extradition from the UK. A large crowd gathered outside the High Court ahead of the decision.
He was not in court for the hearing but his wife Stella, with whom he has two children aged five and seven, was present to hear the decision.
The WikiLeaks founder fled to the Ecuador embassy in London in 2012 while he was facing extradition to Sweden, where he was being investigated after a rape allegation was made against him two years earlier.
He has been battling extradition to the US since 2019 and is currently being held in the maximum-security Belmarsh Prison in London.
Assange’s lawyer Edward Fitzgerald said judges should not accept the assurance given by US prosecutors that he could seek to rely upon the rights and protections given under the First Amendment, as a US court would not be bound by this.
"We say this is a blatantly inadequate assurance," he told the court. Fitzgerald accepted a separate assurance that Assange would not face the death penalty, saying the US had provided an "unambiguous promise not to charge any capital offence".
The US government says Assange’s actions went way beyond those of a journalist gathering information, amounting to an attempt to solicit, steal and indiscriminately publish classified government documents.
James Lewis, representing the US authorities, said Assange’s conduct was “simply unprotected” by the First Amendment.
“No one, neither US citizens nor foreign citizens, are entitled to rely on the First Amendment in relation to publication of illegally obtained national defence information giving the names of innocent sources, to their grave and imminent risk of harm,” he told the court.
US prosecutors allege that Assange encouraged and helped US Army intelligence analyst Chelsea Manning to steal diplomatic cables and military files that WikiLeaks published.
Assange's lawyers say he could face up to 175 years in prison if convicted, though US authorities have said any sentence would likely be much shorter.
In a significant move aimed at regulating digital media and enhancing transparency in the UAE's media landscape, the Federal Decree-Law No. 55 of 2023 has been enacted.
This comprehensive legislation marks a pivotal moment in the evolution of media governance, setting clear guidelines for digital media activities and outlining the responsibilities of media practitioners and platforms.
The proliferation of digital media platforms and the widespread dissemination of information online have led to the need for robust regulation to safeguard public interest, uphold journalistic standards and combat misinformation.
Recognising this imperative, the UAE government embarked on drafting legislation to address the unique challenges posed by the digital media landscape.
Key Provisions of Federal Decree-Law No. 55/2023
Registration Requirement for Digital Media Outlets: One of the cornerstone provisions of the decree-law is the requirement for digital media outlets to register with the relevant authorities. This registration process aims to ensure accountability and transparency in the digital media sector, enabling authorities to monitor the activities of media entities operating within the UAE.
Editorial Responsibility and Professional Standards: The decree-law underscores the importance of upholding editorial responsibility and adherence to professional standards in digital media content production. Media practitioners are required to adhere to principles of accuracy, fairness and objectivity, thereby safeguarding the credibility of digital media platforms.
Combating Misinformation and Fake News: In line with global efforts to combat misinformation and fake news, the decree-law contains provisions aimed at curbing the dissemination of false or misleading information. Media outlets are obligated to verify the accuracy of information before publishing or sharing it, thereby promoting responsible journalism and safeguarding public trust.
Protection of Privacy and Personal Data: Recognising the importance of privacy rights in the digital age, the decree-law includes provisions to protect the privacy and personal data of individuals. Media outlets are required to adhere to strict data protection regulations and obtain consent before collecting, processing or disclosing personal information.
Enforcement Mechanisms and Penalties: To ensure compliance with the provisions of the decree-law, robust enforcement mechanisms have been established, empowering regulatory authorities to take appropriate action against violations. Penalties for non-compliance may include fines, suspension of operations, or revocation of licences, depending on the severity of the offense.
Impact and Implications: The enactment of Federal Decree-Law No. 55 of 2023 represents a significant milestone in the regulation of digital media in the UAE. By establishing clear guidelines and accountability mechanisms, the decree-law aims to promote responsible journalism, protect public interest and foster a vibrant and trustworthy media ecosystem.
As the digital media landscape continues to evolve, regulatory frameworks must adapt to address emerging challenges and safeguard the integrity of the media environment.
Federal Decree-Law No. 55/2023 reflects the UAE government's commitment to promoting transparency, accountability and professionalism in the digital media sector, ensuring that media practitioners and platforms operate in accordance with the highest standards of ethics and integrity.
Spotify has been hit with a lawsuit in New York federal court that accuses the streaming giant of underpaying songwriting royalties for tens of millions of songs.
The lawsuit against Spotify USA was filed in New York on Thursday by the Mechanical Licensing Collective (MLC), a non-profit that collects and distributes royalties owed from music streaming services.
The suit alleges that Spotify on March 1, without advance notice, reclassified its paid subscription services, resulting in a nearly 50 per cent reduction in royalty payments to MLC.
The complaint cites a Billboard report that estimates Spotify's move could cost songwriters nearly $150 million over the next year.
"Spotify paid a record amount to publishers and societies in 2023 and is on track to pay out an even larger amount in 2024," a Spotify spokesperson said in a statement. "We look forward to a swift resolution of this matter."
MLC chief executive Kris Ahrend said in a statement that the collective "takes seriously its legal responsibility to take action on behalf of our members when we believe usage reporting and royalty payments are materially incorrect."
US law allows streaming services like Spotify to obtain a blanket "compulsory license" to copyrighted music at a specific royalty rate. The US Copyright Office appointed MLC to collect royalties for songwriters and music publishers.
The group's lawsuit said that after adding audiobook access, Spotify incorrectly recharacterised its service in a way that would significantly reduce the amount of royalties it owed under the license, "even though there has been no change to (Spotify's) Premium plan and no corresponding reduction to the revenues that Spotify generates."
"Spotify's attempt to reduce its mechanical royalties has resulted in a clear breach of its obligations," the complaint said. The MLC asked the court for an unspecified amount of monetary damages for Spotify's alleged unpaid royalties and late fees.
The case is Mechanical Licensing Collective v. Spotify USA Inc, US District Court for the Southern District of New York, No. 1:24-cv-03809.
In an age where digital connectivity dominates our lives, the prevalence of cybercrimes, such as online bullying and privacy violations, has become a growing concern worldwide, including in the United Arab Emirates (UAE).
With the rise of social media platforms and online communication channels, individuals are increasingly vulnerable to various forms of cyber threats, ranging from harassment and defamation to identity theft and financial fraud.
To address these challenges and protect the rights of individuals in the digital sphere, the UAE has implemented robust cybercrime laws and established specialised agencies to combat online offenses.
Understanding these laws and knowing how to report cybercrimes are essential steps towards ensuring a safe and secure online environment for all residents.
Cybercrime Laws in the UAE
The UAE has enacted comprehensive cybercrime laws to address various types of online offenses and protect individuals' rights in the digital space.
One of the primary legal instruments governing cybercrimes in the country is Federal Decree-Law No. 5 of 2012 on Combating Cybercrimes, commonly known as the Cybercrime Law.
This law criminalises a wide range of cyber offenses, including:
Online Bullying and Harassment: The Cybercrime Law prohibits the use of electronic communication channels to engage in bullying, harassment, or defamation of individuals. Offenders can face imprisonment and significant fines for such offenses.
Privacy Violations: Unauthorised access to, interception of, or disclosure of electronic communications or personal data without consent is considered a violation of privacy under the Cybercrime Law.
This includes actions such as hacking into email accounts, spreading private information online, or illegally obtaining sensitive data.
Identity Theft: The Cybercrime Law also criminalises identity theft and impersonation, including the fraudulent use of another person's identity or the creation of fake online profiles for malicious purposes.
Financial Fraud: Engaging in online scams, phishing schemes, or other forms of financial fraud is punishable under the Cybercrime Law. This includes fraudulent activities aimed at deceiving individuals or organisations for financial gain.
How to Report Cybercrimes
If you are a victim of online bullying, privacy violation, or any other form of cybercrime in the UAE, it is essential to report the incident to the appropriate authorities promptly. Here's how you can file a cybercrime report:
Contact the UAE's Cybercrime Reporting Centre: The UAE's Cybercrime Reporting Centre, operated by the Telecommunications Regulatory Authority (TRA), serves as the primary point of contact for reporting cybercrimes.
You can reach the centre via phone, email, or online form to file a complaint and seek assistance.
Provide Detailed Information: When reporting a cybercrime, provide as much detailed information as possible about the incident, including the nature of the offense, any relevant evidence (such as screenshots or emails), and the identities of the perpetrators, if known.
Cooperate with Law Enforcement: After filing a cybercrime report, law enforcement authorities may launch an investigation into the matter.
It is essential to cooperate fully with the authorities and provide any additional information or assistance they may require during the investigation process.
Seek Legal Advice: If you believe your rights have been violated or you have suffered damages as a result of a cybercrime, consider seeking legal advice from a qualified attorney in the UAE.
A legal professional can help you understand your rights, navigate the legal process and pursue appropriate legal remedies.
In conclusion, cybercrimes pose significant threats to individuals' safety, privacy and security in the digital age.
By understanding the cybercrime laws in the UAE and knowing how to report cybercrimes effectively, residents can play a crucial role in combating online offenses and promoting a safer online environment for all.
Abu Dhabi-based Core42 has unveiled Jais Chat, a bilingual AI mobile application now available for download on iOS. This chatbot, developed to meet the growing demand for Generative AI capabilities, is poised to revolutionise digital interactions within the region.
Core42, a subsidiary of Abu Dhabi’s G42 artificial intelligence and cloud company, is a leading provider of sovereign cloud, cybersecurity, and AI infrastructure solutions.
Jais Chat's interface resembles popular AI interfaces such as OpenAI’s ChatGPT and Microsoft’s Copilot, providing users with a familiar yet advanced platform.
Tailored to meet the expanding usage of GenAI, Jais Chat enables users to access information, find solutions and engage in seamless conversations using various prompts.
Leveraging G42’s Arabic large language model, Jais, developed in collaboration with Mohamed bin Zayed University of Artificial Intelligence and Silicon Valley-based Cerebras Systems, Jais Chat sets a new standard for Arabic language processing.
“With its Arabic-first approach, Jais is redefining how bilingual individuals interact with technology,” commented Andrew Jackson, Executive Vice President and Chief AI Officer at Core42. “Jais Chat represents a significant step forward in our mission to democratize AI access worldwide.”
Core42 has announced plans for future iterations of Jais Chat, which will include enhanced functionalities such as document processing, voice conversation capabilities, and enterprise support with customisable subscription models.
The app’s name, Jais, pays homage to the UAE’s highest peak in the Emirate of Ras Al Khaimah, symbolising its ambition to achieve new heights in AI innovation.
At its core lies Jais 30B, hailed as the world’s most performant Arabic Large Language Model (LLM).
Trained on a vast dataset comprising 126 billion Arabic tokens, 251 billion English tokens and 50 billion code tokens, Jais Chat demonstrates strong proficiency and accuracy in Arabic language processing, rivalling top-performing English language models of similar magnitude.
Jackson revealed that “Since Jais' inception in August 2023, the response has been overwhelmingly positive. With the recent launch of JAIS 30B, we’ve witnessed a significant enhancement in its performance metrics.”
Key Features
Bilingual Capability: Fluent in both Arabic and English.
Cultural and Linguistic Sensitivity: Engineered with an Arabic-centric model for efficient processing of Arabic text.
Unique Features
Generative AI Power: Capable of summarisation, content generation and information retrieval with an Arabic-first approach.
Exciting updates in the pipeline for Jais Chat include document processing, customisable user settings, voice conversation capabilities, and an enterprise support and subscription model tailored to businesses seeking advanced functionalities.
Despite Arabic being spoken by approximately 400 million people worldwide, its representation in AI developments has historically been limited.
Jais Chat aims to bridge this gap by offering a cutting-edge platform that caters to the unique linguistic and cultural needs of Arabic speakers, marking a significant milestone in the evolution of AI technology.
Jais Chat’s launch opens up new possibilities for the region, promising to revolutionise government communications, elevate customer service automation and empower workforces across various sectors.
Dubai Police have launched the 'on-the-go' initiative as part of efforts to enhance accessibility and convenience for the public. Whether it's a minor car accident or the need to report a crime, the initiative offers swift assistance and services to residents and visitors alike.
Partnering with fuel supply companies like ENOC, ADNOC and Emarat, Dubai Police brings these services directly to motorists. They can report minor traffic incidents, hit-and-runs, request police assistance, vehicle repairs, or report lost and found items.
Utilising smart devices and advanced technology, this initiative handles various services and procedures on the streets, eliminating the need for physical visits to police stations, thus making the process more convenient.
Operating across 138 service stations in the emirate, the 'on-the-go' initiative provides a range of services, including vehicle repairs, accident reports, police assistance and lost and found services. Fuel station personnel assist motorists in reporting minor accidents and obtaining accident reports, reducing waiting times and assisting police patrols in maintaining traffic flow.
Motorists can get their vehicles repaired after reporting accidents at select stations, with some eligible for free repairs, such as seniors, people with disabilities and pregnant women. Others can benefit from the service for a fee.
Additionally, motorists can report lost/found items through the Dubai Police Smart app, streamlining the process and reducing time and effort. Residents can also report cybercrimes or suspicious activities through the app, website, or at Smart Police Stations (SPS) for prompt assistance.
The Police Eye service allows residents to report crimes for enhanced public safety and community well-being, available in six languages through the Dubai Police app and website.
TikTok took legal action against the US federal government on Tuesday, aiming to thwart a newly enacted law that mandates its China-based parent company to divest the popular video-sharing app within the next year or face a complete ban in the US.
The lawsuit, filed in a federal appeals court in Washington DC, seeks a court order to halt the enforcement of the bipartisan legislation, dubbed the Protecting Americans From Foreign Adversary Controlled Applications Act.
This law, signed by President Biden last month after swift approval by Congress, has been challenged by TikTok as "unconstitutional." The company argues that divesting within the mandated 12-month timeline is "simply not possible: not commercially, not technologically, not legally."
According to the lawsuit, TikTok asserts that the Act will inevitably lead to the shutdown of the platform by January 19, 2025, affecting the 170 million Americans who use it for communication purposes. The company is urging the court to declare that the law violates the US Constitution and to grant any necessary relief.
A spokesperson for the White House referred inquiries to the Justice Department, which declined to comment on the lawsuit. Meanwhile, representatives for the House Select Committee on China, which backed the bill, have not responded to requests for comment.
The law mandates ByteDance, TikTok's parent company, to divest its stake in the app by January 19, 2025, or one day before President Biden’s term concludes. The President has the option to extend this window by three months if satisfactory progress is being made towards a deal.
Tuesday’s legal action is expected to halt this timeline and potentially delay a ban for several years, as reported by NBC News.
TikTok alleges that the Chinese government has indicated it would not allow divestment of the recommendation engine crucial for TikTok's success in the US.
Additionally, the company claims that relocating its source code to the US would be a lengthy process, requiring years and a new team of engineers to manage.
The outcome of the lawsuit may hinge on the level of national security concerns that prompted Congress to pass the law. Gautam Hans, associate clinical professor of law at Cornell University, suggests that TikTok stands a strong chance, citing potential First Amendment issues with the law.
Critics have accused TikTok of serving as a tool for the Chinese Communist Party, facilitating activities ranging from election interference to promoting terrorist propaganda and exacerbating teenage mental health issues.
Despite TikTok's efforts to address these concerns and ensure platform security, critics remain adamant about the app's threat to national security.
Jacob Helberg, a member of the US-China Economic and Security Review Commission, dismisses TikTok's lawsuit as lacking seriousness, emphasising the documented ties between ByteDance and the CCP.
Last year's resurgence in calls for a US ban on TikTok was fueled by concerns about its content moderation policies. These concerns escalated following instances where pro-Palestinian content gained substantial traction, as well as a trend where users shared videos endorsing terrorist rhetoric.
In March, the Office of the Director of National Intelligence concluded that TikTok had been used by the Chinese Communist Party to influence US elections, engaging in malign influence operations.
Despite TikTok's significant economic contributions to the US economy, including a reported $24.2 billion in 2023, the divestiture law moved forward despite the company's extensive lobbying efforts.
The prospect of a forced sale has attracted interest from various parties, including former Treasury Secretary Steven Mnuchin and former Activision-Blizzard CEO Bobby Kotick. Mnuchin has reportedly been presenting potential investors with plans to acquire TikTok and rebuild its recommendation algorithm within the US, potentially circumventing China's strict technology export regulations.
In an era where the click of a mouse can unleash chaos and havoc, the United Arab Emirates (UAE) stands at the forefront of safeguarding its digital realm.
With the rapid expansion of technology comes an inevitable rise in cybercrime, prompting the UAE to enact stringent laws and regulations to combat this evolving threat.
Zero Tolerance Policy
The UAE leaves no stone unturned in its battle against cybercriminals. Under the Cybercrime Law (Federal Law No. 5 of 2012), a comprehensive legal framework is in place to address a wide range of cyber offenses, from hacking and phishing to online fraud and identity theft.
This legislation underscores the UAE's unwavering commitment to maintaining the integrity and security of its digital infrastructure.
Swift Justice
Cybercrime perpetrators beware: the UAE justice system is swift and uncompromising. Offenders face severe penalties, including hefty fines and lengthy prison sentences, depending on the nature and severity of their crimes.
The Cybercrime Law empowers law enforcement agencies to investigate, prosecute and punish cyber offenders swiftly and effectively.
Reporting cybercrime in the UAE is a crucial step in combating digital threats and protecting yourself and others from online harm. Here's a guide on how to report cybercrime in the UAE:
Contact UAE Cybercrime Reporting Authorities
Police: The first point of contact for reporting cybercrime in the UAE is typically the police. You can reach out to the nearest police station or contact the Dubai Police Cyber Crime Department directly.
Telecommunications Regulatory Authority (TRA): The TRA oversees telecommunications and Internet-related issues in the UAE. They also handle cybercrime complaints and provide assistance and guidance on reporting procedures.
Provide Detailed Information
Follow Reporting Procedures
Cooperate with Authorities
Seek Legal Advice if Necessary
Despite the challenges posed by cybercrime, the UAE remains committed to fostering innovation and digital transformation. With initiatives like the Dubai Cyber Security Strategy and the Abu Dhabi Digital Authority, the UAE aims to create a secure and resilient digital ecosystem that enables innovation while safeguarding against cyber threats.
In the digital age, cybersecurity is paramount, and the UAE stands firm in its resolve to combat cybercrime and protect its digital citizens. With robust laws, swift justice, international cooperation and a commitment to innovation, the UAE sets a shining example of proactive cybersecurity governance in the global arena.
Cybercheck, a software hailed for its role in aiding investigations and convictions in serious criminal cases, is now facing legal scrutiny. While its founder claims over 90 per cent accuracy, defense lawyers allege perjury and misinformation regarding its efficacy and application.
Law enforcement agencies in states from Colorado to New York have increasingly relied on Cybercheck, an artificial intelligence tool, for assistance in solving murder cases, human trafficking crimes, cold cases and manhunts.
However, as its usage expands, doubts regarding its accuracy and transparency have emerged, particularly from defense attorneys who question its methodology and lack of independent validation.
The software, developed by Adam Mosher, purportedly utilises machine learning to analyse vast online data, including social media profiles and publicly available information, to aid in suspect identification and crime scene analysis.
Mosher claims a remarkable accuracy rate exceeding 90 per cent, asserting that Cybercheck streamlines investigations that would otherwise demand hundreds of human hours. As of last year, it had been deployed in nearly 8,000 cases across 40 states and nearly 300 agencies.
However, legal challenges have arisen, casting doubt on Cybercheck's reliability. In a New York case, a judge excluded Cybercheck evidence due to its unproven reliability and acceptance. Similarly, in Ohio, a judge blocked its analysis when Mosher declined to disclose its methodology.
Critics argue that the lack of transparency surrounding Cybercheck's algorithms violates defendants' due process rights. In a recent motion filed in an Ohio robbery case, defense lawyers demanded access to Cybercheck's proprietary code and algorithm, alleging that Mosher misled authorities about his expertise and the software's usage.
Mosher's refusal to provide access to Cybercheck's inner workings has intensified skepticism. The Canadian company behind Cybercheck, Global Intelligence Inc., has remained silent, citing ongoing legal proceedings.
Despite these challenges, law enforcement agencies continue to utilise Cybercheck, often under contracts worth thousands of dollars. In one instance, Akron signed a $25,000 agreement for Cybercheck's services.
In the Akron case, where two defendants were charged with murder, Cybercheck reportedly produced a report linking them to the crime scene through online data analysis. However, the defense has raised concerns about the report's credibility, highlighting discrepancies and lack of verifiable evidence.
At a hearing, Mosher claimed a high accuracy rate for Cybercheck's conclusions, yet the methodology behind this assertion remains unclear. Additionally, Mosher admitted that Cybercheck has never undergone peer review, further fueling doubts about its reliability.
As legal battles over Cybercheck's admissibility continue, questions persist regarding its role in shaping criminal investigations and court proceedings.
Elon Musk's carmaker Tesla has sued an Indian battery maker for infringing its trademark by using the brand name "Tesla Power" to promote its products, asking a New Delhi judge for damages and a permanent injunction against the company.
Tesla in a hearing at the Delhi High Court this week said the Indian company had continued advertising its products with the "Tesla Power" brand despite a cease-and-desist notice sent in April 2022, according to details of the proceedings posted on the court website on Friday.
During the hearing, the Indian company, Tesla Power India Pvt Ltd, argued its main business is to make "lead acid batteries" and it has no intention of making electric vehicles. The judge allowed the Indian firm three weeks to submit written responses after it handed over a set of documents in support of its defence, the court record shows.
Musk's Tesla is incorporated in Delaware, and it has accused the Indian company of using trade names "Tesla Power" and "Tesla Power USA". The court record included screenshots of a website that showed that Tesla Power USA LLC was also headquartered in Delaware and had been "acknowledged for being a pioneer and leader in introducing affordable batteries" with "a very strong presence in India".
A Tesla Power representative told Reuters it has been present in India much before Musk's Tesla and had all government approvals. “We have never claimed to be related to Elon Musk's Tesla,” Tesla Power's Manoj Pahwa said.
Tesla told the judge it discovered the Indian company was using its brand name in 2022 and had unsuccessfully tried to stop it from doing so, forcing it to file the lawsuit. The case comes after Musk cancelled his planned April 21 visit to India to meet Prime Minister Narendra Modi.
Days later, Musk paid a surprise visit to China and made progress towards rolling out Tesla's advanced driver assistance package, a move that many Indian commentators called a snub. The Tesla India trademark case will next be heard on May 22.
A new law to regulate the use of artificial intelligence (AI) has been approved in Bahrain. Under this law, individuals exploiting AI technologies to make decisions requiring human intervention or assessment may face fines of up to BD1,000.
The newly approved legislation, consisting of 38 articles, was unanimously passed by the Shura Council. Proposed by a group of five members, led by Vice-Chairman of the Human Rights Committee, Ali Al Shehabi, the law will now be drafted by the government as formal legislation and referred to Parliament within six months.
The legislative and legal affairs committee of the Shura Council recommended the law's approval after consulting with officials from various ministries and agencies, including Interior, Health, Education, Cabinet Affairs, Information, Transportation and Telecommunications, Industry and Commerce, Parliament and Shura Affairs, as well as Justice, Islamic Affairs and Endowments. Feedback was also sought from entities such as the National Space Science Agency, Bahrain Polytechnic, Information and eGovernment Authority, Telecommunications Regulatory Authority and Tamkeen.
Committee Chairwoman Dallal Al Zayed described the review process as complex and challenging, emphasising that its implementation would mark a pioneering decision in the region.
Ali Al Shehabi emphasised the growing significance of AI across domains and stressed the importance of regulating it to prevent potential misuse and future risks. He highlighted Bahrain's intention to integrate AI-driven services across sectors while also addressing concerns about potential criminal activities, such as tampering with voice features, biometrics, official documents, audio and video.
According to the law, individuals utilising AI technologies to make decisions requiring human intervention or assessment may face fines of up to BD1,000.
Additionally, fines of up to BD2,000 may be imposed on those programming or processing AI systems to infringe upon privacy, personal freedoms, social values, or traditions. Misusing AI for discrimination or purposes other than intended could also lead to fines of up to BD2,000.
Penalties ranging from BD2,000 to BD5,000 are stipulated for the unauthorised use of autobots or robots. Programming, processing, inserting, or developing AI systems without a licence could result in fines ranging from BD1,000 to BD10,000.
Serious offenses, such as tampering with official speeches or using AI for deception, manipulation, or malicious intent, may result in imprisonment for up to three years or fines ranging from BD5,000 to BD20,000, or both. Deliberate use of AI to incite unrest, political disturbances, sabotage, or terrorism-related activities may lead to a minimum of three years' imprisonment.
The law also holds establishments accountable for offenses committed by individuals under their employment, with repeat violations potentially resulting in permanent closure or court-determined penalties.
Regarding minors, Chairwoman Dr Fatima Al Kooheji raised concerns about the clarity of consequences and punishments, suggesting the need for awareness campaigns before enforcing the law.
In conclusion, the law establishes a framework to regulate AI use, outlining penalties for various offenses and establishing a special unit for AI oversight.
WhatsApp has told the Delhi High Court that forcing it to break message encryption would mean the end of the platform in India. The company argues that its end-to-end encryption protects user privacy and cannot be compromised. India is one of the largest markets for the Meta-owned messaging app, which has over 900 million users in the country.
The Delhi High Court is currently hearing a challenge by WhatsApp and Meta (formerly Facebook) against a new Indian law that requires social media platforms to identify the originators of messages upon court order.
WhatsApp argues that complying with this law would undermine their encryption and violate user privacy. “As a platform, we are saying, if we are told to break encryption, then WhatsApp goes,” stated Tejas Karia, lawyer for WhatsApp.
The messaging platform emphasises that user privacy is a core value and that end-to-end encryption is essential for maintaining it. Users trust WhatsApp because their messages remain confidential and unreadable by anyone except the sender and receiver.
The Indian government, however, argues that tracing message originators is crucial for tackling harmful content and maintaining online safety. They believe social media platforms have a responsibility to help identify those who spread misinformation or incite violence.
“The idea behind the guidelines was to trace the originator of the messages,” said Kirtiman Singh, representing the central government. He added that some mechanism for tracing messages is necessary, especially considering the challenges WhatsApp has faced in the US Congress.
The Delhi High Court acknowledged the complexity of the situation, observing that "privacy rights were not absolute" and that "somewhere balance has to be done".
The court has adjourned the case for further hearing in August 2024.
Congress has passed a bill that could lead to the ban or forced sale of TikTok, marking a significant move against the popular video-sharing platform's Chinese ownership over concerns related to national security.
The Senate voted 79 to 18 in favour of the measure, included as part of a larger package offering aid to Israel, Ukraine, and Taiwan. President Biden plans to sign the bill into law on Wednesday.
Once enacted, the provision will give TikTok's parent company, ByteDance, approximately nine months to sell the app or face a national ban, with the possibility of a 90-day extension.
This bipartisan measure represents a substantial threat to TikTok's US operations, which boast over 170 million users and have become a major economic and cultural force.
Lawmakers cite worries that ByteDance's ownership could potentially compromise American data security, a claim that TikTok disputes.
TikTok is expected to challenge the legislation, setting the stage for a significant legal battle asserting free speech rights for its millions of users. Despite TikTok's efforts to sway lawmakers, including urging users to contact representatives and running ads promoting data security, these actions have not deterred Congress.
The legislative push comes after years of scrutiny over TikTok's ties to China, with concerns about user data vulnerability. TikTok had proposed measures to address these concerns, but negotiations stalled, prompting lawmakers to pursue legislation empowering the executive branch to act against the platform.
Efforts to pass this bill gained momentum recently, with key lawmakers and administration officials collaborating for months. House lawmakers strategically paired the TikTok bill with legislation targeting data privacy concerns, allowing for swift advancement through Congress.
Despite bipartisan support, some lawmakers oppose the legislation, fearing government overreach and potential restrictions on online speech. However, the bill's inclusion in a broader foreign aid package facilitated its passage, demonstrating effective legislative maneuvering.
This unexpected turn of events highlights the complex process of policymaking, underscoring the intersection of national security, privacy, and free speech concerns in the digital age.
US District Judge Yvonne Gonzalez Rogers in Oakland, California, has ruled in favour of Meta Platforms CEO Mark Zuckerberg, dismissing some claims in multiple lawsuits alleging that he concealed the harmful effects of Facebook and Instagram on children.
The ruling is part of a broader litigation involving numerous lawsuits filed by children, accusing Meta and other social media companies of fostering addiction to their platforms.
While twenty-five of these cases sought personal liability against Zuckerberg, arguing that his public image and influential role obligated him to fully disclose the risks posed by Meta's platforms to children, Judge Rogers rejected this argument.
She stated that relying on Zuckerberg's unique understanding of Meta's products to establish a personal duty to each plaintiff would set a precedent for a duty to disclose for any public figure, which she deemed untenable.
Meta, though remaining a defendant, refrained from commenting on the ruling, maintaining its denial of any wrongdoing.
The lawsuits, filed on behalf of individual children, assert that social media usage has caused them physical, mental, and emotional distress, including anxiety, depression and in extreme cases, suicide.
The ongoing litigation seeks both damages and an end to the alleged harmful practices of the defendants. Additionally, several states and school districts have also initiated legal action against Meta, with those cases still pending.
(The writer is a legal associate at NYK Law Firm, one of the top legal consultants in Dubai)
A lawyer representing FTX founder Sam Bankman-Fried filed a notice of appeal challenging his federal fraud and conspiracy conviction along with his 25-year prison sentence. Bankman-Fried's appeal comes two weeks after receiving the sentence in US District Court in Manhattan, which also included a forfeiture order of $11 billion for his involvement in a massive fraud scheme at the cryptocurrency exchange FTX and the related hedge fund, Alameda Research.
Prosecutors described this as one of the largest financial frauds in history. The appeal, anticipated by legal experts, will be reviewed by a three-judge panel at the 2nd Circuit US Court of Appeals in Manhattan.
Federal criminal defendants face substantial challenges in overturning convictions, with fewer than 10 per cent of appeals resulting in reversals. Should Bankman-Fried's appeal fail at the 2nd Circuit, his next recourse would be petitioning the US Supreme Court, though success at this stage is typically rare.
Bankman-Fried, aged 32, was convicted after a trial in November on seven counts of fraud and conspiracy related to the misappropriation of approximately $10 billion in customer funds.
According to the Manhattan US Attorney's Office, Bankman-Fried orchestrated a scheme to embezzle customer funds for investments, political donations across party lines, personal expenses and repayment of loans taken out by Alameda Research.
During sentencing, Judge Lewis Kaplan expressed concerns about Bankman-Fried's future conduct, remarking, "There is a risk that this man will be in a position to do something very bad in the future," emphasising the gravity of the situation and the absence of any expression of remorse from the defendant.
Bankman-Fried, who comes from a family of Stanford Law professors, has suggested that FTX's financial troubles stemmed from a "liquidity crisis" or "mismanagement," rather than intentional wrongdoing.
Four other senior executives from FTX and Alameda have previously pleaded guilty. One of them, Ryan Salame, is scheduled for sentencing on May 28 before Judge Kaplan. Sentencing dates have yet to be determined for Caroline Ellison, former CEO of Alameda; FTX technology chief Gary Wang and Nishad Singh, the former engineering head at FTX.
The UK government is reportedly preparing to announce, within a few weeks, plans to prohibit children under the age of 16 from accessing social media platforms. Downing Street is expected to unveil proposals for stricter age limits on apps such as Instagram, Facebook and Snapchat as part of a consultation aimed at enhancing online safety for children, according to The Sunday Times.
The consultation will gather feedback from parents on the appropriate age for children to start using social media, with the suggested range being between 13 and 16 years old. Currently, several platforms allow membership for children as young as 13, including Meta, which recently lowered the minimum age for WhatsApp use in Europe to 13.
The decision was criticised by Smartphone Free Childhood as an instance of “a tech giant prioritising shareholder profits over children’s safety”. A Meta spokesperson stated, “We provide all users with options to control who can add them to groups, and when you receive a message from an unknown number for the first time, we offer the option to block and report the account.”
This development follows a call from Esther Ghey, mother of murdered 16-year-old transgender girl Brianna Ghey, for a social media ban for under-16s. In addition to potential social media restrictions, the government is contemplating banning under-16s from purchasing smartphones. Currently, individuals under 18 need parental consent to obtain phone contracts, but they can buy pay-as-you-go phones independently.
The proposed changes would limit this option for under-16s, although parents would still be able to buy phones for their children. A spokesperson for the Department for Science, Innovation, and Technology stated: “We do not comment on speculation. Our commitment to making the UK the safest place for children online is firm, as demonstrated by our world-leading Online Safety Act.”
iPhone users in 92 countries have received warnings from Apple about potential spyware attacks targeting their devices, according to a report by TechCrunch.
The message informs users that they may be targeted by a mercenary spyware attack attempting to compromise their iPhones remotely. The notification reads, "Apple detected that you are being targeted by a mercenary spyware attack that is trying to remotely compromise the iPhone associated with your Apple ID -xxx-."
Apple's alert provides further details about the incident, stating, "This attack is likely targeting you specifically because of who you are or what you do. Although it's never possible to achieve absolute certainty when detecting such attacks, Apple has high confidence in this warning — please take it seriously."
Apple clarified that it could not disclose specific details that triggered the warning due to concerns that sharing more information could aid attackers in evading detection. The company relies on internal information and investigations to identify such attacks.
Investment firms led by the former CEO of the SPAC that merged with Donald Trump’s media company allege that their files were hacked and stolen by a current member of the media company’s board of directors.
In a federal civil lawsuit filed in South Florida last month, the firms accuse board member Eric Swider of plotting a coup in early 2023 to replace Patrick Orlando as CEO of the special purpose acquisition company, Digital World Acquisition Corp.
As part of that attempted ouster, Swider and others allegedly “stole access” to the firms’ computer systems and then “used the stolen information to attack” Orlando, according to the lawsuit.
It was “an audacious scheme to seize control of and enlarge their holdings,” claims the suit, which was filed by Benessere Investment Group and ARC Global Investments II.
The suit seeks damages and an injunction “prohibiting the use of the stolen information and to stop the defendants hacking” the firms’ files.
Orlando was fired from Digital World in March 2023 and replaced by Swider. That blank check company last month completed a merger to take Trump Media & Technology Group Corp. public, allowing it to trade on the Nasdaq Stock Market. The company, which owns the Trump-centric social media app Truth Social and trades under the ticker DJT, soared in its stock market debut, but those gains have since been erased.
The Florida lawsuit is just one in a series of messy and dramatic legal disputes that have come to define Trump Media’s rocky road to an IPO, and its equally turbulent first weeks as a public company.
DWAC in July settled fraud charges with the Securities and Exchange Commission, though the agency found the SPAC had submitted “materially false and misleading” filings.
Trump Media in late March sued its co-founders over alleged mismanagement of the merger, and is seeking to bar them from owning the company’s stock.
Those co-founders have sued Trump Media in Delaware Chancery Court over their stake in the company.
Critics, meanwhile, have labelled the company a meme stock and a “scam.” They point to the company’s reported net loss of $58.2 million on revenue of just $4.1 million in 2023.
In an interview with Wired earlier on Wednesday, Swider denied all of the allegations against him. “I just think he’s never let go [of] the fact that I replaced him,” Swider told the outlet. “I don’t know why it offends him so bad.”
The Lawsuit
The Florida lawsuit, which was filed shortly before the late March merger, presents Orlando as successful in his efforts to bring DWAC into a merger agreement with Trump Media.
It alleges that Swider misled DWAC’s directors and business partners by publishing “false and misleading representations of what was occurring” at the company. He also allegedly “offered outsized compensation to the other directors he enlisted to collude with him in exchange for supporting his coup d’état.”
Swider stood to massively increase his compensation through his accession to CEO of DWAC — but he also wanted to take control of ARC II, which owned about 19 per cent of DWAC prior to the merger, according to the lawsuit.
Trump Media in an April 1 regulatory filing reported that ARC II owns 6.9 per cent, or about 9.5 million shares, of the post-merger company. Information about ARC II was held in an account on an electronic file storage website owned by Benessere, the suit says.
To access the account, which “stores the lifeblood” of both investment firms, Swider allegedly enlisted Cano, Orlando’s former assistant. The firms accuse Swider of promising to make Cano the president of DWAC in exchange for access to the account.
Cano agreed, and Swider “made good on his promise,” while also providing Cano with a convertible note worth 165,000 shares of DWAC’s stock — an award valued at more than $6 million at the time, the suit alleges.
Swider said in the interview with Wired that Orlando voted for Cano’s award, adding that he never hired Cano as his assistant, as the suit alleges. The lawsuit says that Cano since February 2023 repeatedly accessed the storage account and “immediately” provided the information within it to Swider.
Swider then used it to email “false and defamatory claims” about Orlando to ARC II’s members, according to the suit. In a March 5 email — included in the lawsuit as “Exhibit A” -- Swider accused Orlando of “failure to maintain a fiduciary responsibility” to ARC II, among a litany of other claims.
“Patrick has threatened me with pending litigation for speaking out to fellow membership holders so I want to be clear about this. I am not disparaging Patrick,” Swider wrote in the email.
“I am sure he is an amazing Human being, Honest. Hardworking. Looking out for your best interest. He is good looking. He is cool. I like him. Nothing in this email is meant to be defamatory. He has been great as a leader. Patrick- you are Awesome!!”
Orlando later discovered the email because Swider “failed to remove Orlando’s wife from the mailing list,” according to the lawsuit.
Meta Platforms, the parent company of Facebook, Instagram and Threads, announced plans to introduce labels for artificial intelligence-generated audio, image, and video content starting next month. The labelling initiative aims to address concerns about misleading content on its platforms.
The company clarified that it will specifically label content generated using AI technology and will refrain from removing it unless it violates platform policies or presents significant risks.
Meta acknowledged that its current policy, established in 2020, is too narrow as it only addresses videos altered or created through AI. Monika Bickert, Meta's vice-president of content policy, highlighted the rapid evolution of AI technology, noting the emergence of realistic AI-generated audio and photos over recent years.
In response to feedback from its oversight board, which engaged with over 120 stakeholders across 34 countries, Meta conducted a public opinion poll involving more than 23,000 respondents from 13 countries. The poll revealed strong support (82 per cent of respondents) for adding warning labels to AI-generated content.
The global AI industry is projected to attract investments of up to $200 billion by 2025, potentially significantly impacting GDP, according to a report by Goldman Sachs Economic Research in August.
Despite the industry's growth, regulatory bodies are struggling to keep pace with technological advancements. In December, the EU introduced the landmark Artificial Intelligence Act, imposing fines exceeding €35 million ($38.4 million) for non-compliance.
Meta emphasised a commitment to freedom of expression and revealed that its oversight board recommended a "less restrictive" approach to addressing manipulated media through contextual labelling.
Meta will employ its own detection methods to identify AI-generated content and will label media based on user disclosures of AI use during uploads.
In cases where digitally-created or altered content poses a significant risk of public deception, Meta may apply more prominent labels to provide additional context.
Meta clarified that content removal, whether AI-generated or human-created, will be reserved for select cases violating platform rules, such as those pertaining to voter interference, bullying, violence, or incitement as outlined in its community standards.
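The labelling and removal rules described above can be sketched as a simple decision flow. This is an illustrative model only, not Meta's actual implementation; the field names and the ordering of checks are assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass
class MediaItem:
    user_disclosed_ai: bool      # uploader declared AI use at upload time
    detector_flagged_ai: bool    # platform's own detection fired
    high_deception_risk: bool    # significant risk of public deception
    violates_rules: bool         # voter interference, bullying, incitement...

def moderation_action(item: MediaItem) -> str:
    """Hypothetical sketch of the policy described above.

    Removal is reserved for rule violations, whether the content is
    AI-generated or human-created; everything else is labelled.
    """
    if item.violates_rules:
        return "remove"
    if item.high_deception_risk:
        return "prominent_label"   # more prominent label with added context
    if item.user_disclosed_ai or item.detector_flagged_ai:
        return "ai_label"          # standard AI-generated label
    return "no_action"

print(moderation_action(MediaItem(True, False, False, False)))  # ai_label
```

The key design point mirrored here is that labelling, not removal, is the default response to AI-generated media.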
Additionally, Meta employs nearly 100 independent fact-checkers who can demote false or altered content in users' feeds and attach overlay labels to provide further context.
Google has agreed to dispose of billions of data records to resolve a lawsuit alleging that it clandestinely tracked the Internet activities of users who believed they were browsing in private.
The terms of the settlement were submitted in the federal court in Oakland, California, pending approval by US District Judge Yvonne Gonzalez Rogers.
Estimated by plaintiffs' attorneys at over $5 billion and potentially as high as $7.8 billion, the settlement does not entail any damages paid by Google. However, individual users retain the right to sue the company for damages.
Initiated in 2020, the class action encompasses millions of Google users who employed private browsing since June 1, 2016.
Users contended that Google's analytics, cookies and apps enabled the Alphabet unit to improperly monitor individuals who set Google's Chrome browser to "Incognito" mode and other browsers to "private" browsing mode.
This allegedly transformed Google into an "unaccountable repository of information," granting access to details ranging from users' social circles, culinary preferences, leisure pursuits, shopping tendencies, to the most intimate and potentially sensitive online searches.
According to the settlement terms, Google will enhance disclosures regarding its data collection practices in "private" browsing, a process already underway. Additionally, it will allow Incognito users to block third-party cookies for a period of five years.
Plaintiffs' lawyers highlighted that this would result in Google gathering less data from users' private browsing sessions, consequently reducing its revenue from data monetisation.
Jose Castaneda, a spokesman for Google, expressed the company's satisfaction with the settlement, branding the lawsuit as meritless and emphasising that Google never associates data with individual users in Incognito mode.
Castaneda reiterated Google's commitment to deleting obsolete technical data that was never linked to an individual or utilised for personalisation.
David Boies, representing the plaintiffs, hailed the settlement as a pivotal move towards demanding transparency and accountability from dominant technology entities.
A preliminary settlement was reached in December, forestalling a scheduled trial on February 5, 2024, with terms undisclosed at the time. Plaintiffs' attorneys intend to subsequently pursue unspecified legal fees payable by Google.
Alphabet, Google's parent company, is headquartered in Mountain View, California.
Dubai is gearing up to host a groundbreaking event this month with the inaugural Photonics Middle East conference set to take place from April 19 to 22, 2024.
Expected to draw in 400 scientists and researchers from around the world, the event will be held at the prestigious Mohammed Bin Rashid University of Medicine and Health Sciences in Healthcare City.
Dr PT Ajith Kumar, the Technology Chair and Convener of the event, emphasised the significance of this gathering amidst a pivotal moment in science and technology.
"The event arrives at a crucial juncture as we witness a transformative shift from electronics to photonics," remarked Dr Kumar. Photonics, encompassing the science and technology of light and light-based devices, now permeates every facet of human existence, from information communication technology and artificial intelligence to defence and aerospace, education and healthcare and green energy production and manufacturing.
Photonics Middle East uniquely balances the interests of research and development, industry and academia, according to a press note issued by the organisers.
The conference's focal points span an array of critical areas including photonics in medicine and medical diagnostics, artificial intelligence, robotics and communication, green energy, nano-photonics, photonic chips and integrated optics, aerospace, marine and offshore, manufacturing and fabrication, bio-photonics, photonic structures and materials, laser holography and diffractive optics, lasers for healthcare, immersive learning and Metaverse, document security and identification, photonic crystals and materials, precision non-destructive testing and ultra-high density information storage and archiving.
In addition to insightful research presentations by leading global experts, the conference will feature four enriching workshops tailored for students and participants, along with an exhibition titled Photonics Innovation and Solutions.
A key feature of the event is the Photonics Business Conclave, anticipated to draw policy makers, industry leaders, R&D professionals, healthcare experts and institutions, academic institutions and start-ups.
Photonics Middle East is co-organised by Photonics Innovations, Dubai, and Photonyx Global, USA, with support from various departments and institutions.
The gathering promises to be a defining moment in the advancement of photonics, offering a platform for collaboration, innovation and transformative progress on a global scale.
In today's digital age, data protection has become a critical concern for organisations across all sectors, especially those operating in the United Arab Emirates (UAE).
With the UAE's stringent data privacy regulations impacting businesses collecting or processing personal data within its jurisdiction, it's crucial for organisations to take proactive measures to establish or enhance their data protection programs.
Below are key steps that organisations should take to ensure compliance and effectively navigate the landscape of data protection in the UAE.
1 Appointing a Data Protection Officer (DPO)
One of the initial and crucial steps is the appointment of a Data Protection Officer (DPO). This individual plays a pivotal role in overseeing data privacy compliance within the organisation.
Whether the DPO is an internal employee or outsourced to a third party with expertise in data privacy, having a designated DPO showcases the organisation's commitment to upholding privacy standards and cooperating with regulatory requirements.
2 Establishing Comprehensive Consent Mechanisms
Organisations must develop robust consent forms and disclosures for processing personal data. While obtaining explicit consent is fundamental, the law also specifies instances where data processing can occur without consent, such as for public interest, legal proceedings, public health protection, compliance with other laws, and specific limited purposes. Ensuring clear and comprehensive consent mechanisms is essential for compliance.
3 Reviewing Vendor and Supplier Contracts
Conducting a thorough review of contracts with vendors and suppliers is imperative. Organisations need to identify agreements involving data sharing and ensure that these parties adhere to UAE data protection laws. Contractual revisions should reflect data privacy compliance requirements and delineate liabilities effectively, thereby mitigating risks associated with data processing by third parties.
4 Creating Data Mapping and Processing Records
Maintaining a transparent data map and a Record of Processing Activity is crucial for compliance documentation. These records outline the specific processes and systems that utilise personal data, aiding in accountability and demonstrating adherence to regulatory standards.
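A Record of Processing Activity is, in practice, a structured inventory. The sketch below shows one hypothetical schema for such a record; the field names are assumptions for illustration, not a format mandated by UAE law.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ProcessingRecord:
    """One entry in a Record of Processing Activity (illustrative schema)."""
    activity: str                 # e.g. "payroll processing"
    purpose: str                  # why the personal data is processed
    data_categories: List[str]    # e.g. ["name", "salary", "IBAN"]
    data_subjects: List[str]      # e.g. ["employees"]
    recipients: List[str]         # third parties the data is shared with
    cross_border: bool            # transferred outside the UAE?
    retention: str                # retention period or criterion

ropa: List[ProcessingRecord] = [
    ProcessingRecord(
        activity="payroll processing",
        purpose="salary payment and tax reporting",
        data_categories=["name", "salary", "IBAN"],
        data_subjects=["employees"],
        recipients=["payroll provider"],
        cross_border=False,
        retention="7 years after employment ends",
    )
]

# A structured record also supports later compliance steps, e.g. flagging
# which activities will need cross-border transfer safeguards.
needs_safeguards = [r.activity for r in ropa if r.cross_border]
```

Keeping the record machine-readable like this makes it easier to demonstrate accountability to a regulator on request.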
5 Developing a Comprehensive Breach Response Plan
Organisations should establish a robust breach response plan along with notification procedures. Being able to promptly detect data breaches, initiate response protocols, notify regulators and affected individuals and conduct thorough data analysis are critical components of compliance readiness.
6 Implementing Privacy Impact Assessments
Conducting Data Protection Impact Assessments (DPIAs), Vendor Assessment Questionnaires, and Privacy Impact Assessments (PIAs) are essential for evaluating privacy risks and obligations. These assessments inform policy development, technology assessments, and decision-making regarding partnerships and data processing activities.
7 Strengthening Information Security Measures
Collaboration with IT teams is essential in implementing robust information security and access control mechanisms. These measures are instrumental in preventing unauthorized access, safeguarding data integrity, and ensuring compliance with data protection regulations.
8 Streamlining DSAR Processes
Efficient Data Subject Access Request (DSAR) processes are vital for addressing data subject inquiries promptly and effectively. Leveraging technology workflows, audit trails, and standardised procedures enhances the efficiency and transparency of DSAR handling.
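One concrete piece of the workflow tooling mentioned above is deadline tracking for open requests. The sketch below is a minimal illustration; the 30-day window is an assumption for this example, and the actual statutory deadline should be taken from the applicable UAE regulations.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List

@dataclass
class DSAR:
    """A Data Subject Access Request tracked in an internal register."""
    request_id: str
    received: date
    fulfilled: bool = False

# Assumed response window for this sketch -- verify against the law.
RESPONSE_WINDOW = timedelta(days=30)

def overdue(requests: List[DSAR], today: date) -> List[str]:
    """Return IDs of unfulfilled requests past the response window."""
    return [r.request_id for r in requests
            if not r.fulfilled and today - r.received > RESPONSE_WINDOW]

reqs = [DSAR("R1", date(2024, 1, 1)),
        DSAR("R2", date(2024, 3, 1), fulfilled=True)]
print(overdue(reqs, date(2024, 3, 15)))  # ['R1']
```

An audit trail would additionally log who handled each request and when, which supports the transparency goal described above.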
9 Addressing Cross-Border Data Transfers
Organisations engaged in cross-border data transfers must assess the adequacy of data protection in recipient jurisdictions. Special controls, safeguards and documentation may be required to facilitate compliant data transfers while ensuring the protection of personal data.
10 Conducting Ongoing Staff Training
Continuous staff training is crucial for cultivating a culture of data privacy compliance within the organisation. Regular training sessions enable employees to stay updated on evolving regulations, best practices and organisational policies related to data protection.
In conclusion, navigating data protection in the UAE requires a comprehensive and proactive approach from organisations. By implementing these essential steps, organisations can enhance their data protection posture, build stakeholder trust and effectively navigate the complex regulatory landscape in the UAE.
(The writer is a legal associate at Dubai-based NYK Law Firm)
Microsoft has announced plans to globally separate its chat and video app, Teams, from its Office product, following antitrust scrutiny.
The decision comes six months after the company unbundled the two products in Europe to avoid potential fines from the European Commission, which has been investigating Microsoft's tying of Office and Teams since a complaint filed in 2020 by Slack, a competing workspace messaging app owned by Salesforce.
Teams, originally added to Office 365 for free in 2017, replaced Skype for Business and saw increased popularity during the pandemic, particularly for its video conferencing capabilities. However, rivals argued that bundling the products gave Microsoft an unfair advantage.
To address concerns and provide clarity to customers, Microsoft has decided to extend the separation of Teams from Office globally, a move initially implemented in the European Economic Area and Switzerland on October 1 last year. The decision aims to offer multinational companies more flexibility in their purchasing decisions across different regions.
Analysts suggest that while Microsoft's previous concessions in response to antitrust scrutiny, notably regarding Internet browsers in 1998, led to significant changes in the market, the impact of separating Teams from Office might not be as dramatic given the entrenched nature of enterprise products like Teams.
Despite the separation, Microsoft's user base for Teams has remained relatively stable, according to data from Sensor Tower. The company has also introduced new commercial Microsoft 365 and Office 365 suites without Teams for regions outside the European Economic Area and Switzerland, along with standalone Teams offerings for enterprise customers in those regions.
Customers have the option to continue with their current licensing agreements or switch to the new offerings, with prices for Office without Teams ranging from $7.75 to $54.75 for existing customers and $5.25 for standalone Teams. However, exact pricing may vary by country and currency.
While Microsoft's efforts to unbundle Teams from Office may not fully alleviate antitrust concerns, proactive measures could potentially influence regulators' stance. The company faces the risk of significant fines, up to 10 per cent of its global annual turnover, if found guilty of antitrust breaches, having accumulated 2.2 billion euros ($2.4 billion) in EU antitrust fines over the past decade.
A group of writers suing OpenAI for copyright infringement in California failed to convince a New York federal court to halt related cases brought in Manhattan by the New York Times, the Authors Guild and others.
US District Judge Sidney Stein stated that the writers, including Michael Chabon, Ta-Nehisi Coates and comedian Sarah Silverman, did not have a strong enough interest in the New York cases to justify letting them intervene.
The writers had sought to convince the New York court to dismiss the cases against OpenAI and Microsoft, OpenAI's largest financial backer, or move them to California. The California court rejected a related request last month.
"It's unconventional to proceed with the same claims in different places but certainly something we are equipped to handle," the writers' attorney Joseph Saveri said in a statement on Monday.
Representatives for OpenAI did not immediately respond to a request for comment. Spokespeople for Microsoft, the New York Times and the Authors Guild declined to comment.
Several groups of copyright owners have sued major tech companies over the alleged misuse of their work to train generative artificial-intelligence systems. The authors in the California case sued OpenAI last summer, accusing it of using their books without permission to train the AI model underlying its popular chatbot ChatGPT.
The Authors Guild filed a similar lawsuit in New York in September on behalf of other writers including John Grisham and George RR Martin. That lawsuit was followed by additional complaints from nonfiction authors and the Times.
The California authors told Stein that allowing the "copycat" cases to continue would lead to inconsistent rulings and waste resources. But Stein on Monday said that the California and New York cases had "substantial differences."
"More importantly, for the claims that do overlap, the California Plaintiffs have no legally cognizable interest in avoiding rulings that apply to entirely different plaintiffs in a different district," Stein said.
In today's dynamic media landscape, mergers and acquisitions (M&A) are common strategies employed by companies to expand market presence, acquire new technologies and capitalise on emerging opportunities.
However, given the critical role of intellectual property (IP) in the media industry, conducting thorough due diligence is essential to mitigate risks and ensure the success of M&A transactions.
This study provides a detailed analysis of the due diligence process concerning intellectual property rights (IPR) in media mergers and acquisitions.
Intellectual property assets, including copyrights, trademarks, patents, trade secrets and proprietary technologies, are invaluable assets in the media sector.
They underpin content creation, distribution, licensing, and revenue generation. Therefore, understanding and safeguarding these assets are paramount in M&A transactions to preserve value and mitigate legal and financial risks.
What are the Objectives of IP Due Diligence?
What are the Key Components of IP Due Diligence?
1. Identification of Intellectual Property Assets: Conduct a comprehensive inventory of all IP assets, including content, brands, technologies, and patents.
2. Ownership and Title Verification: Verify ownership rights, chain of title, and validity of registrations for each IP asset.
3. Assessment of Rights and Licenses: Review agreements, licenses, and contracts to ascertain the scope of rights granted and any restrictions or obligations.
4. Evaluation of IP Portfolio: Assess the strength, uniqueness, and marketability of each IP asset in relation to the target company's business objectives.
5. Risk Analysis and Compliance: Identify legal, regulatory, and infringement risks associated with IP assets and assess compliance with applicable laws and standards.
6. Litigation and Enforcement History: Review past and pending litigation, disputes, or enforcement actions related to IP rights and evaluate potential liabilities.
7. Technology and Innovation: Evaluate the target company's R&D activities, innovation pipeline, and proprietary technologies to assess the value of IP assets.
8. Complexities of Digital Rights Management: With the proliferation of digital content, managing rights and licensing agreements becomes increasingly complex.
9. Globalisation and Cross-Border Issues: M&A transactions involving media companies often involve international IP rights, requiring careful consideration of cross-border regulations and jurisdictional issues.
10. Rapid Technological Advancements: Emerging technologies such as artificial intelligence, virtual reality, and blockchain pose new challenges and opportunities in IP due diligence.
11. Cultural and Creative Considerations: Media content often involves cultural sensitivities and creative nuances that must be addressed in IP due diligence.
Intellectual property due diligence is a critical aspect of M&A transactions in the media industry, ensuring that buyers understand the value, risks, and opportunities associated with IP assets.
By conducting thorough due diligence, companies can mitigate risks, protect their investments, and position themselves for long-term success in the competitive media landscape.
Apple, Google and Meta Platforms are under scrutiny for potential violations of the EU's new Digital Markets Act (DMA), European antitrust regulators announced on Monday.
This could lead to substantial fines for these tech giants. The law, in effect since March 7, seeks to challenge the dominance of these companies by facilitating easier transitions between competing online services, such as social media platforms, internet browsers and app stores, ultimately fostering an environment for smaller companies to compete.
Breaches could result in fines of up to 10 per cent of the companies' global annual turnover. Concurrently, US antitrust regulators are also investigating Big Tech for alleged anti-competitive practices, potentially leading to divestitures.
Tech companies claim to have allocated significant resources to meet the Digital Markets Act's requirements, particularly concerning the designation of six "gatekeepers." However, the European Commission expressed doubts about the adequacy of their efforts, as reported by Reuters.
In response to queries about the rapidity of the investigations post the act's implementation, EU industry chief Thierry Breton emphasised the importance of upholding the law promptly, stating, "The law is the law. We can't just sit around and wait."
The investigation centres on whether Apple complies with obligations regarding the uninstallation of software applications, changing default settings and providing choice screens for rival services on its iOS operating system.
Additionally, regulators are concerned about "steering," assessing whether Apple limits app developers from informing users about offers outside its App Store.
Apple expressed confidence in its compliance with the DMA, highlighting its responsiveness to the Commission and developers' feedback.
The Commission highlighted Apple and Alphabet's fee structures, stating they contradict the DMA's "free of charge" requirement, particularly as both companies recently introduced new fees for some services.
Breton urged Meta to offer free alternative options, following criticism of its no-ads subscription service introduced in Europe.
Google and Meta stated their commitment to comply with the act's guidance, with Google asserting significant changes to its services and readiness to defend its approach.
The Commission is also investigating Apple's new fee structure for alternative app stores and Amazon's ranking practices on its marketplace.
Amazon, designated as a DMA "gatekeeper," affirmed its compliance with the act and ongoing collaboration with the European Commission.
The EU executive aims to conclude investigations within a year, as outlined under the DMA, directing companies to retain relevant documents for current and future probes.
These investigations follow mounting criticism from app developers and business users regarding perceived shortcomings in the companies' compliance efforts.
The Abu Dhabi Judicial Department (ADJD) has launched the latest version of its application, aiming to provide customers with an integrated and advanced platform for easy access to their judicial files and stay updated on developments in all courts and prosecution units in the Emirate of Abu Dhabi.
The initiative leverages the latest technologies and technological means supported by business intelligence (BI) processes.
His Excellency Counselor Yousef Saeed Alabri, Undersecretary of the Abu Dhabi Judicial Department, highlighted that the release of the new version of the ADJD app aligns with ongoing efforts to further develop the judicial system in line with the vision of His Highness Sheikh Mohammed bin Zayed Al Nahyan, President of the UAE, and the directives of His Highness Sheikh Mansour bin Zayed Al Nahyan, Vice President of the UAE, Deputy Prime Minister, Chairman of the Presidential Court and Chairman of the Abu Dhabi Judicial Department.
These efforts aim to continuously update and improve services to provide smart and innovative solutions that reinforce the competitive position of the Emirate of Abu Dhabi globally.
The Judicial Department has made significant progress in implementing digital transformation requirements, in accordance with its Strategic Plan objectives and priorities, as well as its programs and projects focusing on technical development and smart services.
H.E. Yousef Alabri noted that these initiatives are enhanced by smart solutions and proactive procedures, offering multiple options through service centers and transactions via various smart devices.
The latest version of the Judicial Department's application, linked to the UAE Pass (digital ID), enables users to track case files and their status in courts and public prosecution units. Users can review case details, upload documents, file applications and pay fines and amounts due in judicial cases using multiple digital payment solutions such as Apple Pay and Google Pay.
The update also allows users to update their International Bank Account Number (IBAN) for court cases, track the hearing schedule, attend hearings remotely and access inquiry services on cases and criminal file status.
Additionally, users can access notary public and authentication transactions and digital marriage contracts.
A new notification feature has been introduced to keep litigants informed of judgments, necessary procedures and alerts regarding developments in their judicial files, court hearings, and submitted applications, guiding customers on subsequent actions required.
It's important to note that the Abu Dhabi Judicial Department continuously updates and develops this app to incorporate more judicial and legal services into a single integrated platform.
For any enquiries or information, contact ask@tlr.ae or call us on +971 52 644 3004. Follow The Law Reporters on WhatsApp Channels.
Amid escalating cyber threats, UAE residents are urged to remain vigilant against malware and other security vulnerabilities, as the proliferation of scams in today's digital landscape demands constant alertness.
Instances of fraudulent activity, including impersonations of reputable entities such as Dubai Police, local banks and government bodies, are surging. It is therefore crucial to conduct routine security assessments to identify and address any weaknesses in your systems.
Recently, the UAE Cyber Security Council issued a warning on social media, highlighting the prevalence of deceptive phishing emails aimed at compromising online security.
Staying abreast of evolving scam tactics is vital, with experts noting a staggering 3.4 billion spam emails sent daily, many of which are phishing attempts disguised as legitimate correspondence.
These deceptive emails often masquerade as communications from courier services regarding package deliveries, or as urgent requests from trusted banks for account verification. Residents are urged to exercise caution before clicking on links or divulging personal information, as doing so could result in financial loss. Understanding how these attacks operate is the first step in identifying them.
Phishing attacks can manifest in various forms, including emails, text messages, phone calls, or social media posts. Regardless of the delivery method, they all aim to trick recipients into downloading infected attachments or visiting counterfeit websites.
Despite the UAE's robust efforts to combat cyber threats, public sector entities continue to face an average of 50,000 daily cybersecurity attacks, a figure exacerbated by global geopolitical tensions.
While legislative measures have been implemented to address cybercrime, including Federal Decree-Law No. 34 of 2021 on Combatting Rumours and Cybercrimes, public awareness and vigilance remain paramount in thwarting fraudulent activities and safeguarding against potential scams.
Researchers in Denmark are harnessing artificial intelligence and data from millions of people to help anticipate the stages of an individual's life all the way to the end, hoping to raise awareness of the technology's power and its perils.
Far from any morbid fascination, the creators of life2vec want to explore patterns and relationships that so-called deep-learning programs can uncover to predict a wide range of health or social "life-events".
"It's a very general framework for making predictions about human lives. It can predict anything where you have training data," Sune Lehmann, a professor at the Technical University of Denmark (DTU) and one of the authors of a study recently published in the journal Nature Computational Science, told AFP. For Lehmann, the possibilities are endless.
"It could predict health outcomes. So it could predict fertility or obesity, or you could maybe predict who will get cancer or who doesn't get cancer. But it could also predict if you're going to make a lot of money," he said.
The algorithm uses a process similar to that of ChatGPT, but instead of words it analyses variables that shape a life, such as birth, education, social benefits and even work schedules.
The team is trying to adapt the innovations that enabled language-processing algorithms to "examine the evolution and predictability of human lives based on detailed event sequences".
"From one perspective, lives are simply sequences of events: People are born, visit the pediatrician, start school, move to a new location, get married and so on," Lehmann said.
Yet the disclosure of the program quickly spawned claims of a new "death calculator", with some fraudulent sites duping people with offers to use the AI program for a life expectancy prediction -- often in exchange for submitting personal data.
The researchers insist the software is private and unavailable on the internet or to the wider research community for now.
Data from Six Million
The basis for the life2vec model is the anonymised data of around six million Danes, collected by the official Statistics Denmark agency.
By analysing sequences of events it is possible to predict life outcomes right up until the last breath. When it comes to predicting death, the algorithm is right in 78 per cent of cases; when it comes to predicting if a person will move to another city or country, it is correct in 73 per cent of cases.
"We look at early mortality. So we take a very young cohort between 35 and 65. Then we try to predict, based on an eight-year period from 2008 to 2016, if a person dies in the subsequent four years," Lehmann said.
"The model can do that really well, better than any other algorithm that we could find," he said.
According to the researchers, focusing on this age bracket -- where deaths are usually few and far between -- allows them to verify the algorithm's reliability.
However, the tool is not yet ready for use outside a research setting.
"For now, it's a research project where we're exploring what's possible and what's not possible," Lehmann said. He and his colleagues also want to explore long-term outcomes, as well as the impact social connections have on life and health.
A Scientific Counterweight
For the researchers, the project presents a scientific counterweight to the heavy investments into AI algorithms by large technology companies.
"They can also build models like this, but they're not making them public. They're not talking about them," Lehmann said.
"They're just building them to, hopefully for now, sell you more advertisements, or sell more advertisements and sell you more products."
He said it was "important to have an open and public counterpoint to begin to understand what can even happen with data like this".
Pernille Tranberg, a Danish data ethics expert, told AFP that this was especially true because similar algorithms were already being used by businesses such as insurance companies.
"They probably put you into groups and say: 'Okay, you have a chronic disease, the risk is this and this'," Tranberg said. "It can be used against us to discriminate us so that you will have to pay a higher insurance premium, or you can't get a loan from the bank, or you can't get public health care because you're going to die anyway," she said.
When it comes to predicting our own demise, some developers have already tried to make such algorithms commercial.
"On the web, we're already seeing prediction clocks, which show how old we're going to get," Tranberg said. "Some of them aren't at all reliable."
Rupert Murdoch’s British tabloid papers allegedly intercepted Prince Harry’s landline phones and accessed the messages on the pager of his late mother Princess Diana, as disclosed by the British royal’s legal team to the London High Court.
Harry, the younger son of King Charles and the late Princess Diana, along with more than 40 others, are suing News Group Newspapers (NGN) over allegations of unlawful activities by journalists and private investigators associated with its tabloids, the Sun and the now-defunct News of the World, spanning from the mid-1990s until 2016.
In a ruling last July, Judge Timothy Fancourt allowed Harry to proceed to trial with claims of unlawful information gathering, while dismissing allegations of mobile phone hacking because they were filed too late.
During a hearing at the High Court on Thursday, Harry’s legal team sought to amend his lawsuit in response to the ruling, and to introduce additional allegations.
These new claims include assertions that the Sun commissioned private investigators to target his then-girlfriend and now-wife Meghan in 2016, as well as accusations of widespread phone bugging.
According to court documents, Harry's lawyers stated: “The claimant also brings a claim and seeks relief in relation to the interception of landline calls, the interception of calls from cordless phones and analogue mobile calls and the interception of landline voicemails, as distinct from phone hacking.”
The claim also involves allegations regarding Diana, who "was under close surveillance and her calls were being unlawfully intercepted by NGN, which was known about by its editors and senior executives."
NGN is contesting the addition of what they referred to as a “significant number of new allegations” for various reasons, including their late submission, lack of evidence, and their overlap with previously dismissed phone-hacking claims.
NGN’s lawyers argued in court filings: “They cover time periods falling outside the scope of the current pleading and the generic statements of case, and in many cases relate to allegations which have been well-publicised for as long as 30 years.”
NGN’s lawyers also expressed doubts about the feasibility of Harry's case being heard at a trial expected to commence in January next year if his new allegations were to be included.
In 2011, NGN issued an apology for widespread phone hacking by journalists at the News of the World, a publication that Murdoch subsequently shut down amid public outcry. Despite settling over 1,300 claims since then, NGN has consistently denied any wrongdoing by Sun staff.
During proceedings on Wednesday, lawyers representing Harry and other claimants asserted that Murdoch and other senior executives were complicit in covering up widespread misconduct, providing false evidence to courts, parliament and a public inquiry.
NGN contends that some claimants are utilising these lawsuits as a means to attack the tabloid press and dismisses allegations against its current and former staff as “a baseless and cynical assault on their integrity.”
Since stepping back from royal duties in 2020 to relocate to California, Harry has focused on confronting the British press, alleging intrusion into his private life since childhood and dissemination of false information about him and his loved ones.
In December, Harry won a lawsuit against Mirror Group Newspapers over allegations of phone hacking and unlawful activities, with the judge acknowledging that senior figures were aware of the wrongdoing.
Italy's Prime Minister Giorgia Meloni is taking legal action and seeking €100,000 ($109,345) in damages after explicit deepfake videos depicting her were created and circulated online without her consent.
Deepfake technology involves digitally superimposing one person's face onto another's body. The videos in question emerged in 2022, predating Meloni's appointment as Italy's Prime Minister.
Authorities have identified and charged a 40-year-old man and his 73-year-old father with defamation for allegedly creating and uploading the manipulated videos, which superimposed Meloni's face onto pornographic material.
According to a report by the BBC, the police were able to locate the accused individuals by tracking the smartphone used to upload the videos. Under Italian law, certain forms of defamation can constitute criminal offences, potentially resulting in imprisonment. Meloni is slated to testify before a court on July 2.
The indictment asserts that the altered videos were uploaded to a pornographic website based in the United States, amassing "millions of views" over several months.
Meloni's legal team has characterised the €100,000 damages claim as "symbolic," affirming that the Prime Minister plans to donate the entire sum to organisations aiding women who have suffered gender-based violence.
Maria Giulia Marongiu, Meloni's attorney, said: "The demand for compensation will send a message to women who are victims of this kind of abuse of power not to be afraid to press charges."
Deepfakes represent a type of synthetic media generated using artificial intelligence (AI) to manipulate visual and audio content, often with malicious intent, to appear genuine.
The term "deepfake" originated in late 2017 on Reddit when a user by the same name established a platform for sharing pornographic videos created with open-source face-swapping technology.
As AI capabilities advance, deepfakes have become increasingly realistic and widespread, posing a significant threat to public trust and information integrity.
These highly convincing fake audio and video recordings can be exploited to spread misinformation, sway public opinion, and damage reputations by depicting individuals engaging in actions or making statements they never actually did.
The proliferation of deepfakes has prompted global leaders to express concerns about their potential for misuse and the propagation of disinformation.
The UAE Public Prosecution has successfully concluded its ambitious project, "Classification of Crimes and Digitisation of Criminal Legislation within the Criminal Case Management System."
This pioneering initiative involved the conversion of legal texts into a digital format compatible with information systems, leveraging advanced artificial intelligence (AI) techniques for comprehension and execution.
Aligned with the visionary leadership's directives, the project aimed to harness human and institutional capabilities for attaining a prominent position in digital transformation.
A specialised workforce comprising 30 prosecutors and seven technicians from the Information Technology Department played a pivotal role in this endeavour. Collectively, they devoted an impressive total of 3,821 working hours to meticulously scrutinise, individualise and encode laws into the newly developed system.
This rigorous process resulted in the digitisation of over 17 federal laws and the detailed classification of 32,000 criminal charges, encompassing a wide range of acts, penalties and legal circumstances.
Furthermore, the project is poised to enhance the speed, efficiency and transparency of the penal system through the integration of modern technologies. It will propel the ongoing evolution of digital systems and judicial processes, reinforcing the UAE's status as a global hub and a leading digital governance model.
The Public Prosecution emphasised that the project will streamline tasks and procedures, reducing bureaucratic obstacles by enabling electronic systems to function autonomously. It also aims to automate electronic communication with strategic partners and simplify searches within legal frameworks.
The initiative is expected to establish a benchmark for future legislation, aligning with the evolving landscape of artificial intelligence and emerging technologies.