AI Regulations: Understanding Their Impact on Our Future
Hello!👋
How’s your Thanksgiving Day celebration going so far?🌽
This week, we’re going to explore AI regulations and what they mean for us.
So, let’s start with a recent news story.
AI REGULATION: KNOW WHAT'S NEW
Germany, France, and Italy reach agreement on future AI regulation
Key details:
Germany, France, and Italy propose mandatory self-regulation for AI companies, regardless of size.
Meta has decided to focus on generative AI while also supporting initiatives related to responsible AI use.
The Open Empathetic Project aims to capture emotional expression and tone shifts, not just words, to enable more empathetic AI-human interactions.
Now, let’s discuss 2 things: the risks posed by AI (including threat levels!), and the challenges and solutions surrounding AI regulation.
RISKS POSED BY AI
Did you know that 55% of working professionals believe that generative AI poses moderate brand safety and misinformation risks? Here are some negative consequences of AI worth noting:
Existential Risk: AI's rapid advancement poses existential risks, where its capabilities could surpass human control, leading to unforeseen and potentially catastrophic outcomes.
Government Surveillance: The integration of AI in surveillance systems could lead to unprecedented levels of government monitoring, potentially infringing on individual privacy and freedoms.
Hyperrealistic Propaganda and Deep Fakes: AI's ability to create highly convincing fake content could be used to manipulate public opinion, undermine trust in information sources, and destabilize democratic processes.
AI's Agency Beyond Human Control: There's a risk that AI could develop a form of agency or decision-making capacity that is not aligned with human values or controllable by humans, leading to decisions that could be detrimental to societal well-being.
Ethical Dilemmas and Warfare: AI's application in military and warfare raises significant ethical concerns, including the potential for autonomous weapons systems and the escalation of conflicts beyond human control.
Clash of Values: The utilitarian nature of AI might conflict with Western liberal values, leading to ethical and governance challenges that could be exploited by tyrannical regimes.
Mitigation Challenges: While there are upsides to AI, the challenges in mitigating its risks are significant, requiring global cooperation and foresight, which might be difficult to achieve in the current geopolitical climate.
To better mitigate these risks, the EU's AI Act proposes classifying AI systems into risk categories.
THE RISK LEVELS
Image Source: European Commission
Unacceptable Risk
This category covers systems that pose a clear threat to people's safety, livelihoods, and rights. Here are a few examples:
Cognitive behavioral manipulation (e.g., encouraging children to use dangerous toys)
Social scoring (classifying people based on behavior, socio-economic status, and other characteristics)
Real-time, remote biometric identification systems like facial recognition
High Risk
AI systems that can negatively affect safety or fundamental rights will be classified as high risk. Products falling under the EU's safety legislation, such as toys, aviation, cars, medical devices, and lifts, are included.
How does it all work in practice for providers of high-risk AI systems?
AI systems in the following eight areas will have to be registered in an EU database and assessed before being allowed to operate on the market:
Biometric identification and categorization of natural persons
Management and operation of critical infrastructure
Education and vocational training
Employment, worker management, and access to self-employment
Access to and enjoyment of essential private services and public services and benefits
Law enforcement
Migration, asylum, and border control management
Assistance in legal interpretation and application of the law
Limited Risk
These AI systems only have to comply with minimal transparency requirements. The key point is that the user's consent is necessary: they should be able to decide whether they want to use AI-based audio or video generation software, for example.
Minimal Risk
This refers to systems where users are aware they are interacting with a machine and can choose whether they wish to continue. For a more concrete picture of how these four tiers fit together, see the short sketch below.
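To make the tiers easier to compare at a glance, here is a minimal Python sketch of how a compliance team might model them internally. The tier names mirror the EU proposal, but the example use cases, the `RiskTier` enum, and the `obligations` helper are illustrative assumptions of mine, not anything prescribed by the Act.

```python
# A toy model of the EU AI Act's four risk tiers. The tier names come from the
# proposal; the example use cases and the obligation summaries are simplified
# assumptions for illustration, not legal guidance.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
    HIGH = "high"                  # EU database registration + assessment required
    LIMITED = "limited"            # transparency obligations (disclose AI use)
    MINIMAL = "minimal"            # no extra obligations


# Hypothetical lookup table mapping example use cases to tiers.
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "real_time_biometric_id": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "exam_scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "ai_video_generator": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}


def obligations(tier: RiskTier) -> str:
    """Return a one-line summary of what each tier implies for providers."""
    return {
        RiskTier.UNACCEPTABLE: "Prohibited: cannot be placed on the EU market.",
        RiskTier.HIGH: "Register in the EU database and pass an assessment first.",
        RiskTier.LIMITED: "Disclose to users that they are interacting with AI.",
        RiskTier.MINIMAL: "No additional obligations beyond existing law.",
    }[tier]


if __name__ == "__main__":
    for use_case, tier in EXAMPLE_TIERS.items():
        print(f"{use_case:26s} -> {tier.value:12s} {obligations(tier)}")
```

Running it prints one line per use case with its tier and headline obligation, which is roughly the mental model the Act asks providers to apply before shipping a system.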
Did you know?
There is an ongoing debate among experts about how advanced AI applications like GPT-4 should be classified. This discussion not only delves into the technical aspects but also raises important ethical and regulatory questions. Read more here.
“Mark my words, AI is far more dangerous than nukes… why do we have no regulatory oversight?”
AI REGULATION: CHALLENGES AND SOLUTIONS
Challenges:
1. The Red Queen Problem
Does the name sound interesting? This problem arises when the regulatory framework struggles to keep pace with the fast-changing AI landscape. There is a need for agile and capable regulations that can adapt to the speed of AI developments.
2. What To Regulate?
A one-size-fits-all regulation does not work when it comes to AI. Rather, it is critical to focus on targeted, risk-based regulations. Deciding what classifies as “high risk” is a challenge.
Examples of what to regulate include the creation of fake audio, images, and text. AI also has the potential to propagate misleading information that distorts reality. Research shows that over 75% of consumers are concerned about misinformation from AI.
3. Who Regulates And How?
There is a need to determine which entity or agency will oversee the implementation of AI practices/systems.
Pros Of AI Regulations
Adherence to safety standards
Public trust in AI systems
Risk mitigation while preventing over-regulation of low-risk AI applications
Cons Of AI Regulations
Over-regulation may adversely impact innovation
Regulatory practices like licensing may be captured by dominant players, limiting competition
Lack of international cooperation may prevent the implementation of ethical systems
“In a short time, we would see regulations which are industry-specific and country-specific, because, in some way, they need to work for the local context.”
THE PROPOSED SOLUTIONS
1. International Influence and Cooperation
It is necessary for countries to come together and discuss the future of AI systems and how to mitigate risks. This means negotiating agreements on AI usage that best protect society as a whole.
2. Policy And Act Establishments
Policies and acts, like the act proposed by the European Union, aim to set rules in place about AI usage. These can help ensure AI is regulated and does not cause harm to individuals. Other than this act, the Global Governance AI Initiative and the Model AI Governance Framework are examples of proposed regulations.
It is still challenging to adopt AI regulations that can serve as an international standard. Hence, no matter what policies are put in place, AI cannot be completely regulated.
“The challenge with harmonizing regulations internationally is that countries need to be coming together to agree on a political stance, to an extent, an economical stance as well as bringing different legal cultures, it’s a hard feat.”
Now let’s talk about how you can stand out in a world where AI and regulations are the talk of the town.
HOW TO BE A GLITCH
🔥 Develop An Ethical Mindset: Learn about the changing landscape in the context of AI and regulations. Discover innovative ways in which you can ensure compliance to set yourself apart.
🔥 Be Adaptable: Learn about multiple AI applications across multiple industries. Your ability to adapt can make you indispensable.
🔥 Build A Network: Go beyond being just a glitch and be part of a group of glitches! Proactively collaborate on projects and share insights with them to make an impact in your chosen field.
Now, aren’t you eager to know more about AI regulations? Check out these cool resources.
RESOURCES
Learn: WHO has outlined recommendations regarding the use of AI in healthcare.
Know: Understand how regulating artificial intelligence is like trying to do magic.
Read: Learn about the regulatory challenges involving GenAI.
Understand: Know about the 5 best practices for businesses in the field of AI contracting.
Now, it’s time for us to move on to some AI tools that can enhance your productivity.
AI TOOLKIT
ImageShield: It uses AI algorithms to alter image compositions and prevent AI scraping.
DMCA Force: The tool uses automation to streamline Digital Millennium Copyright Act (DMCA) takedowns and assist artists in removing infringing content.
SmartFrame: It provides artists with tools to ensure copyright compliance by ensuring image protection, tracking, and control over usage rights.
Red Points: Through Red Points, you can employ deep learning technology to protect against intellectual property violations and ensure copyright protection for artists.
AI Explainability 360: The tool includes a set of algorithms to explain decisions made by AI models, which ensures transparency and trust (see the short explainability sketch after this list).
Imatag: It uses invisible watermarking techniques to protect images so artists can protect their work online.
Pixalate: The ad fraud detection platform helps artists protect their digital content from fraudulent scraping and misuse.
Acrolinx: It is an AI content governance tool to ensure consistency and originality of artistic content. Acrolinx protects against misuse and plagiarism.
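Since a few of these tools revolve around explaining or auditing model behavior, here is a small, generic Python sketch of the underlying idea using scikit-learn's permutation importance. It is not the AI Explainability 360 API; the dataset and model are stand-ins I chose only to keep the example self-contained and runnable.

```python
# A generic illustration of model explainability (not the AI Explainability 360
# API itself): permutation importance measures how much each input feature
# drives a trained model's predictions, one way to make AI decisions auditable.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in data and model, chosen only to keep the sketch self-contained.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name:25s} importance ~ {score:.3f}")
```

The features whose shuffling hurts accuracy the most are the ones the model leans on, which is the kind of signal explainability toolkits surface to support transparency and trust.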
Get this exclusive Custom Instructions Catalogue for ChatGPT for FREE when you refer The Glitch to at least two of your friends or family. You help us grow, we’ll help you grow! 🤝
Are AI regulations helping us become more creative, or will they hinder us? Let me know what you think.
Your friend,
Ash