AI legislation asks, “If not now, when?” The world must regulate generative AI before it starts taking away jobs.
Since its emergence, the debate around AI has been more about job losses than about productivity gains, even as the technology reshapes daily operations. This week, UN Secretary-General Antonio Guterres backed the idea of an international AI watchdog.
“Alarm bells over generative AI are deafening, and they are loudest from the developers who designed it,” Guterres told reporters.
Sam Altman, OpenAI co-founder and CEO, discussed AI regulation with a media outlet in India last week. Altman argued that the technology is still at an early stage of its growth and that any regulation should therefore be aimed at large companies.
Powerful economies and digital giants are working out what a regulatory framework should look like. Google has introduced SAIF, its Secure AI Framework, a conceptual framework for securing AI systems. On Wednesday, EU lawmakers advanced the world’s first comprehensive AI law.
According to media sources, European Commissioner for the Internal Market Thierry Breton said that AI raises societal, ethical, and economic questions, but that now is not the moment to hit a pause button; rather, it is about acting fast and taking responsibility.
Republic has reported on Google’s decision to postpone the EU launch of its chatbot “Bard” after authorities voiced concerns about the company’s data protection compliance.
Generative AI is here to stay, and within a few years an estimated 70 per cent of the globe will have used it.
Manish Sinha, Chief Marketing Officer of STL, told Republic that an international watchdog monitoring AI could set global norms, enforce accountability, and address cross-border challenges.
“It’s time for a watchdog to monitor the technology’s growing hazards. Super-intelligent AI systems such as ChatGPT need a body to regulate their risks, much like the International Atomic Energy Agency promotes the peaceful use of nuclear energy,” Karma Global MD & CVO Pratik Vaidya told Republic.
“Delays will lead to disinformation and privacy abuses. In the meantime, if technology leaders worldwide can agree on a moratorium, it would reduce the profound risks to humanity and society and hold people accountable for new technologies before mistakes are made,” Vaidya said.
“Transparency and accountability standards can detect and mitigate fraudulent behavior,” Mudrex CTO & Co-Founder Alankar Saxena told Republic.
An AI watchdog is needed for five main reasons
- Data Privacy
How AI models collect and use data is a central concern, and it is a key driver behind major economies’ push for a global AI watchdog. Data fed into AI algorithms can be sensitive and is open to misuse.
- Generative AI Potential
The full capabilities of generative AI are still unknown, and the technology and its byproducts will only become more powerful and flexible over time. Fast-growing companies such as OpenAI have pushed major players to adopt generative AI even though its effects remain uncertain.
- AI Bias
AI-driven hiring algorithms have already prevented certain applicants from getting jobs. Experts are also worried about AI bias more broadly, since models feed on social data that carries existing prejudice.
- Geopolitical Protection
Regulating AI models would unite nations against the most dreaded prospect: AI-driven warfare. Without such regulation, there is no way to know who is using AI, or whether it is being used responsibly.
- Automation Push
AI will accelerate automation across industries, which will inevitably affect jobs.