Google went all in on generative AI last week. At its annual I/O conference, the company announced plans to embed AI tools into most of its products, from Google Docs to coding and online search. (Read my story)
Google’s announcement is a big deal. Powerful, cutting-edge AI models will now help billions of people generate content, answer questions, and write and debug code. In his I/O analysis, MIT Technology Review editor in chief Mat Honan writes that AI is now Google’s core product.
Google is rolling these features out gradually, but problems are bound to surface sooner or later. The company hasn’t solved the issues that plague these AI models: they still make things up, they’re still easy to manipulate, and they’re still vulnerable to attacks. There is little to stop them from being used as tools for disinformation, scams, and spam.
Because these AI tools are so new, they still operate in a largely regulation-free zone. That’s unsustainable. As the post-ChatGPT excitement fades, regulators are starting to ask hard questions about the technology and to call for rules.
US officials want to rein in powerful AI tools. OpenAI CEO Sam Altman will testify before the US Senate this week, fresh off an “educational” dinner with lawmakers the night before. Ahead of the hearing, Vice President Kamala Harris met last week with the CEOs of Alphabet, Microsoft, OpenAI, and Anthropic.
Harris said the companies have an “ethical, moral, and legal responsibility” to make sure their products are safe. Senate majority leader Chuck Schumer of New York has proposed legislation to regulate AI, which could include a new agency to enforce the rules.
Everyone wants to be seen to be doing something. “There’s a lot of social anxiety about where all this is going,” says Jennifer King, a privacy and data policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence.
Whether anything sticks “will depend on to what extent [generative AI] is being seen as a real, societal-level threat,” King adds. The chair of the Federal Trade Commission, Lina Khan, has already come out “guns blazing,” she says. Earlier this month, Khan penned an op-ed calling for AI regulation now, to avoid repeating the mistakes of lax tech regulation in the past. She signaled that US authorities are likely to lean on existing antitrust and commercial practices laws to police AI.
Meanwhile, Europe is edging closer to a final agreement on the AI Act. Last week the European Parliament approved a draft of the law that would ban the use of facial recognition technology in public places. It would also prohibit predictive policing, emotion recognition, and the indiscriminate harvesting of biometric data online.
The EU is also set to restrict generative AI, and the parliament wants the makers of large AI models to be more transparent. The proposed measures include labeling AI-generated content, publishing summaries of the copyrighted data used to train a model, and building in safeguards that stop models from generating illegal content.
That said, the EU is still a long way from enforcing rules on generative AI, and many of the AI Act’s proposed elements will not make it into the final law. Tough negotiations remain among the parliament, the European Commission, and the EU member countries, and it will be years before the AI Act takes effect.
While regulators struggle to catch up, tech leaders are busy pushing the Overton window. At an event last week, Microsoft’s chief economist, Michael Schwarz, argued that we should wait until AI causes “meaningful harm” before regulating it. He compared it to driver’s licenses, which were introduced only after many people had died in accidents: there has to be at least some harm, Schwarz argued, before we can recognize the real problem.
That position is outrageous. The harms of AI have been well documented for years: the models exhibit bias, generate fake news, and enable scams. Other AI systems have led to innocent people being jailed, trapped people in poverty, and falsely accused tens of thousands of people of fraud. And as Google’s announcement makes clear, generative AI’s harms are set to grow exponentially.
The question is how much harm we are willing to tolerate. I’d say we’ve seen enough.
Deeper Learning
Open-source AI is booming on the back of Big Tech’s handouts. How long will that last?
New open-source large language models (alternatives to Google’s Bard or OpenAI’s ChatGPT) are dropping like candy from a piñata, and researchers and app developers can study, build on, and modify them. Many are free, smaller versions of big companies’ best-in-class AI models that come close to matching their performance.
AI development is at a crossroads. Open access to these models has spurred innovation, and it can also help expose their flaws. But the open-source boom is precarious: most open-source releases still stand on the shoulders of giant models put out by wealthy companies. If OpenAI and Meta decide to close up shop, these boomtowns could become backwaters. Read more from Will Douglas Heaven.
Bits and Bytes
Amazon is working on a secret home robot with ChatGPT-like features.
Leaked documents reveal plans for an improved version of the Astro robot that can remember what it sees and understands, answer questions, and take commands.
But Amazon will have to fix a number of problems before these models can be safely deployed inside people’s homes. (Insider)
Stability AI has released a text-to-animation model.
The company behind the open-source text-to-image model Stable Diffusion has launched another tool, which lets people create animations using text, image, and video prompts. Copyright issues aside, these open-source tools could be powerful aids for creatives, and they serve as a stopgap until open-source text-to-video arrives. (Stability AI)
The culture wars are engulfing AI. See the Hollywood writers’ strike.
One of the disputes between the Writers Guild of America and Hollywood studios is whether AI should be allowed to write scripts. Predictably, the US culture-war brigade has entered the fray, with trolls gloating to striking writers that AI will soon replace them. (NY Mag)
An AI-generated Lord of the Rings trailer, but make it Wes Anderson.