- MELLOW MOTIVE
Apple Intelligence Release Update and OpenAI Agreement with U.S. Gov
Plus, EU's new AI Act in effect
Today’s Motive:
📱 New updates on Apple Intelligence release.
🔈️ Sam Altman shares new details on OpenAI’s agreement with the U.S. government.
⭐️ EU’s AI Act is now in effect. What are the details?
AI News
📱 Apple Intelligence Release Date: When Will AI Features Arrive on iPhone?
Apple Intelligence
The release of Apple Intelligence is reportedly delayed and is expected to miss the upcoming software overhauls for Apple products. Full article from reuters.com, link here.
One rumor suggests that Apple Intelligence will be made available for testing to software developers within the coming days.
In late June, Apple announced a delay in AI features for Europe, citing new EU tech regulations—a trend observed among many tech companies.
Apple plans to start introducing Apple Intelligence through software updates in October. The AI features are set to be released a few weeks after the launch of iOS 18 and iPadOS 18, which is scheduled for September.
Apple Intelligence will be compatible with the iPhone 15 Pro, iPhone 15 Pro Max, and iPad and Mac models with the M1 chip.
The Motive:
Apple is known for releasing its products without significant technical issues, but with shrinking sales, will Apple miss out on the AI hype? One delay often leads to another, and Apple is diving into the AI game headfirst with Apple Intelligence.
📰 OpenAI Partners with U.S. AI Safety Institute: What Does This Mean For OpenAI?
OpenAI waiting for U.S. Congress
📧 In a statement on X, Sam Altman announced that OpenAI will work with the U.S. AI Safety Institute to help ensure AI safety, providing the institute with early access to its next major AI model.
a few quick updates about safety at openai:
as we said last july, we’re committed to allocating at least 20% of the computing resources to safety efforts across the entire company.
our team has been working with the US AI Safety Institute on an agreement where we would provide… x.com/i/web/status/1…
— Sam Altman (@sama)
4:34 AM • Aug 1, 2024
This move is unsurprising given the previous turmoil inside OpenAI, the ongoing list of public concerns, and the back-and-forth between the U.S. government and the company.
Concerns about OpenAI’s safety practices came to light earlier this year, with whistleblowers and employees claiming the company prioritizes creating flashy products over AI safety.
Sam Altman also shared that OpenAI has voided non-disparagement terms for current and former employees, allowing them to voice their concerns publicly.
The Motive:
Drama, drama, and more drama. There is always something to write about with OpenAI. The company has faced nothing but backlash and allegations since the beginning of this year. At least it appears OpenAI is trying to shed its old skin and take a new path.
📰 What Are the European Union's New AI Laws?
EU’s new AI act
🧑‍⚖️ The European Union's risk-based regulation for artificial intelligence applications took effect on August 1st, bringing clarity and a slew of new laws. The laws will be implemented in stages, with the final regulations coming into effect in 2026. Full report from the European Commission here.
The act classifies AI applications on a scale from unacceptable risk to minimal risk. Most low-risk AI activities will not be regulated under the new act.
The EU has introduced a risk pyramid: Unacceptable risk, high risk, limited risk, and minimal risk.
Unacceptable risk covers AI systems that pose a clear threat to people's safety, livelihoods, and rights. The EU cites as an example voice-assisted toys that encourage dangerous behavior.
High-risk AI includes applications in robot-assisted surgery, education, critical infrastructure such as transportation, CV-sorting software for recruitment, credit scoring, and AI tools for searching court rulings.
Limited risk carries AI transparency obligations: people must be informed when they are communicating with AI chatbots, AI-generated content must be identifiable, and AI-generated text or video intended to inform the public must be labeled as AI-generated to curb online deepfakes.
Minimal risk covers applications such as AI-enabled video games and spam filters.
The Motive:
The EU has been a hotbed of turmoil when it comes to regulating AI. Most big tech companies have turned their backs on the EU, citing a lack of clarity and misguided regulation.
The rollout of the new regulations comes just a week after Meta announced it will halt AI training in the EU and withhold deployment of new models there.
Will the new act be enough to bring big tech back, and does the EU even care to have them? Only time will tell.