MELLOW MOTIVE
AI Death Clock and Canadian Media Goes After OpenAI
Meta allows military access to its open-source AI models
AI Riddle:
I can be broken without being touched. I can be given and kept at the same time. I am fragile, yet I build trust.
What am I?
Today’s Motive:
⏰ AI death clock app claims to help predict the day you'll bite the dust.
🗞️ Which Canadian media companies filed a lawsuit against OpenAI?
🦙 Meta AI allows access to its open-source AI models for the U.S. military.
🤑 Check out the latest AI deals below. 💸
🛠️ Get the scoop on the latest AI tools in the AI Tools at a Glance section.
AI TECH
⏳ AI Death Clock: The Controversial Tool Predicting Your Lifespan
Image created in Midjourney
⏰ The Death Clock app is an AI death calculator tool designed to estimate users' life expectancy and encourage healthier lifestyle choices. Developed by Brent Franson, the app utilizes artificial intelligence and data from over 1,200 scientific studies involving approximately 53 million participants to provide personalized longevity insights and life outcomes.
Summary:
Personalized Predictions: After users complete a questionnaire covering aspects like age, diet, exercise, stress levels, sleep patterns, and family history, the app calculates an estimated date of death, life expectancy, biological age, and health score.
Customized Longevity Plan: Based on the collected data, Death Clock offers tailored recommendations to extend your lifespan. These suggestions span a whole host of lifestyle changes meant to promote well-being and a longer life, but this feature comes with a monthly fee.
Health Data Integration: You can upload personal health documents, such as blood tests and genetic profiles, to enhance the accuracy of the app's assessments and recommendations.
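The questionnaire-to-estimate flow described above can be pictured with a toy sketch. To be clear, this is purely a hypothetical illustration: Death Clock's actual model is proprietary and trained on data from roughly 1,200 studies, and the baseline and adjustment values below are made up for demonstration.

```python
from datetime import date, timedelta

# Hypothetical illustration only: the baseline and per-habit adjustments
# below are invented for this sketch, not taken from the Death Clock app.
BASELINE_YEARS = 80  # assumed population-average life expectancy

# Assumed adjustments (in years) applied when a questionnaire flag is true.
ADJUSTMENTS = {
    "smoker": -7,
    "regular_exercise": +3,
    "high_stress": -2,
    "poor_sleep": -2,
    "healthy_diet": +2,
}

def estimate_death_date(birth_date: date, habits: dict) -> date:
    """Return a toy estimated date of death from lifestyle flags."""
    years = BASELINE_YEARS
    for habit, delta in ADJUSTMENTS.items():
        if habits.get(habit):
            years += delta
    # Convert years to days (365.25 accounts for leap years on average).
    return birth_date + timedelta(days=round(years * 365.25))

estimate = estimate_death_date(
    date(1990, 6, 15),
    {"smoker": False, "regular_exercise": True, "healthy_diet": True},
)
print(estimate.year)  # baseline 80 + 3 + 2 = 85 years
```

The real app presumably weights far more inputs (and the uploaded health documents) than this handful of boolean flags, but the basic shape, answers shifting a baseline estimate up or down, is the same.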
Purpose and Motivation:
The primary goal of Death Clock is to motivate individuals to adopt healthier behaviors by providing a tangible estimate of their lifespan based on current health habits. By highlighting areas for improvement, such as daily calorie intake, the app encourages proactive health management, aligning with the concept of "Medicine 3.0," which emphasizes preventive care and personal wellness.
The Motive:
While the app aims to promote healthier living with personalized suggestions, users should be aware of privacy concerns when sharing sensitive health information. It's important to review the app's privacy policy to understand how personal data is managed and protected.
Also, many users complain that the app's advice covers broad aspects of health while drawing on only a narrow slice of health data, and the monthly fee could be a cash grab. There is a plethora of free health advice online, and an AI model is unlikely to be the best tool for predicting death or estimating lifespan.
But it could be fun to check out. Try it here. I’m personally not freaking out at the date it provided me. Not freaking out at all…
Today’s top AI tool at a fraction of the price.
See oncely.com for more deals on AI tools.
OpenAI
📰 Canadian Media Giants Take Legal Action Against OpenAI: What It Means for AI and Journalism
Image created in Midjourney
📰 A coalition of Canadian news organizations has filed a lawsuit against OpenAI, the developer of ChatGPT, alleging unauthorized use of their content to train its AI models. The plaintiffs include CBC/Radio-Canada, Postmedia, Metroland, the Toronto Star, The Globe and Mail, and The Canadian Press.
Check out the full story here.
Summary:
Allegations of Copyright Infringement: The media companies claim that OpenAI has been "scraping large swaths of content" from their publications without obtaining permission or providing compensation. They argue that this practice undermines their investments in journalism and violates copyright laws.
OpenAI's Response: OpenAI asserts that its models are trained on publicly available data and that this approach is "grounded in fair use and related international copyright principles." The company also points to collaborations with news publishers, offering attribution and links to their content in ChatGPT search, along with options for publishers to opt out.
Legal Demands: The plaintiffs are seeking damages and a permanent injunction to prevent OpenAI from using their material without consent.
The Motive:
This lawsuit is part of a broader trend of legal actions against AI companies concerning the use of copyrighted material for training AI systems. Similar cases have emerged in the United States, including lawsuits by The New York Times and other media outlets against OpenAI.
OpenAI has recently struck licensing deals with other media companies to train on and surface their material, and it is paying well for those rights.
It does raise a good question: if U.S. media companies are getting paid to help train AI models, is this lawsuit a maneuver by Canadian media to secure its own revenue stream?
AI Tools at a glance:
💻️ Loopple.com: Create your website in 30 seconds for free with AI. No coding or design needed; customize and launch your site fast.
🏢 Rivalsense.co/en: RivalSense AI connects to 80+ sources to deliver deep company insights that will help your business and career.
🎦 Melies.co: Transform your ideas into Hollywood-style movies.
META AI
🦙 Meta Allows Military Access to Its Open-Source AI Models
🎖️ In November 2024, Meta Platforms Inc. announced it would provide U.S. government agencies and defense contractors access to its open-source large language model, Llama, for national security applications. The company is partnering with firms like Booz Allen Hamilton, Leidos, and Lockheed Martin to support U.S. military objectives.
Meta has faced plenty of criticism over its open-source AI models and the risks of overseas misuse of American AI.
Meta’s statement here.
Summary:
Response to Global Developments: This policy change comes in response to reports that Chinese research institutions linked to the People's Liberation Army had utilized Meta's publicly available Llama model to develop AI tools for potential military applications.
Policy Shift: Meta's decision to allow the U.S. military access to its Llama AI models represents a departure from its earlier stance against military applications. By granting access to U.S. defense agencies, Meta aims to ensure that its technology is used strategically to support national security interests.
National Security Applications: The Llama models are expected to enhance various national security operations, including streamlining complex logistics, tracking terrorist financing, and reinforcing cyber defenses.
The Motive:
Meta's collaboration with the U.S. military could set a precedent for other tech companies to support defense applications with advanced AI, potentially accelerating defense research while promoting America's economic and security interests. However, this partnership also raises ethical considerations regarding the use of AI in military contexts and the global AI arms race.
How will Meta react if its models end up powering autonomous weapons systems or other ethically fraught warfare applications?
In summary, Meta's decision to open its Llama AI models to U.S. defense agencies signifies a notable intersection between advanced AI technologies and national security, reflecting the evolving role of tech companies in addressing complex global challenges.
Learn AI in 5 Minutes a Day
AI Tool Report is one of the fastest-growing and most respected newsletters in the world, with over 550,000 readers from companies like OpenAI, Nvidia, Meta, Microsoft, and more.
Our research team spends hundreds of hours a week summarizing the latest news, and finding you the best opportunities to save time and earn more using AI.
Hello, wonderful readers! How many AI emails would you like per week?
AI Riddle Answer: A promise.