
Pentagon's AI Deepfake Project: What You Need to Know.

Elon Musk reveals the Tesla Cybercab, and parents sue a school over an AI-related punishment.

AI Riddle:

I have lakes with no water,
mountains with no stone,
and cities with no buildings.

What am I?

Today’s Motive:

  • 🗽 The Pentagon is creating an AI Deepfake Project. Why?

  • 🚗 Elon Musk reveals the CyberCab, but not everyone is convinced.

  • 🏫 Parents of a cheating child are suing the school over a bad letter grade and AI use.

🤑 Check out the latest AI deals below. 💸

🛠️ Get the scoop on the latest AI tools in the AI Tools at a Glance section.

AI NEWS

🗽 Pentagon's AI Deepfake Project: What You Need to Know.

Image created in Midjourney

✨ The U.S. Department of Defense is advancing the development of AI-generated deepfake technology for deployment on social media platforms.
A procurement document reviewed by The Intercept reveals that U.S. Special Operations Command (SOCOM) is seeking companies capable of creating virtual personas so convincing that neither humans nor detection algorithms can distinguish them from real users.
Full story on The Intercept, written by Sam Biddle.

Summary:

  • These deepfake profiles would include synthetic selfies with matching artificial backgrounds, crafting a virtual environment whose artificiality is undetectable by social media algorithms.

  • SOCOM has an interest in software akin to StyleGAN, a tool released by Nvidia that powers the website “This Person Does Not Exist” (see the generator sketch after this list).

  • In 2022, Meta and Twitter removed a propaganda network using fake accounts operated by U.S. Central Command, including some with profile pictures generated with methods similar to those outlined by JSOC.
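
For the curious, here is a minimal sketch of the generator idea behind tools like StyleGAN: a neural network maps a random latent vector to an image, which is how sites like “This Person Does Not Exist” can produce an endless supply of synthetic faces. The toy model below is untrained and purely illustrative (the class name and layer sizes are our own assumptions, not Nvidia's actual architecture), so its output is noise rather than a face, but the latent-to-image mechanics are the same.

```python
# Toy illustration of a GAN generator: a random latent vector goes in,
# an image tensor comes out. This is NOT StyleGAN itself and is untrained;
# it only demonstrates the latent-to-image idea described above.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            # Project the latent vector to a 4x4 feature map...
            nn.ConvTranspose2d(latent_dim, 256, kernel_size=4, stride=1, padding=0),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            # ...then upsample step by step to 8x8, 16x16, 32x32.
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1),
            nn.Tanh(),  # RGB values in [-1, 1]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z has shape (batch, latent_dim); reshape to (batch, latent_dim, 1, 1).
        return self.net(z.view(z.size(0), -1, 1, 1))

gen = TinyGenerator()
z = torch.randn(1, 128)   # each random z corresponds to a different "identity"
fake_image = gen(z)       # tensor of shape (1, 3, 32, 32)
print(fake_image.shape)
```

The real StyleGAN adds a mapping network, style modulation, and much higher resolution, and is trained on huge photo datasets, which is what makes its faces convincing enough to fool people.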

Why do this? Special operations forces plan to use the technology to gather information from public online forums.

However, this initiative raises ethical concerns, as national security officials have previously identified state-sponsored deepfakes as a significant threat—particularly when utilized by adversarial nations.

Critics argue that the U.S. adoption of offensive deepfake technologies may encourage global proliferation and normalization among other governments.

The Motive:

For the past couple of years, U.S. government officials have repeatedly warned against the use and propagation of deepfakes, with some officials going so far as to call them a national security threat.

If deepfake use by the public and by governments continues to spread, and people increasingly recognize it is happening, what will the outcome be?
Why use social platforms at all if you cannot trust what you read and see, and you're potentially being manipulated by governments?

Today's top AI tool at a fraction of the price.

Image provided by oncely.com

See oncely.com for more deals on AI tools.

AI INNOVATION

🚗 Elon Musk Shows Off His Robotaxi with Skepticism Attached.

Image from Tesla

⚡️ Elon Musk has jumped into the autonomous vehicle industry with both feet, introducing the Cybercab and Cybervan, affordable options that have drawn critical scrutiny.

The Cybercab represents a significant shift in Tesla's focus from mass-market electric cars to AI-driven autonomous vehicles. Various investors are now scratching their heads over which direction Elon is guiding Tesla.
Full article here from Reuters.com.

Summary:

  • The two vehicles are the Cybercab, slated to be priced below US$30,000 with production expected before 2027, and the Cybervan, a larger model capable of transporting up to 20 passengers.

  • The Cybercab relies on AI and cameras for navigation, rather than hardware such as lidar and radar, which are commonly used by Tesla’s competitors in the autonomous vehicle sector.

The Criticism:

Tesla's approach relies exclusively on AI and camera-based navigation, using end-to-end machine learning models that convert visual data directly into driving decisions, and it deliberately excludes hardware such as lidar and radar that competitors commonly use. It is a huge cost-cutting measure.
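
To make the "end-to-end" idea concrete, here is a minimal sketch of what such a pipeline looks like: a single neural network takes raw camera pixels and outputs driving commands directly, with no separate lidar or radar fusion stage in between. Everything here (the model name, layer sizes, and two-value control output) is an illustrative assumption, not Tesla's proprietary system.

```python
# Illustrative end-to-end vision-to-control network: camera frame in,
# steering and acceleration out. A toy sketch for explanation only.
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional backbone extracts features from one RGB camera frame.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Head maps visual features straight to driving commands.
        self.head = nn.Sequential(
            nn.Linear(48, 64), nn.ReLU(),
            nn.Linear(64, 2),   # [steering angle, acceleration]
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(frame))

model = EndToEndDriver()
frame = torch.rand(1, 3, 224, 224)   # one RGB camera frame
controls = model(frame)              # shape (1, 2): steering, acceleration
print(controls)
```

The sketch also hints at the "black box" problem discussed below: because the mapping from pixels to controls is learned in one piece, there is no intermediate, human-readable perception output to inspect after a failure.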

In contrast, competitors such as Waymo, Amazon’s Zoox, General Motors’ Cruise, and several Chinese firms utilize similar AI technologies but augment them with redundant systems—including radar, lidar, and advanced mapping—to enhance safety and meet regulatory standards for driverless vehicles.

Without the layered technologies used by its peers, Tesla's system may be hindered by so-called "edge cases," scenarios that self-driving systems and their human engineers struggle to anticipate. And because the end-to-end AI model functions as a "black box," it is nearly impossible to diagnose failures when accidents occur.

Skepticism persists regarding Tesla’s ability to achieve true autonomous driving within the timelines proposed by Musk. Many believe Tesla is already far behind its competitors.

The Motive:

Elon is stepping into an already crowded market, and companies such as Waymo are already on the streets in many U.S. cities.
Earlier in the year, Elon stated that he would increase the company's focus on Tesla's EV production.

However, subsequent abrupt cost-cutting measures—including mass layoffs—suggest a diversion of investment away from essential EV manufacturing priorities such as battery development, gigacasting, and the expansion of Tesla’s Supercharger network.

AI Tools at a Glance:

AI NEWS

🏫 Parents Sue School Over AI Use in Student Punishment: Legal Battle Unfolds.

Image created in Midjourney

🧑‍⚖️ A Massachusetts school district is facing a lawsuit from the parents of a student who was disciplined for using an artificial intelligence (AI) chatbot to complete an assignment.

The lawsuit alleges violations of the student's personal and property rights and of his liberty to acquire, possess, maintain, and protect his right to equal educational opportunity.
Full article at arstechnica.com, here.

Summary:

  • The student admitted to using an AI tool to generate ideas and create portions of the notes and script for the project he submitted, without proper citation or approval.

  • School officials claim to have given guidelines around AI usage. They note that in the fall of 2023, students were provided with a policy on academic dishonesty and AI expectations, which stipulated that students shall not use AI tools without permission; the incident happened that December.

  • The parents claim their child did not take someone else's work or ideas and pass them off as his own; their child "used AI, which generates and synthesizes new information."

  • The plaintiffs argue the teacher and principal exceeded and abused the authority granted to them, and they state that the low mark on the paper would hurt the student's chances of future admission to an Ivy League school.

  • The school maintains that the discipline was appropriate and aligned with school guidelines, highlighting that there was no suspension or expulsion and that the family's concern revolves around a poor letter grade.

  • It should be noted that the student did not receive a zero on the project, just a low mark that lowered his grade point average.

The Motive: 

Typically, if you get caught cheating on an exam or paper in school, or fail to use proper attribution, you get in trouble. But not in this kid's house: you get your parents to sue the school for the letter grade you want.

The parents and their lawyer lean on the claim that AI-generated text is completely original and independent of credited works, but that is not the case.
These AI chatbots synthesize information drawn from their training data, much of it scraped from the internet, to generate responses; they do not produce novel ideas autonomously. Here is a great article highlighting research conducted by Apple on how these LLMs operate.
That is why numerous companies are suing AI developers, or partnering with them, to regulate the use of their proprietary information.
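
To make that point concrete, here is a deliberately tiny "language model": a bigram sampler built from a two-sentence corpus. Modern chatbots are vastly larger transformer models, and this sketch is entirely our own illustrative assumption rather than any vendor's code, but it shows the basic mechanism: generated text is a statistical recombination of patterns seen in the training material, not ideas produced from nowhere.

```python
# Toy bigram "language model": generation just re-samples word transitions
# observed in the training text, so every output recombines the source
# material. Real LLMs are far larger, but their output is likewise
# synthesized from patterns in their training data.
import random
from collections import defaultdict

training_text = (
    "students use ai tools to write notes and scripts "
    "schools write policies about ai tools and academic honesty"
)

# Record which word follows which in the training text.
transitions = defaultdict(list)
words = training_text.split()
for current, following in zip(words, words[1:]):
    transitions[current].append(following)

def generate(start: str, length: int = 8) -> str:
    """Sample a short sequence by following observed word transitions."""
    out = [start]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:      # dead end: no continuation was ever observed
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("students"))
```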

AI Riddle Answer: A map.

Mellow Motive

Want to give a shoutout to Mellow Motive or send us your feedback? Hit us up at [email protected]. Have a wonderful day.
