
New AI tools for editing music and film are being released at such a rapid pace that it's hard to track exactly what's important and what's just the latest tweak. But a few established tools recently made something of a quantum leap.
For instance, you can talk directly to ChatGPT and get it to “write” a song with decent lyrics and music using only a few prompts. Sure, it’s derivative of many other songs, but the algorithms are designed to produce the most sought-after beats and sound. This YouTube video (youtube.com/shorts/z9vBJdhOiOk) gives an idea of how simple it is. Considering how mediocre most popular music sounds to me, it could be a hit!

In video production, Veo2 can generate entire films with very little direction from creators. This YouTube video (youtube.com/watch?v=n6sIJNBg52A) illustrates just how real it looks. The Why Files, a YouTube channel I enjoy, already uses AI-generated video extensively to illustrate its stories, and some are incredibly well done. This particular WF link leads to a story on the potential of AI and quantum computing.
Ironically, a recent Nielsen survey of 6,000 people on AI in media found that 55% of audiences aren’t comfortable with it, according to a story in Forbes magazine. The article summarizes, “… without transparency, audiences are losing trust in brands, and multicultural audiences — who demand cultural authenticity — may disengage entirely.”

Native American audiences are particularly distrustful, with 56% distrusting brands that heavily use AI. Further, 55% of Black respondents were concerned about discrimination and bias in AI, for good reason. Early image programs routinely gave discriminatory descriptions of Black people.
Four of five respondents wanted media companies to inform them if AI is being used in an article, image or video.
While users like the tools for personal use, they reject the use of AI in media because of errors and hallucinations, as shown in the next news item.
Apple AI Generates Fake Headlines
AI news-editing programs still have a long way to go before they match the skills of human editors, as demonstrated by the latest disaster for a news provider, in this case Apple, according to a story on cnn.com.
The headlines were made-up interpretations of news stories that Apple Intelligence fed to users who opted into the beta AI service. They included a summary of a BBC story stating that Luigi Mangione, who was charged with murdering UnitedHealthcare CEO Brian Thompson, had shot himself. Three New York Times articles, summarized in a single push notification, falsely stated that Israeli Prime Minister Benjamin Netanyahu had been arrested.
Additional flubs were the last straw: a summary of a Washington Post notification falsely said that Pete Hegseth had been fired, that Trump tariffs had influenced inflation, and that Pam Bondi and Marco Rubio had been confirmed to the Cabinet. Apple pulled its AI-generated headline summaries after that.
The “intelligence” news service is the latest AI deployed without the benefit of extensive training. In essence, it’s built on ChatGPT and customized for Apple, but Apple’s version wasn’t sophisticated enough to suppress hallucinations, so it’s back to the drawing board.
MIT’s Top 5 Predictions for AI in 2025
TechnologyReview.com, MIT’s technology magazine, usually captures the most interesting trends in AI, and this year’s top five predictions are not what you might expect. Here’s what’s coming:
1 Generative virtual playgrounds are in the works at at least four companies. Three will use them to build on-the-fly video-game environments for the most interactive games imaginable. The fourth is being developed by World Labs, co-founded by Fei-Fei Li (the mother of AI imaging), to give machines the ability to interpret and interact with the world, potentially making robots far more able to operate independently.
2 Large language models that “reason,” with integrated agents that remember recent interactions with users, are becoming far more practical for human interaction. (Siri doesn’t get most of what you ask it, but imagine a tool that does.) Both OpenAI and Google are training agents on a slew of tasks.
3 AI scientific discoveries will accelerate. Last year, AI protein-folding advancements won three scientists the Nobel Prize in Chemistry. MIT expects many more innovations to be recognized.
4 AI companies will work more closely with the US military. Not only will AI companies help advance defense technologies, but the Defense Department will provide classified military data to train models.
5 Nvidia chips will have competition. It’s been a long time coming, but several other companies are vying for the AI chip market, including Amazon, Broadcom, AMD and Groq, with additional innovations in their designs.
While these developments are mostly intriguing at this point, they will lead to more breakthroughs and practical advances that we probably can’t yet imagine. The primary concern is the use of AI by the military, but I’m going to try to be optimistic. If it’s like the space program, people could ultimately benefit more from the resulting knowledge.