9 Comments

The truly misinformative information will never be addressed, as it's too powerful for the powerful to let go of.

Newsom will use this to ban simple memes, even satirical impressionists! Rich Little would be in jail!

Only the most mentally challenged Democrat could have mistaken the Kamala 'Shallow Fake' video for her own words. Besides, Kamala is already a parody of herself!

Timely, comprehensive, and well expressed, thank you! I think Education holds the key to appropriate use of All machines, including AI. Our education system must cultivate more of a Whole Human Being, the result of which is the finest of Occidental Rational Thought and the finest of Oriental Intuition: someone able to smell a Rat like Bill Gates (who purchased a large position in AI two years ago) and his dystopian WEF/WHO goals in REAL TIME, a mile away; to intuit Bio Weapons from a mile away. A strong Meditation practice balances a strong, clear Rational Mind, making it possible to detect danger whenever and wherever one travels upon the Earth. This includes any and every creation, distortion, or abuse that Bill Gates of Hell's AI will ever produce.

Sit in silence and breathe, We Got This...

This is very concerning.

In my techno-fiction book, The Bel Algorithm, I predicted this technology in 2018. The main antagonist is a presidential candidate whose speeches and appearances are being edited on video in real time.

This is related but a bit off topic; maybe someone with a better grasp of AI can answer this question. If AI, to my understanding, learns by ingesting enormous amounts of data from the digital world, and if an increasing share of that data is now AI generated, then these systems could be absorbing faulty data generated by other AI models. A bit like feeding cows other rendered cow carcasses and then being surprised by the emergence of mad cow disease. Are really good AIs able to differentiate and ignore data generated by other really good AIs, or are we headed for the ultimate GIGO AI universe, where a perfect deepfake or a slightly but perfectly altered speech or comment is perceived by other AIs to be true? If AIs can't tell altered from real, then maybe we should avoid the future nightmare and just pull the plug now. Just asking; this is way outside my expertise.

Dick Minnis removingthecataract.substack.com
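A minimal sketch of the feedback loop this question describes, under toy assumptions (the "model" is just an empirical token distribution; the vocabulary size, sample size, and Zipf-shaped starting data are illustrative, not taken from the article): each generation is trained only on text sampled from the previous generation's model, so any rare token that happens not to be sampled can never reappear, and diversity shrinks over time, the dynamic usually called model collapse.

```python
import random
from collections import Counter

random.seed(42)

VOCAB_SIZE = 1000   # distinct tokens in the original "human" data
SAMPLE_SIZE = 2000  # how much text each generation is trained on
GENERATIONS = 10

# Generation 0: human-written data with a Zipf-like long tail of rare tokens.
tokens = list(range(VOCAB_SIZE))
weights = [1.0 / rank for rank in range(1, VOCAB_SIZE + 1)]
data = random.choices(tokens, weights=weights, k=SAMPLE_SIZE)

for gen in range(GENERATIONS):
    counts = Counter(data)              # "train": estimate token frequencies
    print(f"generation {gen}: {len(counts)} distinct tokens survive")
    vocab, freqs = zip(*counts.items())
    # "generate": the next generation's training data comes entirely from the
    # current model's output, so tokens that went unsampled are gone for good.
    data = random.choices(vocab, weights=freqs, k=SAMPLE_SIZE)
```

Running this prints a distinct-token count that only falls, a toy-scale version of the GIGO spiral the comment worries about; real systems try to mitigate it by filtering for human-written or verified data, but reliably detecting AI-generated text remains an open problem.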
