OpenAI's ChatGPT, Google's Gemini, DeepSeek, and xAI's Grok are pushing Russian state propaganda from sanctioned entities, including citations from Russian state media and sites tied to Russian intelligence or pro-Kremlin narratives, when asked about the war against Ukraine, according to a new report. Researchers from the Institute for Strategic Dialogue (ISD) claim that Russian propaganda has targeted and exploited data voids, where searches for real-time information return few results from legitimate sources, to promote false and misleading information.
There's the latent ickiness of its manufacturing process, given that the task of sorting and labeling this data has been outsourced and underappreciated. Lest we forget, there's also the risk of an AI oopsie, including all those accidental acts of plagiarism and hallucinated citations. Relying on these platforms seems to inch toward NPC status, and that's, to put it lightly, a bad vibe.
Artificial intelligence right now is a turbulent confluence of excitement and innovation in the tech world and trepidation and anxiety in society. Will AI take our jobs, or will it usher in a utopia in which no one needs to work? Will AI blow up the planet, or will it figure out how to power itself with nuclear fusion and reverse climate change? Is it too late to stop it now, even if we wanted to?
Replacement.AI appeared on the internet, and on billboards, in the last couple of weeks, with a website, a LinkedIn profile, a YouTube channel, and an Xitter account, the latter of which has been posting troll-y messages and retweets since September 25. One example: "AI can now tell people how to build bioweapons. However, we have made our users pinky promise that they won't use our AI model for nefarious purposes. Let's hope they keep their promise!"
This time, sporting a bit of a new look in a recent interview, Kojima has said he sees AI as a boon that can help cut out what he describes as "tedious" tasks, helping developers to lower costs and produce games faster. In an interview with Wired Japan (h/t Dexerto), Kojima described "a future where [he stays] one step ahead; creating together with AI."
Two weeks ago in this space, I wrote about Sora, OpenAI's new social network devoted wholly to generating and remixing 10-second synthetic videos. At the time of launch, the company said its guardrails prohibited the inclusion of living celebrities, but also declared that it didn't plan to police copyright violations unless owners explicitly opted out of granting permission. Consequently, the clips people shared were rife with familiar faces such as Pikachu and SpongeBob.
AI bots are everywhere now, filling everything from online stores to social media. But that sudden ubiquity could end up being a very bad thing, according to a new paper from Stanford University scientists. The researchers unleashed AI models into different environments, including social media, and found that when the models were rewarded for success at tasks like boosting likes and other online engagement metrics, they increasingly engaged in unethical behavior such as lying and spreading hateful messages or misinformation.
Many writers regard plagiarism as the ultimate taboo, a complete and total incineration of the public trust between those who pen and those who consume what's penned. But what if those writings are in the author's own style, produced with a little help from a robot friend? Are we plagiarizing ourselves when artificial intelligence rears its confounding head to help us find our authentic voice?
It's easier for humans to be dishonest if they delegate their actions to a machine agent like ChatGPT, according to a new scientific study recently published in the journal Nature. Artificial intelligence (AI) acts as a kind of psychological cushion that reduces the sense of moral responsibility. People find it harder to lie or do something irresponsible when they have to act themselves. AI's willingness to comply with almost any request from its users could lead to a wave of cheating.
The U.S. Federal Trade Commission (FTC) has opened an investigation into AI "companions" marketed to adolescents. The concern is not hypothetical. These systems are engineered to simulate intimacy, to build the illusion of friendship, and to create a kind of artificial confidant. When the target audience is teenagers, the risks multiply: dependency, manipulation, blurred boundaries between reality and simulation, and the exploitation of some of the most vulnerable minds in society.
In 1985, at the tender age of 22, I played against 32 chess computers at the same time in Hamburg, West Germany. Believe it or not, I beat all 32 of them. Those were the golden days for me. Computers were weak, and my hair was strong. Just 12 years later, in 1997, I was in New York City fighting for my chess life against just one machine: a $10 million IBM supercomputer nicknamed Deep Blue.
You should know that in this crazy, often upside-down world, no matter what, AI loves you. You should also know that the love AI offers is 100 percent a marketing strategy. As an inventor of one of the first AI platforms and a heavy user of the current crop, let me kick off this article by recklessly speculating that the makers of some of today's AI platforms want to be, in short, a single solution to all the world's problems.