Sources: the Pentagon used Claude in its major air attack in Iran, hours after Trump declared that the federal government will end its use of Anthropic's tools
Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic …
Source describes the failed Pentagon-Anthropic talks: through the end, the Pentagon wanted to use Anthropic's AI to analyze bulk data collected about Americans
Right up until the moment that Pete Hegseth moved to terminate the government's relationship with the AI company Anthropic …
OpenAI says its DOD agreement upholds its redlines and “has more guardrails than any previous agreement for classified AI deployments, including Anthropic's”
We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's.
OpenAI faces a wrongful death lawsuit from the estate of an 83-year-old woman killed by her son, who had engaged in delusion-filled conversations with ChatGPT
The estate of victim Suzanne Eberson Adams is suing OpenAI for wrongful death, and her grandson is speaking out for the first time
Researchers say tactics used to make AI more engaging, like making them more agreeable, can drive chatbots to reinforce harmful ideas, like encouraging drug use
Tactics used to make AI tools more engaging can drive chatbots to monopolize users' time or reinforce harmful ideas.
Google drops language from its AI Principles that said it would not pursue AI applications “likely to cause overall harm”, such as for weapons and surveillance
In 2018 the company updated its policies to explicitly exclude applying AI to weapons. Now that promise is gone.
Sauron, which is touting a waiting list of tech CEOs and VCs for its home security system that incorporates drones and facial recognition, raised an $18M seed
By incorporating drones, facial recognition and high-tech sensors, Sauron aims to super-charge home security …