VOICE ARCHIVE
Ian Hogarth (@soundboy) · 29 posts
2026-02-11
This is an incredibly ambitious UK company working to change the entire paradigm of inference compute. Go @jamesdacombe!
2026-02-11 View on X
Financial Times

Docs: London-based AI chip startup Olix, founded by James Dacombe, CEO of brain monitoring startup CoMind, raised $220M led by Hummingbird at a $1B+ valuation

London-based Olix targets development of AI chips that are faster and cheaper than Nvidia's  —  A 25-year-old British entrepreneur …

2025-11-25
Silicon Valley twitter pundits loved to diss Google last few years, but it was pretty obvious if you were paying attention they had a few critical resources 1/ Demis/GDM, 2/ TPUs, 3/ huge cash pile 4/ founder control - was a matter of time before things came together.
2025-11-25 View on X
Spyglass

Google is starting to bridge OpenAI's product moat, like with Gemini's “dynamic view” option, which converts a text answer into an interactive, visual output

I have a simple method for my own AI rankings: product delight.  That is, when I use the various services from the players in AI …

2025-01-28
Is the main thing standing between Huawei and Nvidia access to ASML machines?
2025-01-28 View on X
Bloomberg

Sam Altman says DeepSeek's R1 is an “impressive model, particularly around what they're able to deliver for the price” and OpenAI “will pull up some releases”

OpenAI Chief Executive Officer Sam Altman welcomed the debut of DeepSeek's R1 model in a post on X late on Monday.

Is the main thing standing between Huawei and Nvidia access to ASML machines?
2025-01-28 View on X
CNBC

Nvidia calls DeepSeek's work “an excellent AI advancement”, reiterating “inference requires significant numbers of Nvidia GPUs and high-performance networking”

Nvidia called DeepSeek's R1 model “an excellent AI advancement,” despite the Chinese startup's emergence causing …

2024-12-02
1/ I wrote an essay on what I think it'll take for Europe to build its first trillion-dollar startup: [image]
2024-12-02 View on X
Financial Times

VC firm Plural's co-founder says Europe, which has yet to create a $1T company, must start supporting experienced founders and start giving “audacious capital”

Can Europe build its first trillion-dollar start-up?

2024-11-16
“Life can only be understood backwards; but it must be lived forwards”
2024-11-16 View on X
Transformer

Emails from Musk's lawsuit against OpenAI show Brockman and Sutskever had concerns about Altman as early as 2017, Sam Altman considered an ICO, and more

New emails released as part of Elon Musk's lawsuit against OpenAI reveal that the company's fondness of drama is hardly new.

2024-05-21
1/ Really remarkable achievement announced at AI Seoul Summit today: leading companies spanning North America, Asia, Europe and Middle East agree safety commitments on development of AI
2024-05-21 View on X
Reuters

Google, Meta, Microsoft, OpenAI, Amazon, IBM, and 10 other companies commit to safe AI development at the AI Seoul Summit 2024, hosted by South Korea and the UK

Sixteen companies involved in AI including Alphabet's Google (GOOGL.O), Meta (META.O), Microsoft (MSFT.O) and OpenAI …

2/ If you scan the list of signatories you will see the list spans geographies, as well as approaches to developing AI - including champions of open and closed approaches to safe development of AI [image]
2024-05-21 View on X
Reuters

Google, Meta, Microsoft, OpenAI, Amazon, IBM, and 10 other companies commit to safe AI development at the AI Seoul Summit 2024, hosted by South Korea and the UK

Sixteen companies involved in AI including Alphabet's Google (GOOGL.O), Meta (META.O), Microsoft (MSFT.O) and OpenAI …

2024-05-12
1/ Today the UK's AI Safety Institute is open sourcing our safety evaluations platform. We call it “Inspect”: https://www.gov.uk/...
2024-05-12 View on X
TechCrunch

The UK government's new AI safety body releases Inspect, an evaluation tool for AI model capabilities, including models' core knowledge and ability to reason

The U.K. Safety Institute, the U.K.'s recently established AI safety body, has released a toolset designed to “strengthen AI safety” …

2024-04-21
Early research into AI agents & their ability to autonomously exploit one-day vulnerabilities: https://arxiv.org/.... Feels important to prepare for a world where cyber attacks get easier by investing now in enhanced cybersecurity.
2024-04-21 View on X
The Register

Researchers: when given 15 CVE descriptions, GPT-4 autonomously exploited 87% of the vulnerabilities, compared to 0% for every other model tested

While some other LLMs appear to flat-out suck  —  AI agents, which combine large language models with automation software …

2024-04-02
Very proud of the landmark agreement the US and UK have signed today around joint testing of frontier AI systems. Testament to an incredible team of civil servants at the AI Safety Institute: https://www.ft.com/... [image]
2024-04-02 View on X
Financial Times

The US and the UK sign an agreement on how to test and assess risks from emerging AI models, marking the first bilateral arrangement on AI safety in the world

Allies reach world's first bilateral deal as global governments seek to assess and regulate risks from emerging technology

2023-11-03
1/ I've just left the final session of the first ever global Summit on AI Safety, chaired by @RishiSunak and @michelledonelan. A thread on how it started vs how it's going: [image]
2023-11-03 View on X
Financial Times

Meta, OpenAI, and others sign a non-binding document to let the US, the UK, and other nations test AI models for national security risks prior to their release

2023-06-19
5/ And at a pivotal moment, @RishiSunak has stepped up and is playing a global leadership role. He has pledged £100m on AI safety, the largest amount ever committed to this field by a nation state.
2023-06-19 View on X
Reuters

The UK names Ian Hogarth to lead its AI Foundation Model Taskforce; Hogarth co-founded concert discovery service Songkick, which Warner Music acquired in 2017

I'm honoured to be appointed as the Chair of the UK's AI Foundation Model Taskforce. A thread on why I'm doing this and how you might be able to help us. https://twitter.com/...
2023-06-19 View on X
Reuters

The UK names Ian Hogarth to lead its AI Foundation Model Taskforce; Hogarth co-founded concert discovery service Songkick, which Warner Music acquired in 2017

2023-05-31
Great to see the leaders of Anthropic, DeepMind, OpenAI and others publicly acknowledging that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. Bravo @DanHendrycks and @DavidSKrueger https://twitter.com/...
2023-05-31 View on X
New York Times

OpenAI and DeepMind executives, Geoffrey Hinton, and 350+ others sign a statement saying “mitigating the risk of extinction from AI should be a global priority”

2023-05-23
OpenAI describing a CERN-like project that major labs like Anthropic, DeepMind and OpenAI could combine into: “major governments around the world could set up a project that many current efforts become part of” https://openai.com/... [image]
2023-05-23 View on X
TechCrunch

OpenAI CEO Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever say the world will likely need a regulatory body for superintelligence

Now is a good time to start thinking about the governance …