TVG
TVG, formerly the Brookes Vision Group at Oxford Brookes University, was formed in 2005 and moved to the Department of Engineering Science at the University of Oxford in 2013. It is led by Professor Philip Torr, FREng, FRS, who was made a Turing AI World-Leading Researcher Fellow in 2021. The group has won major awards at most of the top machine learning and computer vision conferences and has contributed to technology transfer into real-world applications, from autonomous cars to cybersecurity. We strongly believe that research should be inspired by applications that can make a positive difference in people's lives.
🌟 Torr's Law 🌟
"Any idea you have will appear on arXiv within two days."
Formal Statement: In a rapidly expanding research domain, the probability that a novel idea independently appears on arXiv approaches 1 as time from conception increases, with a characteristic lag of approximately 48 hours (see the sketch below).
Weak Torr's Law: "If you don't write it down today, someone will publish it tomorrow."
Research Culture Commentary: "In frontier AI research, intellectual latency is about 48 hours — Torr's Law."
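One illustrative way to write the formal statement down (our rendering; the exponential form and the rate constant are assumptions chosen to match the stated 48-hour lag, not the group's own formula):

$$P(\text{idea independently appears on arXiv by time } t) = 1 - e^{-t/\tau}, \qquad \tau \approx 48 \text{ hours},$$

so the probability of being scooped approaches 1 as $t$ grows, and roughly 63% of ideas (i.e. $1 - e^{-1}$) have surfaced after one characteristic lag.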
Recent News
- Prof. Torr has been awarded a 2025 Schmidt Sciences AI2050 Research Fellowship. Read more here. (6 Nov 2025)
- TVG's undergraduate project won the Mind Foundry Prize for Best Project in Information Engineering. Check out the paper CLIP as RNN. (7 Aug 2024)
- Our response to the House of Lords Large Language Models Call for Evidence: check it out here, along with the consensus paper. (31 Oct 2023)
- Prof. Torr will serve as the general chair for ICCV 2029. (6 Oct 2023)
- Congratulations to Prof. Torr on becoming a Distinguished Research Fellow in the Institute for Ethics in AI. (5 July 2023)
About TVG
Originally focused on computer vision, the group has branched out to other areas, as many deep learning techniques developed within computer vision can be applied more broadly.
Our current application areas include:
- AI Safety
As many of our advances move from theory to real-world deployment, we have become interested in the safe and reliable deployment of AI systems. TVG is well positioned to tackle this: AI safety is easier to reason about when grounded in concrete use cases, and we have long experience in developing them. Key topics in AI safety include explainability, guardrails, red teaming, security, and robustness. Our work focuses on two main subtasks:
- Safety of Foundation Generative Models
This subtask centres on large foundation models, such as LLMs and VLMs. We develop methods to mitigate risks associated with their outputs, including preventing the generation of harmful or inappropriate content and safeguarding against hijacking to produce malicious outputs. To achieve this, we are advancing theoretically sound certification methods that provide guarantees against unsafe behaviours. Our deployment cases include projects such as fighting misinformation, where we are collaborating with the BBC on AI tools to process news: detecting deepfakes, identifying factual inaccuracies, and explaining the reasoning behind these detections.
- Safety of AI Agents
AI agents will soon conduct many routine tasks currently carried out by humans. Unlike the previous subtask, where the focus is on generation, this work targets (multi-)agentic systems that leverage LLMs or VLMs as core components for planning, reasoning, and task execution (e.g., controlling operating systems or interacting with web applications). We extend safety approaches for foundation models to address the unique challenges posed by these action-driven systems, ensuring their safe operation in real-world scenarios.
- AI Scientist
We aim to close the loop of scientific discovery itself by building autonomous "AI Scientists" capable of accelerating progress across the natural, life, and social sciences. Rather than merely applying AI as a tool within existing scientific workflows, we are developing general-purpose AI systems that can formulate novel hypotheses, design and run experiments (in silico, wet-lab, or social-science settings), analyse results, and iteratively refine theories, essentially performing the full scientific method with minimal human supervision (a toy sketch of this loop follows the list of directions below).
Current directions include:
- Drug discovery & therapeutics – AI-driven drug discovery, binding-affinity prediction, de-novo molecule generation, and automated retrosynthesis planning, with certified safety constraints to avoid toxic or off-target compounds.
- Materials discovery – Autonomous discovery of new catalysts, batteries, superconductors, and metamaterials by combining quantum-accurate simulations with active learning and experimental feedback loops.
- Generic AI Scientist – Framework-agnostic systems that can be dropped into any scientific domain, learn its literature and experimental protocols, propose high-value experiments, and update beliefs in a Bayesian manner. Recent prototypes have already discovered novel algorithms and mathematical conjectures.
- AI Social Scientist – Extending the same paradigm to economics, sociology, and political science: generating testable hypotheses about human and institutional behaviour, designing large-scale online experiments or analysing observational data at unprecedented scale, and surfacing policy-relevant insights while preserving privacy and ethical standards.
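As a purely illustrative sketch of the hypothesise–experiment–update loop described above (the toy Bernoulli experiment, the conjugate Beta prior, and the trial budget are our invented assumptions for the example, not TVG's systems):

```python
import random

# Toy closed-loop "scientist": hold a Beta-distributed belief over an
# unknown success rate, run one Bernoulli experiment per iteration, and
# refine the belief after each observation via a conjugate Bayesian update.

TRUE_RATE = 0.7          # hidden ground truth the loop must discover
alpha, beta = 1.0, 1.0   # Beta(1, 1) prior: initially uninformed

for trial in range(200):
    outcome = random.random() < TRUE_RATE   # run the experiment
    alpha += outcome                        # update: count a success
    beta += not outcome                     # update: count a failure

estimate = alpha / (alpha + beta)           # posterior mean = current theory
print(f"posterior mean after 200 trials: {estimate:.3f} (truth: {TRUE_RATE})")
```

In a real AI Scientist the "experiment" would be a simulation, wet-lab run, or online study, and the belief update would range over competing hypotheses rather than a single rate, but the loop structure is the same.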