TORR VISION GROUP

Torr Vision Group (formerly known as Brookes Vision Group at Oxford Brookes University), now in the Department of Engineering Science at the University of Oxford, was formed in 2005 and moved to the University of Oxford in 2013. It is led by Professor Philip Torr, FREng, FRS, who was made a Turing AI World-Leading Researcher Fellow in 2021. The group has won major awards at most of the top machine learning and computer vision conferences and has contributed to technology transfer into real-world applications, from autonomous cars to cybersecurity. We strongly believe that research should be inspired by applications that can make a positive difference in people's lives.

Recent News

  • 7 Aug 2024

    Mind Foundry Prize for Best Project

    A TVG undergraduate project won the Mind Foundry Prize for Best Project in Information Engineering. Check out the paper, CLIP as RNN.
  • 31 Oct 2023

    Our response to the House of Lords Large Language Models Call for Evidence

    Check it out here, along with the consensus paper.
  • 6 Oct 2023

    ICCV 2029

    Prof. Torr will serve as general chair of ICCV 2029.
  • 5 July 2023

    Distinguished Research Fellow in the Institute for Ethics in AI

    Congratulations to Prof. Torr on becoming a Distinguished Research Fellow in the Institute for Ethics in AI.

About TVG

Originally focused on computer vision, the group has branched out into other areas, as many deep learning techniques developed within computer vision can be applied more broadly.

Our current application areas include:

  1. AI Safety

    As many of our advances move from theory to real-world deployment, we have become interested in the safe and reliable deployment of AI systems. TVG is well positioned to tackle this: AI safety is easier to reason about when grounded in concrete use cases, and we have extensive experience developing them. Key topics in AI safety include explainability, guardrails, red teaming, security, and robustness. Our work focuses on two main subtasks:

    • Safety of Foundation Generative Models

      This subtask centers on large foundation models, such as LLMs and VLMs. We develop methods to mitigate risks associated with their outputs, including preventing harmful or inappropriate content generation and safeguarding the models against being hijacked to produce malicious outputs. To achieve this, we are advancing theoretically sound certification methods that provide guarantees against unsafe behaviors. Our deployment cases include projects such as fighting misinformation, where we are collaborating with the BBC on AI tools to process news: detecting deepfakes, identifying factual inaccuracies, and explaining the reasoning behind these detections.

    • Safety of AI Agents

      AI agents will soon conduct many routine tasks currently carried out by humans. Unlike the previous subtask—where the focus is on generation—this work targets (multi-)agentic systems that leverage LLMs or VLMs as core components for planning, reasoning, and task execution (e.g., controlling operating systems or interacting with web applications). We extend safety approaches for foundational models to address the unique challenges posed by these action-driven systems, ensuring their safe operation in real-world scenarios.

  2. Drug Discovery

    We have established a strategic partnership with Novo Nordisk to develop AI methods that assist in drug discovery. In this domain, topics such as explainability are also critically important.