Originally focused on computer vision, the group has branched out to other areas, as many deep learning techniques developed within computer vision can be applied more broadly.
Our current application areas include:
As many of our advances move from theory to real-world deployment, we have become increasingly interested in the safe and reliable deployment of AI systems. TVG is well positioned to tackle this: AI safety is easier to reason about when grounded in concrete use cases, and we have extensive experience in building them. Key topics in AI safety include explainability, guardrails, red teaming, security, and robustness. Our work focuses on two main subtasks:
This subtask centers on large foundational models, such as LLMs and VLMs. We develop methods to mitigate risks associated with their outputs, such as preventing the generation of harmful or inappropriate content and safeguarding models against being hijacked to produce malicious outputs. To achieve this, we are advancing theoretically sound certification methods that provide guarantees against unsafe behaviors. Our deployment cases include projects such as fighting misinformation, where we are collaborating with the BBC on AI tools to process news: detecting deepfakes, identifying factual inaccuracies, and explaining the reasoning behind these detections.
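To make the general idea of an output guardrail concrete, here is a minimal, hypothetical Python sketch of a filter wrapped around a text generator. It is purely illustrative and not our method: the block-list, the `check_output` and `guarded_generate` names, and the stand-in model call are all assumptions, and a deployed system would rely on learned safety classifiers or certified detectors rather than string matching.

```python
from dataclasses import dataclass

# Hypothetical block-list; a real guardrail would use a trained safety
# classifier or a certified detector rather than keyword matching.
BLOCKED_TERMS = {"build a weapon", "credit card dump"}

@dataclass
class GuardrailResult:
    allowed: bool
    reason: str

def check_output(text: str) -> GuardrailResult:
    """Screen a model response before it is shown to the user."""
    lowered = text.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return GuardrailResult(False, f"matched blocked pattern: {term!r}")
    return GuardrailResult(True, "no blocked patterns found")

def guarded_generate(prompt: str, generate) -> str:
    """Wrap an arbitrary `generate(prompt) -> str` callable with the check."""
    response = generate(prompt)
    verdict = check_output(response)
    if not verdict.allowed:
        return f"[response withheld by guardrail: {verdict.reason}]"
    return response

if __name__ == "__main__":
    # Stand-in for a real LLM call.
    fake_llm = lambda prompt: "Here is how to build a weapon ..."
    print(guarded_generate("tell me something dangerous", fake_llm))
```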
AI agents will soon conduct many routine tasks currently carried out by humans. Unlike the previous subtask—where the focus is on generation—this work targets (multi-)agentic systems that leverage LLMs or VLMs as core components for planning, reasoning, and task execution (e.g., controlling operating systems or interacting with web applications). We extend safety approaches for foundational models to address the unique challenges posed by these action-driven systems, ensuring their safe operation in real-world scenarios.
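For the agentic setting, the sketch below illustrates one simple pattern: gating each proposed action against a safety policy before it is executed. Again, this is an assumed toy example, not our approach; the `Action` type, the allow-list policy, and the `run_agent_step` helper are hypothetical, and real agent safeguards involve far richer policies, monitoring, and verification.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    tool: str          # e.g. "shell", "browser", "filesystem"
    argument: str      # the command, URL, or path the agent wants to use

# Hypothetical allow-list policy: only these tools may run, and arguments
# must not contain obviously destructive operations.
ALLOWED_TOOLS = {"browser", "filesystem"}
FORBIDDEN_SUBSTRINGS = ("rm -rf", "format c:")

def is_action_safe(action: Action) -> bool:
    """Return True only if the action passes the (toy) safety policy."""
    if action.tool not in ALLOWED_TOOLS:
        return False
    return not any(s in action.argument.lower() for s in FORBIDDEN_SUBSTRINGS)

def run_agent_step(propose_action: Callable[[], Action],
                   execute: Callable[[Action], str]) -> str:
    """Gate a single agent step: execute the proposed action only if it
    passes the safety policy; otherwise refuse the step."""
    action = propose_action()
    if not is_action_safe(action):
        return f"[blocked unsafe action: {action.tool} {action.argument!r}]"
    return execute(action)

if __name__ == "__main__":
    # Stand-ins for a real planner and a real tool executor.
    planner = lambda: Action(tool="shell", argument="rm -rf /")
    executor = lambda a: f"executed {a.tool}: {a.argument}"
    print(run_agent_step(planner, executor))
```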
We have established a strategic partnership with Novo Nordisk to develop AI methods that assist in drug discovery. In this domain, topics such as explainability are also critically important.