Join Us for “Automatic Speaker Recognition: Getting Started for Research” Workshop


Date: Thursday 16th & Friday 17th January 2025
Location: University of York (In-person)

Are you a research student interested in forensic speech science or automatic speaker recognition? Then this hands-on workshop, hosted in collaboration with the York Forensic Speech Science Team, is the perfect opportunity to learn how automatic speaker recognition systems work, explore their applications in forensic casework, and get practical experience with our VOCALISE software.

Over the course of two days, you will gain insights into pre-processing, feature extraction, speaker modelling, scoring, and calibration in a mix of theoretical and practical sessions. You will also have the opportunity to develop research questions and run tests relevant to your own research.
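
To give a flavour of what these stages involve, here is a minimal, illustrative sketch of a speaker-comparison pipeline in Python. This is not how VOCALISE works; the function names, parameters, and random "recordings" are all invented for demonstration.

```python
# Illustrative sketch only (not VOCALISE's method): a toy speaker-comparison
# pipeline mirroring the workshop stages. All names, parameters, and the
# random "recordings" below are invented for demonstration.
import numpy as np

def preprocess(signal):
    """Pre-processing: remove DC offset and peak-normalise the waveform."""
    signal = signal - np.mean(signal)
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal

def extract_features(signal, frame_len=400, hop=160):
    """Feature extraction: log-magnitude spectra of short windowed frames."""
    frames = np.array([signal[i:i + frame_len] * np.hamming(frame_len)
                       for i in range(0, len(signal) - frame_len, hop)])
    return np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-8)

def model_speaker(features):
    """Speaker modelling: collapse frame features into one fixed-length vector."""
    return features.mean(axis=0)

def score(model_a, model_b):
    """Scoring: cosine similarity between two speaker models (calibration,
    which maps raw scores to interpretable values, is omitted here)."""
    return float(np.dot(model_a, model_b) /
                 (np.linalg.norm(model_a) * np.linalg.norm(model_b)))

# Two hypothetical recordings (random noise stands in for real speech).
rng = np.random.default_rng(0)
rec_a, rec_b = rng.standard_normal(16000), rng.standard_normal(16000)
emb_a = model_speaker(extract_features(preprocess(rec_a)))
emb_b = model_speaker(extract_features(preprocess(rec_b)))
print(f"similarity score: {score(emb_a, emb_b):.3f}")
```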

Who is this for?

Current Masters or PhD students interested in automatic speaker recognition systems.

Sign up now to secure your spot! 
https://docs.google.com/forms/d/e/1FAIpQLSe-NTCaRWA88cjd3UpxQgW6daqMC38VIpl_A0bi_H4sImA-Sw/viewform

For more information: https://www.york.ac.uk/business/cpd/sector-specific-courses/automatic-speaker-recognition/

OWR’s audio deepfake technology aces the UK Home Office-led ‘Deepfake Detection Challenge’

Oxford Wave Research’s audio deepfake detection technology excelled in the Home Office-led ‘Deepfake Detection Challenge’. We topped the competition leaderboard for audio in the challenge strand focused on determining which elements of a digital asset are deepfaked. The challenge, open to UK companies, universities and researchers, required processing millions of deepfake and real images, audio and video files to build and test deepfake detection solutions. The Home Office, The Alan Turing Institute and the Department for Science, Innovation and Technology created this challenge to seek out ‘the best of the best’ in innovative solutions to tackle the current and emerging threats presented by the increasing use of deepfakes.
 
We competed in the audio category and were thrilled with our near perfect performance. Along with five other organisations, we were invited to showcase and discuss our solution at the Deepfake Detection Challenge Showcase event at the iconic Ministry of Sound venue in London in July. Our R&D team has been working on voice cloning and audio deepfake detection for several years now, as we believe recent developments in voice cloning will fundamentally impact fraud, terrorism, sexual exploitation, and political disinformation. We were delighted, therefore, that our solution was able to perform so well with the challenge data.
 
We had the privilege of meeting Baroness Jones of Whitchurch (Parliamentary Under-Secretary of State at the Department for Science, Innovation and Technology), with whom we had a very engaging discussion and demonstration of our audio deepfake detection technology. We were very pleased to listen to talks from Rupert Shute (Home Office, Deputy Chief Scientific Adviser), Talitha Rowland (Director, Security & Online Harms, Department for Science, Innovation and Technology), and Professor Jennifer Rubin (Home Office, Chief Scientific Adviser), which highlighted the problems and the need for solutions to combat these threats.
 
We would like to thank Andrew Tyeloo and the Vivace team, the Department for Science, Innovation and Technology, The Alan Turing Institute, and all those who played a part in putting together this fantastic challenge.
 
For a detailed account of the showcase and the ground-breaking solutions presented, read the full blog here and here (LinkedIn).

 

Dr Anil Alexander speaking at the Royal Society’s “Science in the Interests of Justice” Conference

Our CEO, Dr Anil Alexander, is speaking at the Royal Society, one of the world’s oldest and most prestigious scientific academies, on the topic of voice recognition in forensic science. He will be one of the invited speakers at the “Science in the Interests of Justice” conference, taking place on October 3-4, 2023. The conference is co-organised by the Royal Society and the National Academy of Sciences as part of their Science and Law programmes, and brings together top scientists and legal experts from both sides of the Atlantic to explore the crucial role of science in the judicial system.
 
If you would like to take part in this event virtually, you can still secure a spot and attend online using the link below:
https://royalsociety.org/science-events-and-lectures/2023/10/science-in-the-interests-of-justice/

Below is the abstract of the talk: 

Voice recognition is the process by which distinctive characteristics of an individual’s speech are used to identify or verify who they are. As lay listeners, humans recognise familiar voices intuitively in an everyday sense and may also find themselves being ‘earwitnesses’ to a crime, albeit rarely. When carried out by trained practitioners using specialised methodologies and tools, voice recognition, comparing often unknown speech samples, can play an important role in investigative and forensic contexts.

This talk will consider the landscape of forensic voice recognition, encompassing auditory analysis by trained listeners, acoustic-phonetic measurements of perceptually salient features, and automatic speaker recognition using signal processing and modelling algorithms that are statistical or based on deep neural networks. The Bayesian likelihood ratio framework will be critically examined as a means of evaluating the strength of evidence using any voice analysis methodology. The importance of validation of the prevalent and emerging approaches, to understand their limitations and to provide reliable and transparent reports to the courts, will be discussed.

Additionally, the varying acceptance of voice recognition evidence in different parts of the world will be explored. Anticipating the new challenges posed by machine-created spoofed speech, this talk will also reflect on the risks, mitigations and, more optimistically, emerging opportunities afforded by using both human- and machine-based analysis.
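
In outline, the likelihood ratio framework mentioned in the abstract weighs the probability of the observed evidence E (for example, the measured similarity between two speech samples) under two competing hypotheses:

```latex
\mathrm{LR} = \frac{p(E \mid H_{\text{same speaker}})}{p(E \mid H_{\text{different speakers}})}
```

A ratio greater than 1 lends support to the same-speaker hypothesis, and a ratio less than 1 to the different-speakers hypothesis; the further the value is from 1, the stronger the support.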


Oxford Wave Research Team Ready for IAFPA 2023 Conference

Oxford Wave Research staff are very excited to be attending the upcoming International Association of Forensic Phonetics and Acoustics (IAFPA) Conference, organised this year by the Centre for Forensic Phonetics and Acoustics (CFPA) at the University of Zurich and the Zurich Forensic Science Institute (FOR). We will be presenting a number of papers on our most recent work in the field of voice biometrics and audio processing. Below is the list of presentations co-authored by OWR researchers in collaboration with distinguished academics and forensic scientists:
  • A convincing voice clone? Automatic voice similarity assessment for synthetic speech samples
    Linda Gerlach, Finnian Kelly, Kirsty McDougall, and Anil Alexander
  • PASS (Phonetic Assessment of Spoofed Speech): Towards a human-expert-based framework for spoofed speech detection
    Daniel Denian Lee, Kirsty McDougall, Finnian Kelly, and Anil Alexander
  • CON(gruence)-plots for assessing agreement between voice comparison systems
    Michael Jessen, Anil Alexander, Thomas Coy, Oscar Forth, and Finnian Kelly
  • Automatic Speaker Recognition: does dialect switching matter?
    Marlon Siewert, Linda Gerlach, Anil Alexander, Gea de Jong-Lendle, Alfred Lameli, and Roland Kehrein
  • Impact of the mismatches in long-term acoustic features upon different-speaker ASR scores
    Chenzi Xu, Paul Foulkes, Philip Harrison, Vincent Hughes, Poppy Welch, Jessica Wormwald, Finnian Kelly, and David van der Vloed
  • Effects of vocal variation on the output of an automatic speaker recognition system
    Vincent Hughes, Jessica Wormwald, Paul Foulkes, Philip Harrison, Poppy Welch, Chenzi Xu, Finnian Kelly, and David van der Vloed

Kicking off VOCALISE training in style at the University of York

The PASR (Person-specific Automatic Speaker Recognition) research team at the University of York rocking the new Oxford Wave Research t-shirts during their training on the VOCALISE forensic speaker recognition system.
 

“We’re really pleased to have started the project and to be working with Oxford Wave Research. The VOCALISE software allows us to answer big questions around the use of automatic speaker recognition systems in new and exciting ways. We can’t wait to see what comes out of the work!”

Dr Vincent Hughes, Principal Investigator of the project

 “We were delighted to be able to spend time with the York University team recently, getting them started with VOCALISE and discussing their exciting plans for the project. They have a really experienced team and have lots of interesting ideas in the works – we are looking forward to seeing where it all goes!”

Dr Finnian Kelly, OWR Principal Research Scientist and Lead Scientific advisor on the project

 

Dr Amelia Gully joins Oxford Wave Research team!

Oxford Wave Research are pleased to announce the appointment of Dr Amelia Gully as a Senior Research Scientist. Amelia joins us from the University of York forensic speech science group, where she remains a research associate. 

“I am delighted to be joining the team at Oxford Wave Research, where I can put my acoustics and signal processing experience to work addressing real problems for customers, and contribute to exciting technological developments in the field of audio forensics.”

Dr Amelia Gully

 
Amelia’s research to date has focused on the anatomical bases of speaker identity, and particularly how individual differences in vocal tract shape affect the speech signal. For this work she was awarded a British Academy Postdoctoral Fellowship. She holds a PhD in Electronic Engineering and an MSc in Digital Signal Processing, both from the University of York, as well as a BSc (Hons) in Audio Technology from the University of Salford. 

“I am excited to welcome Amelia to OWR – with her expertise in acoustics and signal processing, and enthusiasm for all things audio, she will be a valuable addition to the research team!”

Dr Finnian Kelly, Principal Research Scientist

Amelia joins us remotely from York, where she lives with her partner and two rescue dogs. When not engaged in audio and speech research, she can be found playing video games or pottering around on her allotment.

 

Oxford Wave Research collaboration with the University of York and the Netherlands Forensic Institute in £1m ESRC Project


Oxford Wave Research are delighted to be collaborating with the University of York and the Netherlands Forensic Institute on a recently awarded ESRC-funded project (£1,012,570), ‘Person-specific automatic speaker recognition: understanding the behaviour of individuals for applications of ASR’ (ES/W001241/1). This is a three-year project, starting in summer 2022 and running to 2025, led by Dr Vincent Hughes (PI), Professor Paul Foulkes (CI) and Dr Philip Harrison in the Department of Language and Linguistic Science at the University of York. OWR will be providing our expertise and consultancy in automatic speaker recognition and our flagship VOCALISE forensic speaker recognition system.

Automatic speaker recognition (ASR) software processes and analyses speech to inform decisions about whether two voices belong to the same or different individuals. Such technology is becoming an increasingly important part of our lives: it is used as a security measure when gaining access to personal accounts (e.g. banks), and as a means of tailoring content to a specific person on smart devices. Around the world, such systems are commonly used for investigative and forensic purposes, to analyse recordings of criminal voices where identity is unknown.
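
As a toy illustration of how such a same-or-different decision can be framed (this is not the project's method, and the distributions and scores below are invented), a raw comparison score can be turned into a likelihood ratio by modelling how scores behave for known same-speaker and different-speaker pairs:

```python
# Toy illustration: convert a comparison score into a likelihood ratio.
# The score distributions here are invented for demonstration purposes.
from scipy.stats import norm

# Hypothetical score distributions estimated from labelled comparison pairs.
same_speaker = norm(loc=0.8, scale=0.10)   # scores when the voices match
diff_speaker = norm(loc=0.3, scale=0.15)   # scores when the voices differ

def likelihood_ratio(score):
    """How much more likely is this score if the speakers are the same?"""
    return same_speaker.pdf(score) / diff_speaker.pdf(score)

for s in (0.35, 0.60, 0.85):
    lr = likelihood_ratio(s)
    verdict = "supports same speaker" if lr > 1 else "supports different speakers"
    print(f"score={s:.2f}  LR={lr:10.2f}  ({verdict})")
```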

The aim of this project is to systematically analyse the factors that make individuals easy or difficult to recognise within automatic speaker recognition systems. By understanding these factors, we can better predict which speakers are likely to be problematic, tailor systems to those individuals, and ultimately improve overall accuracy and performance. The project will use innovative methods and large-scale data, uniting expertise from linguistics, speech technology, and forensic speech analysis across the academic, professional, and commercial sectors. This has been made possible by the University of York’s strong collaboration with its project partners, Oxford Wave Research and the Netherlands Forensic Institute (NFI).

The University of York and OWR teams are looking forward to a very fruitful collaboration that will undoubtedly further the state of the art in forensic speaker recognition.

Dr Vincent Hughes, Principal Investigator, University of York says “We are delighted to be working so closely on this project with Oxford Wave Research, who are world leaders in the field of automatic speaker recognition and speech technology. We hope that our research will deliver major benefits to the fields of speaker recognition and forensic speech science”.

Dr Anil Alexander, CEO, Oxford Wave Research says “Our team, led by Dr Finnian Kelly, is thrilled to contribute to this in-depth study of the individual-specific factors affecting speaker recognition, alongside the accomplished research team led by Dr Hughes at the University of York, who are at the forefront of this space, and real-world end-users like the Netherlands Forensic Institute, who have been driving research and innovation in this area for many years”.

 

Gamers use SpectrumView to uncover Fortnite and Minecraft’s secrets


Content creators of all kinds, such as the musician Aphex Twin, have long hidden patterns and text in their audio that can be revealed in a spectrogram. More recently, video game developers have hidden Easter eggs in the spectrograms of their game soundtracks for their more inquisitive players to find. For example, among Minecraft’s sound effects, the face of a Creeper, one of the game’s enemies, can be seen in the spectrogram of the audio heard in a cave, as SpectrumView user “Musix200” discovered. See if you can spot it too!
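
If you would like to try this yourself, the short Python sketch below shows one common way to render a spectrogram; the filename is a placeholder for whatever WAV clip you want to inspect.

```python
# Minimal sketch: plot a spectrogram of a WAV file to hunt for hidden images.
# "cave_sound.wav" is a placeholder filename, not an actual game asset.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("cave_sound.wav")
if samples.ndim > 1:                        # mix stereo down to mono
    samples = samples.mean(axis=1)

freqs, times, power = spectrogram(samples, fs=rate, nperseg=1024)
plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12), shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram (hidden images appear as bright patterns)")
plt.show()
```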


Taking this idea a step further, alternate reality games (ARGs) are a modern spin on the traditional scavenger hunt in which participants scour websites, social media, and videos looking for clues. These games have taken social media sites like YouTube and Reddit by storm over the last few years. Organisers, often video game developers, will bury information in all sorts of places, like images, website code, and audio. Whole communities have formed to uncover secret stories and previews of their favourite video games, or just to have some cooperative fun solving a digital mystery.

Epic Games created ARG content in the run-up to the Season 5 release of their famous multiplayer online game, Fortnite Battle Royale. They staged a rocket launch within the video game itself, during which some of the audio played was slightly odd. Gamers quickly realised that there was probably more to the audio clip than could simply be heard, and looking for patterns led them to visualise the clip’s frequencies in a spectrogram. One such example of using SpectrumView to analyse the audio clip, by player “Rockin Thomas86”, is shown in the video below.

On the spectrogram, you can see pixelated skulls at the start and end of the audio, and, in the middle, a list of letters and numbers. According to the Game Detectives Wiki, the skull shapes were shown on television screens within the game before the rocket launch, while the letters and numbers could be decoded as ASCII values to produce in-game coordinates. Some time after the rocket launch, dimensional rifts opened up at these coordinates, causing locations to appear and disappear on the game map. Players were primed to check the locations, having teased out the message hidden in the rocket launch audio.
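
The decoding step itself is straightforward: each number is read as an ASCII character code. The values below are invented for illustration (the real Fortnite sequence differed), but the principle is the same:

```python
# Hypothetical example of the ASCII decoding step; these values are invented.
codes = [52, 53, 44, 32, 54, 48]
message = "".join(chr(c) for c in codes)
print(message)  # -> "45, 60", read by players as in-game coordinates
```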

Spectrum analysers like our iOS app SpectrumView can open up a whole new dimension of information in audio, and we are excited to see what more our users can find hidden away in the audio of all sorts of ARG content.

Our SpectrumView app features in BBC Four documentary – Ocean Autopsy: The Secret Story of Our Seas

We were delighted to see our SpectrumView audio analysis app being used by Dr Helen Czerski, a renowned oceanographer and physicist from University College London, to explore a whole new acoustic world under the waves as part of the BBC Four documentary – ‘Ocean Autopsy: The Secret Story of Our Seas’. Dr Czerski drops a hydrophone into the depths of the ocean and listens to, and visualises, the underwater sounds using SpectrumView. This excellent programme explores how sounds deep under the water (including man-made sounds) can affect marine life such as porpoises.

So the ocean surface is effectively a barrier for almost all sound, so we have no idea what’s going on down there, and it’s a different acoustic world. But you can listen in with the help of a little bit of technology.

Dr Helen Czerski, Oceanographer
