Inclusive security using voice biometrics and Microsoft Identity

Inclusive security for those who are visually impaired, have difficulty reading or understanding a language, or don’t have access to a dedicated personal device


This is the fascinating use case for which our partner WhoIAM has been using our voice biometrics technology. It has just been featured in the latest edition of the Microsoft Azure Identity partner integration video, which uses our speaker biometrics-based authentication to make identity security design more inclusive. We at Oxford Wave Research support this laudable goal all the way!

As Ajith Alexander, Head of Product Management at WhoIAM, writes in the Microsoft Azure AD identity blog:


“Using voice biometrics for verification is also a powerful tool for implementing inclusive security. Human voices are readily available, can be recorded in a contactless way, and do not require specialized hardware. Our voice carries an imprint of our identity that comes through regardless of what we’re saying, what language we’re speaking, or where we’re speaking from. This makes voice biometrics an ideal choice for catering to users who are visually impaired, have difficulty reading or understanding a language, don’t have access to a dedicated personal device (residents at assisted-living communities, shift-workers), or live in less developed areas that rely on fixed phone lines. …Creatively solving for flexible, inclusive user verification ensures we can log in previously marginalized customers securely without identity verification being a frustrating experience.”

Ajith Alexander, Head of Product Management, WhoIAM

Security & Policing 2021 - The Virtual One!


Virtual Event, Virtual Stand, even Virtual Sweets, but the same real people!

The OWR team, Nikki, Ekrem, Oscar and Anil, warmly welcome you to join us at the first ever virtual Security & Policing event, taking place 9-11 March 2021.

This year’s event coincides with our 10-year anniversary, and we are proud to be sharing with you our state-of-the-art desktop and ‘on-device’ speaker recognition and audio processing software, in use at the forefront of the voice biometric field.


Oxford Wave Research PhD Studentship at Cambridge

Collaborative PhD studentship between Oxford Wave Research, the University of Cambridge, and the Cambridge Trust



We are delighted that Oxford Wave Research Ltd and the University of Cambridge, in collaboration with the Cambridge Trust, have established a new PhD studentship based at Selwyn College, Cambridge. The award enables a student to undertake a PhD in Theoretical and Applied Linguistics, commencing in October 2020. The studentship was open to UK and EU candidates of outstanding academic potential, and covers tuition fees and maintenance for three years.

The studentship is in the area of forensic phonetics, the application of phonetic analysis to criminal cases, often where the identity of a speaker is in question, either due to an incriminating recording (e.g. hoax call, ransom demand, telephone threat, etc.) or due to a witness having heard a speech event at a crime scene. Forensic phonetics uses both traditional phonetic and automatic (machine-based) techniques.

The PhD project aims to consider the relationships between traditional phonetic analyses and automatic speaker recognition (computer-based identification and recognition of the identity behind a voice). The studentship will include collaborative opportunities for the student to gain industry experience and to conduct research in conjunction with Oxford Wave Research, an audio processing and voice-biometrics company which specialises in developing forensic voice comparison solutions for law enforcement agencies. The student’s research will consider both human and machine-based, algorithmic selection of different groups of speakers for various forensic analyses, and the implications of these selections for the evaluation of the strength of evidence. The selection criteria include voice similarity as perceived by human listeners, and demographic features such as gender, language, age, and regional accent. Further, the research will attempt to evaluate how human or automatic, machine-based selection of databases can result in algorithmic bias.

“We are delighted to be working with a leading audio-processing and voice biometrics company that has such a strong track record of developing solutions in the forensic speech and audio arena. Cambridge has a well-established tradition of research excellence and innovation in forensic phonetics and the opportunity to bring automatic speaker recognition techniques to complement our acoustic-phonetic and perceptual approaches represents an exciting new line of investigation for our Phonetics Lab.”

Dr Kirsty McDougall, University of Cambridge

“This studentship overseen by Dr McDougall at the Phonetics Laboratory in Cambridge represents an incredible opportunity for us to formally collaborate with one of the best-regarded forensic phonetics research groups in the country, with an enduring legacy of fundamental and important research work. We look forward to the exciting research collaboration planned with the laboratory in this studentship that has important implications for how forensic casework involving speech is done in the future and which will help the legal system by providing timely, just and balanced analysis.”

Dr Anil Alexander, CEO of Oxford Wave Research

Current award-holder

The recipient of this studentship in 2020 is Ms Linda Gerlach. Linda obtained her undergraduate degree in Language and Communication at Philipps University Marburg, Germany, and went on to complete her master’s degree in Speech Science with a focus on phonetics at the same university. For her master’s thesis, titled “A study on voice similarity ratings: humans versus machines”, she worked in collaboration with the University of Cambridge during an internship at Oxford Wave Research (2018-2019).

About University of Cambridge Phonetics Laboratory

The University of Cambridge Phonetics Laboratory is based in the university’s Theoretical and Applied Linguistics Section, Faculty of Modern and Medieval Languages, and accommodates a strong community of teaching and research staff, research students, a number of affiliated researchers in phonetics, and a lab manager. As well as hosting an extensive programme of research in forensic phonetics, the lab fosters research in phonetics and phonology across a diverse range of topics including speech production and perception, language acquisition, psycholinguistics, prosody, tone, sociophonetics, and language variation and change. Recent funded projects in forensic phonetics include DyViS, VoiceSim and IVIP.

About Oxford Wave Research

Oxford Wave Research (OWR) is a specialised audio R&D company with expertise in voice biometrics, speaker diarization, audio fingerprinting, and audio enhancement. The OWR team have contributed to major government projects, nationally and internationally. OWR has been particularly successful in bringing state-of-the-art academic research algorithms into usable commercial products for law enforcement, military, and other agencies. OWR’s solutions are used by law enforcement and forensic laboratories across the world, including in the UK, Germany, the Netherlands, France, Canada, and Switzerland. OWR are the creators of the well-established forensic voice comparison system ‘VOCALISE’, used in forensic audio labs worldwide, as well as ‘WHISPERS’, a powerful networked ‘one-to-many’ voice comparison system.

Oxford Wave Research publications at ODYSSEY 2020

Two of our publications at the ODYSSEY 2020 Speaker and Language Recognition Workshop

Two of our collaborative papers, one on voice spoofing detection and the other on the effects of device variability on forensic speaker comparison, are appearing at this week’s virtual ODYSSEY 2020 Speaker and Language Recognition Workshop. Video presentations for both papers are now available on the workshop website: http://www.odyssey2020.org/

The full papers, along with the rest of the conference proceedings, can be found at: https://www.isca-speech.org/archive/odyssey_2020/index.html


In our paper with Bence Halpern (PhD student, University of Amsterdam), “Residual networks for resisting noise: analysis of an embeddings-based spoofing countermeasure,” we propose a new embeddings-based method of spoofed speech detection using Constant Q-Transform (CQT) features and a Dilated ResNet Deep Neural Network (DNN) architecture. The novel CQT-GMM-DNN approach, which uses the DNN embeddings with a Gaussian Mixture Model (GMM) classifier, performs favourably compared to the baseline system in both clean and noisy conditions. We also present some ‘explainable audio’ results, which provide insight into the information the DNN exploits for decision-making. This study shows that reliable detection of spoofed speech is increasingly possible, even in the presence of noise.

See a blog post from Bence (including some explainable audio examples) here: https://karkirowle.github.io/publication/odyssey-2020
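For readers curious about the scoring step, the general pattern behind GMM-based countermeasures is to score an utterance embedding under models of genuine and of spoofed speech, and compare the log-likelihoods. The sketch below is purely illustrative, with single diagonal-covariance Gaussians standing in for full GMMs and random toy embeddings; it is not the paper’s CQT-ResNet system.

```python
import numpy as np

def diag_gauss_loglik(x, mean, var):
    """Log-likelihood of vector x under a diagonal-covariance Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def spoof_llr(embedding, bona_mean, bona_var, spoof_mean, spoof_var):
    """Log-likelihood ratio: positive values favour bona fide speech."""
    return (diag_gauss_loglik(embedding, bona_mean, bona_var)
            - diag_gauss_loglik(embedding, spoof_mean, spoof_var))

rng = np.random.default_rng(0)
dim = 8
bona_mean, spoof_mean = np.zeros(dim), np.full(dim, 2.0)
var = np.ones(dim)

# An embedding drawn near the bona fide model should score a positive LLR.
emb = bona_mean + 0.1 * rng.standard_normal(dim)
print(spoof_llr(emb, bona_mean, var, spoof_mean, var) > 0)  # True
```

In a real countermeasure, the two models would be full Gaussian Mixture Models trained on DNN embeddings of genuine and spoofed speech, with the decision threshold tuned on development data.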


In our paper with David van der Vloed (from the Netherlands Forensic Institute), “Exploring the effects of device variability on forensic speaker comparison using VOCALISE and NFI-FRIDA, a forensically realistic database,” we investigate the effect of recording device mismatch on forensic speaker comparison with VOCALISE. Using the forensically-realistic NFI-FRIDA database, consisting of speech simultaneously-recorded on multiple devices (e.g. close-mic, far-mic, and telephone intercept, as seen in the data collection image), we demonstrate that while optimal performance is achieved by matching the relevant population recording device to the case data recording device, it is not necessary to match the precise device; broadly matching the device type is sufficient. This study presents a research methodology for how a forensic practitioner can corroborate their subjective judgment of the ‘representativeness’ of the relevant population in forensic speaker comparison casework.
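Performance in studies of this kind is typically summarised with the Equal Error Rate (EER): the operating point at which the false-accept and false-reject rates coincide. As a rough, hypothetical sketch (not VOCALISE’s internal scoring), an EER can be approximated from lists of same-speaker and different-speaker comparison scores like this:

```python
import numpy as np

def eer(target_scores, nontarget_scores):
    """Approximate the Equal Error Rate from same-speaker (target) and
    different-speaker (non-target) comparison scores."""
    thresholds = np.sort(np.concatenate([target_scores, nontarget_scores]))
    best = 1.0
    for t in thresholds:
        far = np.mean(nontarget_scores >= t)  # impostor comparisons accepted
        frr = np.mean(target_scores < t)      # genuine comparisons rejected
        best = min(best, max(far, frr))       # closest point to FAR == FRR
    return best

# Toy scores: higher means "more likely the same speaker".
same = np.array([2.0, 2.5, 3.0, 1.0])
diff = np.array([0.0, 0.5, 1.5, -1.0])
print(eer(same, diff))  # → 0.25
```

Comparing the EER obtained with a device-matched relevant population against that of a mismatched one is one simple way of quantifying the effect the paper investigates.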

Do face coverings affect identifying voices?

Vlog: Do face coverings affect identifying voices?
A small experiment using VOCALISE and PHONATE

In these recent months of 2020, like many others around the world, we have found ourselves adjusting to the new normal of wearing masks in supermarkets and other public spaces. We have been (mildly) annoyed that some biometric identification, such as face recognition, doesn’t quite work when wearing masks. This made us wonder how well voice biometric solutions work when speakers are wearing masks, and we decided to perform a small experiment to find out.

Over the last few weeks, we have been performing some small-scale tests of our VOCALISE and PHONATE software against speech spoken from behind a mask. We have found our systems to be quite robust to masked speech – they are able to recognise speakers across different mask-wearing conditions well.

The video below explains our experiment and discusses our findings. We hope that you find it interesting! 

Speech Communication journal publication on voice similarity – joint work by Cambridge University and Oxford Wave Research

Exploring the relationship between voice similarity estimates by listeners and by an automatic speaker recognition system incorporating phonetic features



We are happy to announce that our latest paper has been accepted for publication in the prestigious ‘Speech Communication’ journal. This represents joint work between Cambridge University’s ‘Faculty of Modern and Medieval Languages and Linguistics’ and Oxford Wave Research (OWR).


This paper is titled ‘Exploring the relationship between voice similarity estimates by listeners and by an automatic speaker recognition system incorporating phonetic features’ and is authored by Linda Gerlach (OWR, Cambridge), Dr Kirsty McDougall (Cambridge), Dr Finnian Kelly (OWR), Dr Anil Alexander (OWR), and Prof. Francis Nolan (Cambridge).

Finding similar-sounding voices is of interest in many areas, be it for voice parades in a forensic setting, voice casting for film-dubbing, or voice banking to preserve one’s voice for future synthesis in case of a degenerative disease. However, doing this by ear is a very time-consuming and expensive task. With the aim of finding an objective method that could speed up the process, we considered an automatic approach to rating voice similarity, and explored the relationship between voice similarity ratings made by a total of 106 human listeners – some of whom may have been you – and comparison scores produced by an i-vector-based automatic speaker recognition system that extracts perceptually-relevant phonetic features. Our results showed a significant positive correlation between human and machine, motivating us to continue our developments in this space.

The main highlight of this work is that human judgements of voice similarity correlate with automatic speaker recognition assessments using auto-phonetic features; this trend was seen with both English and German speakers’ judgements of English voices. These automatic assessments therefore show potential for automatically selecting foil voices for voice parades.
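At its core, the human-machine comparison in the study comes down to correlating two sets of numbers per voice pair. The snippet below illustrates the idea with entirely made-up ratings and scores (the real study used 106 listeners and an i-vector system with phonetic features):

```python
import numpy as np

# Hypothetical data: mean listener similarity ratings for six voice pairs
# (e.g. on a 1-5 scale) and automatic comparison scores for the same pairs.
human_ratings = np.array([1.2, 2.0, 2.8, 3.5, 4.1, 4.8])
machine_scores = np.array([-3.0, -1.5, -0.2, 1.0, 2.2, 3.1])

# Pearson correlation: values near +1 mean the machine ranks voice pairs
# much as human listeners do.
r = np.corrcoef(human_ratings, machine_scores)[0, 1]
print(round(r, 3))
```

A strong positive correlation on real data would support using the automatic scores to shortlist candidate foil voices, with final selection still made by a human.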

This paper is based on Linda Gerlach’s master’s thesis work (University of Marburg, Germany) at Oxford Wave Research last year and uses the phonetic mode of the VOCALISE speaker recognition software.

Linda Gerlach

The full abstract and paper are available for free download from the journal’s webpage via the following link until 19th November 2020:

https://authors.elsevier.com/a/1bqZu_3pyeDhKh

Our Spectrumview app features in BBC 4 documentary – Ocean Autopsy: The Secret Story of Our Seas

We were delighted to see our Spectrumview audio analysis app being used by Dr Helen Czerski, a renowned oceanographer and physicist from University College London, to explore a whole new acoustic world under the waves as part of the BBC 4 documentary ‘Ocean Autopsy: The Secret Story of Our Seas’. Dr Czerski drops a hydrophone into the depths of the ocean and listens to, and visualises, the sounds deep under the water using Spectrumview. In this excellent programme they explore how sounds deep under the water (including man-made sounds) can affect marine life such as porpoises.

So the ocean surface is effectively a barrier for almost all sound, so we have no idea what’s going on down there, and it’s a different acoustic world. But you can listen in with the help of a little bit of technology.

Dr Helen Czerski, Oceanographer



Oxford Wave Research Appointed as Salient Sciences’ Exclusive Distributor in UK and Ireland

Kicking off the distributor appointment (left to right: Anil Alexander, CEO, Oxford Wave Research; Jeff Hunter, CTO, Video Products; and Don Tunstall, General Manager, Salient Sciences)

Oxford Wave Research Ltd. are pleased to announce our appointment as the exclusive distributor in the United Kingdom and the Republic of Ireland for Salient Sciences (legal name Digital Audio Corporation, known to many as “DAC”).

We are excited to have our colleagues at Oxford Wave Research now officially offering Salient Sciences’ products and services in the UK and Ireland. We have previously worked closely with them on several interesting projects; going forward, we anticipate an even closer collaboration to provide unique, innovative solutions to our shared base of audio and video forensics clients worldwide.

Donald Tunstall, General Manager, Salient Sciences

We also have many years of experience working with the DAC hardware-based audio processing solutions, such as the MicroDAC, PCAP, and CARDINAL AudioLab systems.

OWR will now be taking over all sales and support in the UK and Ireland, with immediate effect, for the VideoFOCUS and CARDINAL MiniLab Suite products, including all maintenance contracts and support.

Watch this space for training course announcements from DAC in the UK in 2020.

Dr Ekrem Malkoç joins Oxford Wave Research

Dr Ekrem Malkoç is joining Oxford Wave Research as our Technical Sales Manager. He will be spearheading expansion of Oxford Wave Research’s forensic and commercial speech and audio processing products into new regions and markets.

Ekrem is a well-known expert in forensic speech and audio processing, forensic image analysis, and forensic linguistics. He has a PhD in forensic linguistics from Ankara University (Turkey), MSc and MA degrees in Criminalistics and European Criminology from Ankara University and the Katholieke Universiteit Leuven (Belgium) respectively, and a bachelor’s degree in Electrical and Electronics Engineering. Ekrem worked in the Turkish Gendarmerie until 2015 as a Colonel, after having served as the manager of two regional Gendarmerie Forensic Laboratories.

You can read more about him here: https://oxfordwaveresearch.com/about-us/

Oxford Wave Research in Turkey for IAFPA 2019

Last week (14-17 July 2019) some of the OWR team had the pleasure of attending the annual IAFPA (International Association for Forensic Phonetics and Acoustics) conference which was hosted this year in Istanbul, Turkey. 

It was a great opportunity for us to learn about the work of other members of the forensic phonetics and acoustics community from all around the world. One of the hot topics at IAFPA this year was cross-language speaker comparison (Croatian-Serbian, Czech-Persian, and French-English, to name a few). We were delighted to see how much of this and other research from Switzerland and the Netherlands made use of the capabilities of our forensic automatic speaker recognition software, VOCALISE.

The OWR team with the conference organiser
Burcu Önder Gürpinar 

We enjoyed every part of the conference but the highlight for us was undoubtedly our intern Linda’s poster winning the 2019 Best Student Poster award. As you can imagine, the team celebrated appropriately with Turkish beer. 

Linda presenting speaker profiling and speaker recognition using x-vectors (winner of best student poster award).

We also showcased our advances in the use of Deep Neural Networks (DNNs) and x-vectors in automatic speaker comparison and speaker profiling, presented by Dr Finnian Kelly, our Principal Research Scientist.

Finnian presenting our x-vector paper

Abstracts of our papers:
1. From i-vectors to x-vectors – a generational change in speaker recognition illustrated on the NFI-FRIDA database, Finnian Kelly, Anil Alexander, Oscar Forth and David van der Vloed, 14-17 July 2019, International Association of Forensic Phonetics and Acoustics (IAFPA) Conference, Istanbul, Turkey [download here]

2. The effect of background selection on the strength of evidence, David van der Vloed, Finnian Kelly and Anil Alexander, 14-17 July 2019, International Association of Forensic Phonetics and Acoustics (IAFPA) Conference, Istanbul, Turkey [download here]

3. One out of many: A sliding window approach to automatic speaker recognition with multi-speaker files, Linda Gerlach, Finnian Kelly and Anil Alexander, 14-17 July 2019, International Association of Forensic Phonetics and Acoustics (IAFPA) Conference, Istanbul, Turkey [download here]

4. More than just identity: speaker recognition and speaker profiling using the GBR-ENG database, Linda Gerlach, Finnian Kelly and Anil Alexander, 14-17 July 2019, International Association of Forensic Phonetics and Acoustics (IAFPA) Conference, Istanbul, Turkey (Winner of 2019 Best Student Paper award) [download here]

Special thanks to Burcu Önder Gürpinar for four fantastic days of forensics, and we look forward to showing you what we have in store for IAFPA 2020.
