
Language Recognition Evaluation (LRE)

Summary
The Language Recognition Evaluation (LRE) is a language detection challenge that measures how well systems can automatically determine whether a target language is spoken in a given test segment. The LRE is an ongoing series of evaluations that NIST has conducted since 1996.
Task
  • Language Detection: given a segment of speech and a target language, the task is to automatically determine whether the target language was spoken in the test audio segment. The system will be presented with segments that nominally contain between 3 s and 30 s of speech (as determined by an automatic speech activity detector). Each segment contains one of the target languages, and, for each segment, the system must output a log-likelihood score for each of the target languages, with higher values indicating the segment is more likely to contain that language (see the output sketch after this list).
  • Please refer to the evaluation plan below for the detailed tasks and relevant metrics.
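
The authoritative system output format is defined in the evaluation plan; the minimal Python sketch below only illustrates the idea of emitting one log-likelihood score per target language for every test segment. The language labels, TSV column layout, and file names are illustrative assumptions, not the official specification.

  # A minimal sketch (not the official format): one log-likelihood per target
  # language for every test segment, written as a tab-separated file.
  import csv
  import math

  TARGET_LANGUAGES = ["afr-afr", "ara-aeb", "eng-ens"]  # hypothetical label set

  def score_segment(segment_id):
      # Placeholder scorer: a real system would process the audio segment and
      # produce a log-likelihood per target language; here we return a uniform
      # dummy value so the sketch runs end to end.
      return {lang: math.log(1.0 / len(TARGET_LANGUAGES)) for lang in TARGET_LANGUAGES}

  def write_scores(segment_ids, out_path="system_output.tsv"):
      with open(out_path, "w", newline="") as f:
          writer = csv.writer(f, delimiter="\t")
          writer.writerow(["segmentid"] + TARGET_LANGUAGES)  # assumed header row
          for seg in segment_ids:
              scores = score_segment(seg)
              writer.writerow([seg] + [f"{scores[lang]:.4f}" for lang in TARGET_LANGUAGES])

  if __name__ == "__main__":
      write_scores(["lre22_dev_0001.sph", "lre22_dev_0002.sph"])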

Evaluation Plan
Who
LRE is open worldwide; all organizations are invited to submit their system results to the leaderboards. The challenge provides data (e.g., training, development, and test sets) to participants, who train and run their systems on their own hardware and submit their system outputs to a web-based leaderboard.
How
To take part in the LRE challenge, you need to register on this website and complete the data license agreement to download the data. Once your system is functional, you will be able to upload your system output along with your system description to the challenge website. Please refer to the Instructions for details.
Contact
If you have any questions, please email the LRE team at lre_poc@nist.gov
Acknowledgement
NIST has received support from other U.S. government agencies, such as the Department of Defense, the Department of Justice, and the Intelligence Advanced Research Projects Activity (IARPA), to build a forum for the advancement of language recognition technology through the NIST LRE series.
2022 LRE and Workshop Schedule

  • Evaluation Plan Published: August 31, 2022
  • Registration Period: September - October 2022
  • Training & Development Data Available: September 2022
  • Test Data Available to Participants: October 17, 2022
  • Submission Deadline for Fixed Training: November 18, 2022 (5 PM EST)
  • Submission Deadline for Open Training: December 2, 2022 (5 PM EST)
  • Preliminary Results Released: December 16, 2022
  • Post Evaluation Workshop (Virtual): January 31, 2023
2022 LRE Data

Coming Soon...

LRE/SRE Paper & Data Citations


NIST LRE Citation Link
LRE17 Sadjadi, Seyed Omid, Timothee Kheyrkhah, Craig S. Greenberg, Elliot Singer, Douglas A. Reynolds, Lisa P. Mason, and Jaime Hernandez-Cordero. "Performance Analysis of the 2017 NIST Language Recognition Evaluation." In Interspeech, pp. 1798-1802. 2018. 10.21437/Interspeech.2018-69
LRE17 S. O. Sadjadi, T. Kheyrkhah, A. Tong, C. S. Greenberg, D. A. Reynolds, E. Singer, L. P. Mason, and J. Hernandez-Cordero, “The 2017 NIST language recognition evaluation,” in Proc. Odyssey, Les Sables d'Olonne, France, June 2018, pp. 82–89 10.21437/Odyssey.2018-12
LRE15 H. Zhao, D. Bansé, G. Doddington, C. Greenberg, J. Hernández-Cordero, J. Howard, L. Mason, A. Martin, D. Reynolds, E. Singer, and A. Tong, “Results of the 2015 NIST language recognition evaluation,” in Interspeech 2016, San Francisco, USA, September 2016, pp. 3206–3210 10.21437/Interspeech.2016-169
LRE96, LRE03, LRE05, LRE07, LRE09, LRE11 A. F. Martin, C. S. Greenberg, J. M. Howard, G. R. Doddington, and J. J. Godfrey, “NIST language recognition evaluation - past and future,” in Odyssey 2014, Joensuu, Finland, June 2014, pp. 145–151 10.21437/Odyssey.2014-23


NIST SRE Citation Link
SRE21 Sadjadi, S.O., Greenberg, C., Singer, E., Mason, L., Reynolds, D. (2022) The 2021 NIST Speaker Recognition Evaluation. Proc. The Speaker and Language Recognition Workshop (Odyssey 2022), 322-329 10.21437/Odyssey.2022-45
CTS Challenge Sadjadi, S.O., Greenberg, C., Singer, E., Mason, L., Reynolds, D. (2022) The NIST CTS Speaker Recognition Challenge. Proc. The Speaker and Language Recognition Workshop (Odyssey 2022), 314-321 10.21437/Odyssey.2022-44
SRE19 O. Sadjadi, C. Greenberg, E. Singer, D. Reynolds, L. Mason, and J. Hernandez-Cordero, “The 2019 NIST Audio-Visual Speaker Recognition Evaluation,” in Proc. The Speaker and Language Recognition Workshop (Odyssey 2020), 2020, pp. 259–265 10.21437/Odyssey.2020-37
SRE19 CTS Challenge S. O. Sadjadi, C. Greenberg, E. Singer, D. Reynolds, L. Mason, and J. Hernandez-Cordero, “The 2019 NIST Speaker Recognition Evaluation CTS Challenge,” in Proc. The Speaker and Language Recognition Workshop (Odyssey 2020), 2020, pp. 266–272 10.21437/Odyssey.2020-38
SRE18 S. O. Sadjadi, C. S. Greenberg, E. Singer, D. A. Reynolds, L. P. Mason, and J. Hernandez-Cordero, “The 2018 NIST speaker recognition evaluation,” in Proc. INTERSPEECH, Graz, Austria, September 2019, pp. 1483–1487 10.21437/Interspeech.2019-1351
SRE16 S. O. Sadjadi, T. Kheyrkhah, A. Tong, C. S. Greenberg, D. A. Reynolds, E. Singer, L. P. Mason, and J. Hernandez-Cordero, “The 2016 NIST speaker recognition evaluation,” in Proc. INTERSPEECH, Stockholm, Sweden, August 2017, pp. 1353–1357 10.21437/Interspeech.2017-458
SRE96 - SRE06, SRE08, SRE10, SRE12 C. S. Greenberg, L. P. Mason, S. O. Sadjadi, and D. A. Reynolds, “Two decades of speaker recognition evaluation at the National Institute of Standards and Technology,” Computer Speech & Language, vol. 60, 2020 10.1016/j.csl.2019.101032


NIST ivec Citation Link
ivec15 A. Tong, C. Greenberg, A. Martin, D. Banse, J. Howard, H. Zhao, G. Doddington, D. Garcia-Romero, A. McCree, D. Reynolds, E. Singer, J. Hernandez-Cordero, and L. Mason, “Summary of the 2015 NIST language recognition i-vector machine learning challenge,” in Odyssey 2016: The Speaker and Language Recognition Workshop, Bilbao, Spain, June 21-24, 2016, pp. 297–302 10.21437/Odyssey.2016-43
ivec14 D. Banse, G. R. Doddington, D. Garcia-Romero, J. J. Godfrey, C. S. Greenberg, A. F. Martin, A. McCree, M. A. Przybocki, and D. A. Reynolds, “Summary and initial results of the 2013-2014 speaker recognition i-vector machine learning challenge,” in Proc. INTERSPEECH, Singapore, Singapore, September 2014, pp. 368–372 10.21437/Interspeech.2014-86


NIST Misc Citation Link
LRE Homepage NIST Language Recognition Evaluation nist.gov/itl/iad/mig/language-recognition
SRE Homepage NIST Speaker Recognition Evaluation nist.gov/itl/iad/mig/speaker-recognition
Normalized Cross-Entropy paper A tutorial introduction to the ideas behind Normalized Cross-Entropy and the information-theoretic idea of Entropy nist.gov/file/411831
SPHERE sw Speech file manipulation software (SPHERE) package version 2.7, 2012 nist.gov/itl/iad/mig/tools
Babel data M. P. Harper, "Data resources to support the Babel program," https://goo.gl/9aq958
DET curves A. F. Martin, G. R. Doddington, T. Kamm, M. Ordowski, and M. A. Przybocki, "The DET curve in assessment of detection task performance," in Proc. EUROSPEECH, Rhodes, Greece, September 1997, pp. 1899–1903 10.21437/Eurospeech.1997-504


LDC Data Citation Link
SWB-1, rel2 J. Godfrey and E. Holliman, "Switchboard-1 Release 2," 1993 catalog.ldc.upenn.edu/LDC97S62
SWB-2, Pt1 D. Graff, A. Canavan, and G. Zipperlen, "Switchboard-2 Phase I," 1998 catalog.ldc.upenn.edu/LDC98S75
SWB-2, Pt2 D. Graff, K. Walker, and A. Canavan, "Switchboard-2 Phase II," 1999 catalog.ldc.upenn.edu/LDC99S79
SWB-2, Pt3 D. Graff, D. Miller, and K. Walker, "Switchboard-2 Phase III," 2002 catalog.ldc.upenn.edu/LDC2002S06
SWBCell, Pt1 D. Graff, K. Walker, and D. Miller, "Switchboard Cellular Part 1 Audio," 2001 catalog.ldc.upenn.edu/LDC2001S13
SWBCell, Pt2 D. Graff, K. Walker, and D. Miller, "Switchboard Cellular Part 2 Audio," 2004 catalog.ldc.upenn.edu/LDC2004S07
Fisher Eng Train, Pt1 Speech C. Cieri, D. Graff, O. Kimball, D. Miller, and K. Walker, "Fisher English Training Speech Part 1 Speech," 2004 catalog.ldc.upenn.edu/LDC2004S13
C. Cieri, D. Miller, and K. Walker, "The Fisher corpus: A resource for the next generations of speech-to-text," in Proc. LREC, Lisbon, Portugal, May 2004, pp. 69–71 proceedings/lrec2004
Fisher Eng Train, Pt1 Transcripts C. Cieri, D. Graff, O. Kimball, D. Miller, and K. Walker, "Fisher English Training Speech Part 1 Transcripts," 2004 catalog.ldc.upenn.edu/LDC2004T19
Fisher Eng Train, Pt2 Speech C. Cieri, D. Graff, O. Kimball, D. Miller, and K. Walker, "Fisher English Training Speech Part 2 Speech," 2004 catalog.ldc.upenn.edu/LDC2005S13
Fisher Eng Train, Pt2 Transcripts C. Cieri, D. Graff, O. Kimball, D. Miller, and K. Walker, "Fisher English Training Speech Part 2 Transcripts," 2004 catalog.ldc.upenn.edu/LDC2005T19
CallMyNet K. Jones, S. Strassel, K. Walker, D. Graff, and J. Wright, "Call my net corpus: A multilingual corpus for evaluation of speaker recognition technology," in Proc. INTERSPEECH, Stockholm, Sweden, August 2017, pp. 2621–2624 10.21437/Interspeech.2017-1521
MLS/MLS14 K. Jones, D. Graff, J. Wright, K. Walker, and S. Strassel, "Multi-language speech collection for NIST LRE," in Proc. LREC, Portoroz, Slovenia, May 2016, pp. 4253–4258 jones-etal-2016-multi
Mixer (pt. 1) C. Cieri, J. P. Campbell, H. Nakasone, D. Miller, and K. Walker, "The Mixer corpus of multilingual, multichannel speaker recognition data," in Proc. LREC, Lisbon, Portugal, May 2004 cieri-etal-2004-mixer
Mixer (pt. 2) C. Cieri, L. Corson, D. Graff, and K. Walker, "Resources for new research directions in speaker recognition: The Mixer 3, 4 and 5 corpora," in Proc. INTERSPEECH, Antwerp, Belgium, August 2007 10.21437/Interspeech.2007-340
Mixer (pt. 3) L. Brandschain, D. Graff, C. Cieri, K. Walker, C. Caruso, and A. Neely, "The Mixer 6 corpus: Resources for cross-channel and text independent speaker recognition," in Proc. LREC, Valletta, Malta, May 2010, pp. 2441–2444 lrec2010/792
VAST J. Tracey and S. Strassel, "VAST: A corpus of video annotation for speech technologies," inProc. LREC, Miyazaki, Japan, May 2018, pp. 4318–4321 tracey-strassel-2018-vast
SRE16 test set S. O. Sadjadi, C. Greenberg, T. Kheyrkhah, K. Jones, K. Walker, S. Strassel, and D. Graff, "2016 NIST Speaker Recognition Evaluation Test Set," 2019 catalog.ldc.upenn.edu/LDC2019S20
SRE21 dev/test set Sadjadi, Seyed Omid, Craig Greenberg, Elliot Singer, Lisa Mason, and Douglas Reynolds, "The 2021 NIST Speaker Recognition Evaluation" (LDC2021E10) arxiv.org/abs/2204.10242
Janus multimedia dataset G. Sell, K. Duh, D. Snyder, D. Etter and D. Garcia-Romero, "Audio-Visual Person Recognition in Multimedia Data From the Iarpa Janus Program," 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, pp. 3031-3035 (LDC2019E55) 10.1109/ICASSP.2018.8462122
CTS Superset S. O. Sadjadi, D. Graff, and K. Walker, "NIST SRE CTS Superset LDC2021E08," Web Download. Philadelphia: Linguistic Data Consortium, 2021
S. O. Sadjadi, "NIST SRE CTS Superset: A large-scale dataset for telephony speaker recognition," arXiv preprint arXiv:2108.07118, 2021 10.48550/arXiv.2108.07118
WeCanTalk K. Jones, K. Walker, C. Caruso, J. Wright, and S. Strassel, "WeCanTalk: A new multi-language, multi-modal resource for speaker recognition," Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022), pages 3451–3456 lrec2022-we-can-talk
Signing Up

In order to participate, a user account is required. Signing up for an account is a simple two-step process using email confirmation, explained here. The help center additionally shows how to reset a lost password or unlock an account.

Access to the Evaluation Dataset

After creating an account and signing into the participation dashboard, please follow the registration workflow on the left side in order to obtain access to the data.

  1. As a first step, create a Site, which represents your point of contact. Detailed instructions.
  2. In the next step, obtain and sign both the evaluation agreement and the dataset license agreement. The evaluation agreement is a checkbox, while the dataset license agreement is a PDF document that must be downloaded, filled out, scanned, and uploaded; it will then be validated by the LRE license liaison. Detailed instructions.
  3. After licensing access has been established, the dataset section at the bottom right of the dashboard will point to a download page.
Register for Track Participation

In the next workflow step, please select which LRE track you wish to participate in.

How To Submit System Output

System output submissions to the leaderboard must be made through the web platform using the submission instructions described on the webpage (Submission Management). To prepare your submission, first create a gzip-compressed tar file of your system output TSV file via the UNIX command ‘tar cvzf [submission-name].tgz [submission-file-name].tsv’, and then make your submission as follows (a packaging sketch appears after these steps):

  1. Navigate to your “Dashboard”
  2. Under “Submission Management”, click your task
  3. Add a new “System” or use an existing system
  4. Click on “Upload”
  5. Fill in the form and click “Submit”
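
For reference, the following minimal Python sketch is equivalent to the tar command above; the file names are placeholders, not required names.

  # Packages a single system output TSV into a gzip-compressed tar archive,
  # equivalent to `tar cvzf [submission-name].tgz [submission-file-name].tsv`.
  import tarfile

  def package_submission(tsv_path="my_system_output.tsv",
                         archive_path="my_submission.tgz"):
      # "w:gz" writes a gzip-compressed tar archive.
      with tarfile.open(archive_path, "w:gz") as tar:
          tar.add(tsv_path, arcname=tsv_path)
      return archive_path

  if __name__ == "__main__":
      print("Wrote", package_submission())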
How To Validate

The LRE-Scorer package (to be released publicly soon) contains an output format checker that validates the submission. To validate your system output locally, please use the following command line; until the package is available, a rough sanity-check sketch is provided after the placeholder below:

  • coming soon...
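
Pending the official checker, a basic local sanity check along the following lines may catch obvious formatting problems. The assumed layout (a header row followed by one row per segment, each holding a segment identifier and one numeric log-likelihood per target language) is an illustrative assumption, not the official specification.

  # Rough local sanity check, pending the official LRE-Scorer format checker.
  # Assumes (illustratively) a header row followed by one row per segment,
  # each with a segment ID and one numeric log-likelihood per target language.
  import csv
  import sys

  def basic_check(tsv_path):
      with open(tsv_path, newline="") as f:
          rows = list(csv.reader(f, delimiter="\t"))
      if not rows:
          return ["file is empty"]
      problems = []
      width = len(rows[0])
      if width < 2:
          problems.append("header row has fewer than 2 columns")
      for i, row in enumerate(rows[1:], start=2):
          if len(row) != width:
              problems.append(f"line {i}: expected {width} columns, got {len(row)}")
              continue
          for value in row[1:]:
              try:
                  float(value)
              except ValueError:
                  problems.append(f"line {i}: non-numeric score {value!r}")
      return problems

  if __name__ == "__main__":
      issues = basic_check(sys.argv[1])
      print("OK" if not issues else "\n".join(issues))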
Rules

All audio segments must be processed independently of each other within a given task, meaning that content extracted from one segment must not affect the processing of any other segment.

While participants may report their own results, participants may not make advertising claims about their standing in the evaluation, regardless of rank, or about winning the evaluation, nor claim NIST endorsement of their system(s). The following language in the U.S. Code of Federal Regulations (15 C.F.R. § 200.113) shall be respected: "NIST does not approve, recommend, or endorse any proprietary product or proprietary material. No reference shall be made to NIST, or to reports or results furnished by NIST in any advertising or sales promotion which would indicate or imply that NIST approves, recommends, or endorses any proprietary product or proprietary material, or which has as its purpose an intent to cause directly or indirectly the advertised product to be used or purchased because of NIST test reports or results."

At the conclusion of the evaluation, NIST may generate a report summarizing the system results for conditions of interest. Participants may publish or otherwise disseminate these charts, unaltered and with appropriate reference to their source.

LRE22 Leaderboard

Updated: 2023-03-02 10:15:40 -0500
rank team_name submission_id submission_type actCprimary minCprimary report system_description
1 Vocapia-TalTech 32 primary 0.11174 0.11116 /scoring_runs/32 /system_descriptions/14/download
3 Polito-Kore 17 primary 0.18736 0.18642 /scoring_runs/17 /system_descriptions/6/download
4 ABC 60 primary 0.19277 0.19122 /scoring_runs/60 /system_descriptions/19/download
6 JHU-MIT 45 primary 0.22443 0.22094 /scoring_runs/45 /system_descriptions/10/download
12 SAR 50 primary 0.30929 0.27772 /scoring_runs/50 /system_descriptions/15/download
13 SRI 30 primary 0.3263 0.31989 /scoring_runs/30 /system_descriptions/7/download
15 XMUSPEECH 37 primary 0.33083 0.30699 /scoring_runs/37 /system_descriptions/17/download
18 XJU-NTU 44 primary 0.46077 0.43598 /scoring_runs/44
21 ABSP Lab - IIT Kharagpur 29 primary 0.48165 0.47408 /scoring_runs/29 /system_descriptions/2/download
26 IDIAP 52 primary 0.50401 0.50281 /scoring_runs/52 /system_descriptions/16/download
28 leap 63 primary 0.55084 0.53977 /scoring_runs/63 /system_descriptions/9/download
30 AuroraLab 64 primary 0.62069 0.54323 /scoring_runs/64 /system_descriptions/26/download
31 GVIS_ULE 43 primary 0.62364 0.56552 /scoring_runs/43 /system_descriptions/8/download
36 AST 38 primary 0.63687 0.6318 /scoring_runs/38 /system_descriptions/21/download
38 NIST-LRE 66 primary 0.72236 0.60848 /scoring_runs/66
39 SUKI 31 primary 0.72725 0.72206 /scoring_runs/31 /system_descriptions/13/download
41 IITMandi SpeechGroup Team 16 primary 0.86215 0.83091 /scoring_runs/16