Blog
ATCO2 at Interspeech 2021
—
by admin
—
last modified
Aug 20, 2021 10:14 AM
INTERSPEECH is an annual conference covering all scientific and technological aspects of speech. More than 1,700 participants from all over the world attend each year to present their work in oral and poster sessions. It is one of the largest speech-related conferences worldwide, with an acceptance rate below 50%. Due to COVID-related restrictions, the conference will be held both online and on-site.
Satellite Workshop – Automatic Speech Recognition in Air Traffic Management (ASR-ATM)
—
by Petr Motlicek
—
last modified
Aug 23, 2021 03:19 PM
The ATCO2 project will be presented at a satellite workshop of Interspeech 2021. The purpose of this satellite workshop is to bring together Speech Recognition and Air Traffic Management (ATM), i.e. the experts attending Interspeech and the ATM world. The presentations specifically address the topic of speech recognition in ATM, raising questions and putting challenges for both sides on the table.
Date/Time: 30th of August @ 14:00 CEST
Costs: free of charge
Special session at the Interspeech 2021 conference
—
by Petr Motlicek
—
last modified
Aug 09, 2021 03:56 PM
The ATCO2 project (in association with the related HAAWAII project, also supported by the EC) succeeded with its proposal to organise a special session at the Interspeech 2021 conference (30.8.-3.9.2021). Interspeech is the leading international conference, organised yearly, focused primarily on research and application of speech and audio technologies. The conference is highly ranked, with an acceptance rate below 50%.
Contextual adaptation for improving call sign recognition
—
by admin
—
last modified
Jul 19, 2021 01:47 PM
Contextual adaptation is a technique of "suggesting" small snippets of text that are likely to appear in the speech recognition output. The snippets are derived from the current "situation" of the speaker; in the ATCO2 project, this is the location and time of the communication.
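As an illustration of the idea, here is a minimal sketch, with a hypothetical helper and made-up data (the actual ATCO2 pipeline may differ), of turning location and time into boosted snippets for the recogniser:

# Minimal sketch of deriving context snippets from location and time.
# aircraft_near() is a hypothetical placeholder; in practice the list would
# come from surveillance (ADS-B) data around the given airport and time.
from datetime import datetime, timezone

def aircraft_near(airport: str, when: datetime) -> list[str]:
    return ["speedbird seven alpha five", "lufthansa four x-ray mike"]

def context_snippets(airport: str, when: datetime, boost: float = 3.0) -> dict[str, float]:
    """Map each snippet that is likely to be spoken to a boosting weight."""
    return {phrase: boost for phrase in aircraft_near(airport, when)}

print(context_snippets("LKPR", datetime.now(timezone.utc)))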
ATC recording using SDR - deeper analysis - comparing HW setups
—
by Igor Szoke
—
last modified
Jul 08, 2021 11:35 AM
ATC recording using SDR - deeper analysis - raw signal processing and SNR estimation
—
by Igor Szoke
—
last modified
Jul 08, 2021 11:37 AM
This blog post is more technical: we describe our raw signal processing pipeline. The rtl-airband software is configured to produce the raw data coming from the SDR hardware in the cs16 format (interleaved signed 16-bit I/Q samples).
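As a rough illustration, here is a minimal sketch, not the project's actual code, of reading such a cs16 dump with NumPy and estimating SNR by comparing loud frames (transmissions) to quiet frames (noise floor); the file name and frame length are illustrative assumptions:

import numpy as np

def read_cs16(path: str) -> np.ndarray:
    """Read interleaved signed 16-bit I/Q samples into a complex array."""
    raw = np.fromfile(path, dtype=np.int16).astype(np.float32)
    return raw[0::2] + 1j * raw[1::2]

def estimate_snr_db(iq: np.ndarray, frame: int = 4096) -> float:
    """Rough SNR estimate: compare loud frames (speech bursts) to the noise floor."""
    n = (len(iq) // frame) * frame
    frame_power = (np.abs(iq[:n].reshape(-1, frame)) ** 2).mean(axis=1)
    noise = np.percentile(frame_power, 10)     # quiet frames: noise floor
    signal = np.percentile(frame_power, 90)    # loud frames: active transmissions
    return 10.0 * np.log10(max(signal - noise, 1e-12) / noise)

iq = read_cs16("atc_dump.cs16")                # hypothetical file name
print(f"estimated SNR: {estimate_snr_db(iq):.1f} dB")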
How to set up your SDR for clean ATC audio
—
by Igor Szoke
—
last modified
Jun 08, 2021 09:12 AM
Basic terminology and hardware setup description for ATC listening
—
by Igor Szoke
—
last modified
May 26, 2021 06:02 PM
What is the best SDR hardware choice for ATC
—
by Igor Szoke
—
last modified
Jun 07, 2021 05:58 PM
Where to place your antenna for ATC recordings
—
by Igor Szoke
—
last modified
Jun 07, 2021 06:03 PM
End-to-End Callsign Recognition System
—
by Petr Motlicek
—
last modified
May 12, 2021 03:45 PM
BUT partner ranked high for their work in the field of automatic speech recognition
—
by Petr Motlicek
—
last modified
May 12, 2021 03:45 PM
The Faculty of Information Technology at Brno University of Technology is among the world leaders in the field of automatic speech recognition.
Improving callsign recognition by incorporating information from the radar
—
by Petr Motlicek
—
last modified
Mar 29, 2021 02:14 PM
When Air Traffic Controllers (ATCOs) talk to pilots, they identify the plane the pilot is flying with a callsign. A callsign usually consists of one word for the airline followed by a sequence of alphanumeric characters, for example "Speedbird Seven Alpha Five". When doing speech recognition on ATC communications, recognising callsigns correctly is particularly important.
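To make the structure concrete, here is a minimal illustrative sketch, not the project's code, that expands a written ICAO-style callsign such as "BAW7A5" into its spoken form; the telephony designator table is a small assumed excerpt:

TELEPHONY = {"BAW": "speedbird", "DLH": "lufthansa", "RYR": "ryanair"}  # excerpt only
DIGITS = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]
NATO = dict(zip("ABCDEFGHIJKLMNOPQRSTUVWXYZ",
                ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot",
                 "golf", "hotel", "india", "juliett", "kilo", "lima", "mike",
                 "november", "oscar", "papa", "quebec", "romeo", "sierra",
                 "tango", "uniform", "victor", "whiskey", "xray", "yankee", "zulu"]))

def spoken_callsign(callsign: str) -> str:
    """Expand the written form, e.g. "BAW7A5", into spoken words."""
    airline, suffix = callsign[:3], callsign[3:]
    words = [TELEPHONY.get(airline, " ".join(NATO[c] for c in airline))]
    for ch in suffix:
        words.append(DIGITS[int(ch)] if ch.isdigit() else NATO[ch])
    return " ".join(words)

print(spoken_callsign("BAW7A5"))  # -> speedbird seven alpha five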
Processing speech recordings: some data protection issues by Romagna Tech
—
by admin
—
last modified
Feb 08, 2021 03:10 PM
When Air Traffic Control enthusiasts record conversations, they may be unaware of what speech is in terms of data protection: it can be regarded as biometric data, similar to a fingerprint.
Setting Up VHF receiver for air-traffic communication
—
by Petr Motlicek
—
last modified
Nov 30, 2020 01:36 PM
As people who follow the ATCO2 project know, it is about converting the communication between the pilot and the air traffic controller (or controller, in short) from voice to text.
Air Traffic Control Conversations Collection – A legal introduction by ELDA
—
by Petr Motlicek
—
last modified
Nov 30, 2020 12:00 PM
ELRA, the European Language Resources Association, and its distribution agency, ELDA, were founded in 1995 and have been world-wide leading players in distributing Language Resources and providing other services to the speech and language communities. In the course of the ATCO2 project, ELDA provides legal expertise for the collection of Air Traffic Control conversations and for data management in the project.
Is it possible to have a cross-accent speech recognizer that works for different airports?
—
by Petr Motlicek
—
last modified
Nov 30, 2020 12:01 PM
One of the main concerns in automatic speech recognition (ASR) for air-traffic communications is the presence of several non-English accents in the English communications between pilots and air traffic controllers (ATCOs).
Bringing together what belongs together: Matching voice commands and radar data
—
by Petr Motlicek
—
last modified
Nov 30, 2020 12:01 PM
The whole ATCO2 project relies on two input streams. One input stream is the voice commands issued by the air-traffic controller, and the other is provided by automatic dependent surveillance-broadcast (ADS-B) data. Since both streams are collected independently, matching the data is inevitable. The following lines give an insight into how the matching process is done.
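As a simple illustration of the idea, here is a minimal sketch, with assumed data shapes and not the project's implementation, that matches a recognised callsign to radar (ADS-B) records seen within a short time window, scored by string similarity; both sides are assumed to be normalised to the same written form:

from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class AdsbRecord:
    callsign: str     # e.g. "BAW7A5"
    timestamp: float  # seconds since epoch

def match_utterance(recognised_callsign: str, utterance_time: float,
                    radar: list[AdsbRecord], window_s: float = 60.0):
    """Pick the aircraft seen near the utterance time whose callsign fits best."""
    candidates = [r for r in radar if abs(r.timestamp - utterance_time) <= window_s]
    if not candidates:
        return None
    return max(candidates,
               key=lambda r: SequenceMatcher(None, recognised_callsign, r.callsign).ratio())

radar = [AdsbRecord("BAW7A5", 1000.0), AdsbRecord("DLH4XM", 1020.0)]
print(match_utterance("BAW7A5", 1010.0, radar).callsign)  # -> BAW7A5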
Automatic speech recognition: how does it work?
—
by Petr Motlicek
—
last modified
Nov 30, 2020 12:01 PM
[Updated 3.4.2020]: The ATCO2 project is closely aligned with the development of automatic speech recognition engines for Air Traffic Controllers (ATCOs), in particular to automatically transcribe their communication with the pilots. This blog post gives some insight into the process of automatic speech recognition, current trends, and some details on how it will be integrated into the ATCO2 project. We describe a hybrid HMM-based speech recogniser, which is the current state-of-the-art speech recognition paradigm. The literature also suggests end-to-end systems; however, we did not consider using these for practical reasons. We use the Kaldi toolkit [1], both for training the baseline models and for processing the untranscribed data.
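To sketch the core idea behind hybrid decoding (a toy illustration, not Kaldi code): a neural acoustic model produces per-frame log-likelihoods of HMM states, and a Viterbi search combines them with transition and language-model scores to find the best state sequence:

import numpy as np

def viterbi(log_likes: np.ndarray, log_trans: np.ndarray, log_prior: np.ndarray) -> list[int]:
    """log_likes: [T, S] acoustic scores; log_trans: [S, S]; log_prior: [S]."""
    T, S = log_likes.shape
    score = log_prior + log_likes[0]           # best score ending in each state
    back = np.zeros((T, S), dtype=int)         # backpointers for path recovery
    for t in range(1, T):
        cand = score[:, None] + log_trans      # previous state x current state
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_likes[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]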
The annotation started
—
by Petr Motlicek
—
last modified
Nov 30, 2020 12:02 PM
[Updated 1.2.2020]: The work on the ATCO2 project started during Christmas. We kicked off one of the important phases: data collection and transcription. Why do we need to start with this? Well, this will be explained in a larger context below!
Kick-off meeting
—
by admin
—
last modified
Dec 03, 2019 03:56 PM
The kick-off meeting took place at Brno University of Technology on November 22, 2019.