Edge AI is finding new uses every day, from fully autonomous robots to edge servers for data analysis. Low power consumption is essential for such systems, enabling long runtimes in challenging and remote environments, which makes edge AI a natural fit for applications where supporting infrastructure is limited. Many interesting projects and solutions fall into this category, and one of them, showcased on NVIDIA’s Technical Blog as their Project of the Month, is Bird@Edge: an AI system that identifies bird species purely from their sound. Developed by a team of researchers from the University of Marburg in Germany, it combines portable audio-recording devices with an NVIDIA Jetson Nano Developer Kit.
Bird@Edge Hardware
The Bird@Edge system comprises three main components. Bird@Edge Mics, ESP32-based boards, capture audio and stream it wirelessly to a local Bird@Edge Station, which processes the data. Recognition results from multiple stations are in turn streamed to the Bird@Edge Server backend for further analysis and presented using Grafana. The small size of the Bird@Edge Mics makes them easy to hide and allows tracking of elusive species that avoid people and bulky equipment.
Audio is transmitted to the station over a local WiFi network, while the station reaches the backend over an LTE connection.
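As a rough illustration of this layout, the sketch below shows how a station might accept a raw PCM audio stream from a mic over the local WiFi network. The port, sample format and framing are assumptions made for this sketch, not details taken from the Bird@Edge implementation.

```python
import socket

# Hypothetical station-side receiver: each Bird@Edge Mic connects over the
# local WiFi network and pushes raw 16-bit mono PCM audio (assumed format).
HOST, PORT = "0.0.0.0", 5000          # port chosen arbitrarily for this sketch
CHUNK = 4096                          # bytes per read

def process_audio(pcm_bytes: bytes) -> None:
    # Placeholder: in the real system this would feed the recognition pipeline.
    pass

def receive_stream() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, addr = srv.accept()      # one mic per connection in this sketch
        print(f"Mic connected from {addr}")
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break              # mic disconnected
                process_audio(data)

if __name__ == "__main__":
    receive_stream()
```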
Each Bird@Edge Mic combines an ESP32 board with a Knowles SPH0645LM4H MEMS microphone and an inexpensive battery pack, while the Bird@Edge Station utilises the Jetson Nano Developer Kit with a Realtek RTL8812BU-based USB WiFi adapter, a Huawei E3372H LTE modem and solar power management circuitry.
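To give a feel for what the mic side has to do, the following MicroPython-flavoured sketch reads the SPH0645LM4H over I2S. The pin assignments and sample rate are assumptions, and the actual Bird@Edge firmware is not necessarily written in MicroPython.

```python
# Illustrative MicroPython sketch for an ESP32 reading an I2S MEMS microphone.
from machine import I2S, Pin

# Pin assignments and sample rate are illustrative; adapt to the actual wiring.
audio_in = I2S(
    0,
    sck=Pin(14),     # bit clock
    ws=Pin(15),      # word select (left/right clock)
    sd=Pin(32),      # serial data from the SPH0645LM4H
    mode=I2S.RX,
    bits=32,         # the SPH0645 delivers 24-bit samples in 32-bit frames
    format=I2S.MONO,
    rate=16000,      # assumed sample rate
    ibuf=20000,
)

buf = bytearray(4096)
while True:
    num_read = audio_in.readinto(buf)   # blocking read of raw samples
    # Here the samples would be packed and streamed to the Bird@Edge Station.
```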
The Jetson Nano was chosen for its ability to run complex ML models while drawing very little power. The team created a custom energy profile for the station that lowered its power consumption from 4.86 W to just 3.16 W with five mics attached, enabling weeks of battery runtime; with the solar panel attached to the station, it can run continuously. The team also found that the station’s power consumption changes very little as the number of connected microphones increases.
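To put those figures in perspective, a quick back-of-the-envelope calculation shows how the lower power draw translates into runtime. The battery capacity used here is an assumption for illustration, not a specification from the project.

```python
# Hypothetical battery sizing: the capacity value is an assumption.
BATTERY_WH = 1000          # e.g. a 12 V battery of roughly 85 Ah

for power_w in (4.86, 3.16):
    runtime_days = BATTERY_WH / power_w / 24
    print(f"{power_w:.2f} W -> about {runtime_days:.1f} days on {BATTERY_WH} Wh")

# 4.86 W -> about 8.6 days; 3.16 W -> about 13.2 days, so the custom energy
# profile stretches the same battery by roughly 50 percent.
```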
Bird@Edge Software
To identify bird species from the sound recordings, the team developed a deep neural network (DNN) based on the EfficientNet-B3 architecture and trained it using TensorFlow. The model was then optimized using NVIDIA TensorRT and deployed with the NVIDIA DeepStream SDK.
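A minimal sketch of how such a classifier can be set up in TensorFlow/Keras is shown below. The input shape (a spectrogram treated as an image) and the training settings are assumptions for illustration, not the team’s published configuration.

```python
import tensorflow as tf

NUM_CLASSES = 82                      # bird species on the Marburg campus
INPUT_SHAPE = (300, 300, 3)           # assumed spectrogram-as-image input size

# EfficientNet-B3 backbone with a fresh classification head.
backbone = tf.keras.applications.EfficientNetB3(
    include_top=False,
    weights="imagenet",
    input_shape=INPUT_SHAPE,
    pooling="avg",
)

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```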
This enabled the team to build AI-powered software that handles multiple audio input streams and recognizes the sounds of the 82 bird species found on the University of Marburg’s campus.
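One possible route from the trained model to a TensorRT engine usable on the Jetson is exporting it to ONNX first, as sketched below. File names, the input shape and the toolchain are assumptions; the team’s exact conversion workflow may differ.

```python
import tensorflow as tf
import tf2onnx

# Load the trained Keras model (the path is a placeholder for this sketch).
model = tf.keras.models.load_model("bird_classifier.h5")

# Export to ONNX so it can be built into a TensorRT engine for deployment.
spec = (tf.TensorSpec((None, 300, 300, 3), tf.float32, name="input"),)
tf2onnx.convert.from_keras(model, input_signature=spec, opset=13,
                           output_path="bird_classifier.onnx")

# The ONNX file can then be turned into a TensorRT engine on the Jetson, e.g.:
#   trtexec --onnx=bird_classifier.onnx --saveEngine=bird_classifier.engine --fp16
# and referenced from the inference configuration of the deployed pipeline.
```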
Each Jetson-based station is tasked with discovering the microphones and streaming their audio, running it through the AI pipeline, and relaying the recognition results to the Bird@Edge server.
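The article does not detail the transport between station and server; as an illustrative assumption, a station could forward each detection to the backend as a small JSON record over its LTE link, along these lines:

```python
import json
import time
import urllib.request

# Hypothetical endpoint; the real Bird@Edge server interface may differ.
SERVER_URL = "http://birdedge.example.org/api/detections"

def report_detection(station_id: str, species: str, confidence: float) -> None:
    """Send one recognition result from the station to the backend."""
    payload = {
        "station": station_id,
        "species": species,
        "confidence": confidence,
        "timestamp": time.time(),
    }
    req = urllib.request.Request(
        SERVER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        resp.read()   # response body is not used in this sketch

# Example: report_detection("station-01", "Erithacus rubecula", 0.93)
```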
The team stated in their research paper that their model “outperforms the state-of-the-art BirdNET neural network on several datasets and achieves a recognition quality of up to 95.2% mean average precision on soundscape recordings.”
Project Goals
The team presents the system as a viable method of tracking regional bird biodiversity. Ornithologists would traditionally spend months painstakingly collecting, transcribing and analysing data to get any results; Bird@Edge gives them an instant overview of an ecosystem’s health. Additionally, there are plans to scale the project up into a commercial product. According to the team’s Höchst: “It is a challenge to go from a hand-assembled prototype that performs well in a controlled environment to a product that can be operated in large quantities without regular maintenance.”
One step toward this goal has already been taken: an open web service accessible to users. “On the one hand, this enables users to use existing audio recorders they already have in stock and upload their files to our web service in the cloud,” Höchst explained. “On the other hand, users receive direct feedback and can, for example, view the spectra of the uploaded files, manually verify the results, or report misclassifications in order to improve the underlying machine learning model.”
Further reading
For more details about Bird@Edge, visit “Bird@Edge: Bird Species Recognition at the Edge” on the NVIDIA Developer Forum. All software and hardware components used in the Bird@Edge project are open source and available through BirdEdge on GitHub. The research paper is linked above in the article, and some information for this text was also taken from NVIDIA’s article on the topic.