Stem Separation for Mixing, Remixing, and Music Production
Frequently Asked Questions
What kinds of applications can be built with the SDK?
The SDK powers a broad range of real-time and on-device applications built on isolated audio: music remixing and mashup tools, karaoke and vocal isolation features, interactive fan engagement experiences, music education and practice apps, mobile songwriting tools, and broadcast or streaming workflows requiring clean separated audio.
Which stems can be separated?
The model supports separation into individual stems including vocals, lead vocals, backing vocals, drums, bass, guitar (acoustic and electric), piano, keys, strings, wind instruments, and more. See the Stem Separation page for the complete list of available stems and model options.
What is AI instrument stem separation?
AI instrument stem separation is the process of using machine learning to isolate individual musical components from a fully mixed audio recording. AudioShake's models can extract individual stems — including vocals, drums, bass, guitar, piano, strings, and more — directly from any mix, without requiring the original multi-track session. The result is a set of isolated audio elements that can be used for remixing, creative tools, education, and interactive experiences.
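The idea can be sketched conceptually: many separation systems predict a time-frequency mask per stem and apply it to the mixture's spectrogram to recover each source. The snippet below is a minimal illustration of that mask-based approach, using an ideal frequency mask as a stand-in for a learned model's prediction — it is not AudioShake's implementation, and the signals are synthetic.

```python
# Conceptual sketch of mask-based source separation (illustrative only;
# a real system learns the masks with a neural network).
import numpy as np
from scipy.signal import stft, istft

sr = 8000
t = np.arange(sr) / sr
bass = np.sin(2 * np.pi * 110 * t)    # low-frequency stand-in for a "bass" stem
vocal = np.sin(2 * np.pi * 1760 * t)  # high-frequency stand-in for a "vocal" stem
mix = bass + vocal                    # the fully mixed recording

# Transform the mix to the time-frequency domain.
f, _, Z = stft(mix, fs=sr, nperseg=512)

# An ideal frequency mask stands in for the model's predicted mask.
bass_mask = (f[:, None] < 500).astype(float)
_, bass_est = istft(Z * bass_mask, fs=sr, nperseg=512)
_, vocal_est = istft(Z * (1 - bass_mask), fs=sr, nperseg=512)

# Each estimate should correlate strongly with its original stem.
corr = np.corrcoef(bass_est[: len(bass)], bass)[0, 1]
print(f"bass correlation: {corr:.2f}")
```

Because the two synthetic sources occupy disjoint frequency bands, the ideal mask recovers each almost perfectly; real music overlaps heavily in frequency, which is why learned, time-varying masks are needed in practice.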
Can AudioShake stems be used in Dolby Atmos and spatial audio workflows?
Yes. AudioShake's instrument stems are used in Dolby Atmos and spatial audio workflows where individual stems need to be positioned independently in the mix. AudioShake has been used to prepare stems for Dolby Atmos mixes from finished masters where original session files were unavailable — a common challenge with catalog and archival recordings.
How does AudioShake help mixing and mastering engineers?
AudioShake provides clean, separated stems from fully mixed recordings — including legacy and mastered tracks without original session files — giving mixing and mastering engineers individual audio components to work with. Stems are used for spatial audio mixing, Dolby Atmos preparation, remastering, remixing, and content versioning.