Body:
I built a polyphonic synthesizer in Python with a tkinter GUI styled after the Moog Subsequent 37.
Features: 3 oscillators, Moog ladder filter (24dB/oct), dual ADSR envelopes, LFO, glide, noise generator, 4 multitimbral channels, 19 presets, rotary knob GUI, virtual keyboard with mouse + QWERTY input, and MIDI support.
No external GUI frameworks — just tkinter, numpy, and sounddevice.
It's great that it works, and it may well work 99% of the time. And it may have been a great learning experience/platform, so congrats for that.
But it's important for people to understand why this is generally the wrong toolset for this sort of software development, even when it can be so much fun.
Python and other interpreted languages (Lua excepted, with conditions), and languages like Swift that have GC, cannot ensure non-blocking behavior in the code that needs to run in realtime. You can paper over this with very large audio buffers (which make the synth feel sluggish) or with crossed fingers (which works a surprising amount of the time). But ultimately you need a language like C/C++/Rust etc. to ensure that your realtime DSP code is actually realtime.
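To make the buffer tradeoff concrete, here's a minimal sketch (my numbers, not from the project) of how much time the audio callback gets per block at a common 44.1 kHz sample rate. Bigger buffers give a GC pause more room to hide, but every extra sample of buffering is added latency between key press and sound:

```python
SAMPLE_RATE = 44_100  # Hz; assuming a typical default, not the project's actual setting

def block_latency_ms(blocksize: int, samplerate: int = SAMPLE_RATE) -> float:
    """Time one buffer represents (and the deadline the callback must beat), in ms."""
    return blocksize / samplerate * 1000

# Small buffers feel responsive but leave almost no slack for a GC pause;
# large buffers tolerate pauses but make the synth feel sluggish.
for blocksize in (128, 512, 4096):
    print(f"{blocksize:5d} samples -> {block_latency_ms(blocksize):6.1f} ms per block")
```

At 128 samples the callback must finish in under ~3 ms every single time; a single garbage-collection pause longer than that is an audible glitch, which is why the usual fix is a buffer in the thousands of samples and the corresponding ~90+ ms of latency.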
Despite Apple pushing Swift "for everything", even they still acknowledge that you should not write AudioUnits (or plugins in any other format) in Swift.
Meanwhile, have fun with this, which it looks like you already did!