Semi-random thoughts and tales of tinkering
Up to this point, we have built and tested everything in the iOS Simulator. That was fine for layout, animation timing, and verifying that our FFT code does not crash. But the simulator has a hard limitation that matters for us: it cannot access the microphone. For an audio app, you need real hardware. This section walks you through every step of getting the spectrum analyzer running on a physical iPhone.
There are two reasons, and both are non-negotiable for audio work.
First: the microphone. The iOS Simulator has no microphone input. When you install a tap on the audio engine's input node in the simulator, you get silence — zeroes in every buffer. Your spectrum bars will sit flat at the bottom and your VU meter will not move. You can verify that the code compiles and the UI lays out correctly, but you cannot verify the thing that matters most: that it actually responds to sound.
Second: performance. Even on Apple Silicon Macs, the simulator runs your app through a translation layer. The CPU architecture matches (ARM on both sides), but the simulator adds overhead for graphics rendering, thread scheduling, and system framework emulation. For a UI-only app, you would never notice. For an app that processes 4096 audio samples through an FFT dozens of times per second and redraws 48 animated bars in real time, the difference can be meaningful. If your app feels smooth on the simulator, it will feel smooth on device. But if it stutters on the simulator, do not assume it will stutter on hardware — test before you optimize.
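One way to reason about whether the device can keep up: each buffer gives the tap callback a fixed time budget to finish the FFT and hand results to the UI. A quick sketch of that arithmetic, using the 4096-sample / 48 kHz figures from this tutorial:

```swift
import Foundation

/// Time budget per audio buffer: if processing takes longer than this,
/// the tap callback falls behind and buffers pile up.
func bufferBudget(frames: Int, sampleRate: Double) -> TimeInterval {
    Double(frames) / sampleRate
}

let budget = bufferBudget(frames: 4096, sampleRate: 48_000)
// ~85 ms per buffer. A 4096-point FFT typically finishes far faster than
// that, so the budget is generous -- but UI redraws share the same cadence.
print(String(format: "%.1f ms per buffer", budget * 1000))
```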
In the .NET world, you typically run your app directly on the same machine you develop on. There is no separate "deploy to device" step for desktop apps. iOS development always has this extra step — your code runs on a different device with a different OS. Think of it like deploying to a remote server, except the server is in your pocket.
Getting your iPhone connected to Xcode is straightforward, but there are a few first-time steps.
Open Xcode's settings (Xcode → Settings, or ⌘,) and go to the Accounts tab. Click the + button at the bottom left and sign in with your Apple ID. Any Apple ID works — you do not need a paid developer account for this.

After the initial USB setup, you can deploy wirelessly. In Xcode, go to Window → Devices and Simulators, select your device, and check "Connect via network." Both Mac and iPhone must be on the same Wi-Fi network. Wireless deployment is slower, but it saves you from hunting for a cable every time you want to test.
Starting with iOS 16, Apple added a security gate called Developer Mode. It must be enabled on the device before Xcode can install apps on it. This is a one-time setup.
If you do not see the Developer Mode option, connect your iPhone to Xcode first — the option sometimes only appears after Xcode has communicated with the device at least once.
Developer Mode is a security measure. It prevents malicious software from being sideloaded onto a stolen device. For end users who never develop apps, it stays off and they never see it. For us, it is a one-time toggle. Once enabled, it stays on across iOS updates.
Apple requires every app running on a device to be cryptographically signed. This is how iOS verifies that an app comes from a known source and has not been tampered with. If you have used Authenticode signing on Windows, the concept is similar — but on iOS, it is mandatory for all apps, not just those you distribute.
Here is what you need to do in Xcode:

1. Select the project in the Project Navigator, then select the app target.
2. Open the Signing & Capabilities tab.
3. Check "Automatically manage signing."
4. In the Team dropdown, choose the Personal Team associated with your Apple ID.

Xcode handles the rest: it generates a signing certificate, creates a provisioning profile that ties your Apple ID to your device's unique identifier, and embeds it in the app bundle at build time.
With a free Apple ID, apps you deploy to your device expire after 7 days. After that, the app will not launch — you just re-deploy from Xcode. With a paid Apple Developer account ($99/year), apps last a full year. For learning and personal projects, the free tier is perfectly adequate. You only need the paid account if you want to publish to the App Store or use TestFlight.
We set up the NSMicrophoneUsageDescription key in Info.plist back in Section 3. Now, when you run the app on a real device and tap Start, here is what happens: the app calls engine.start(), which triggers the audio engine to access the input node (the microphone). On first access, iOS shows the permission dialog with your usage description.

If the user taps Allow, audio data flows immediately. The spectrum bars light up. If the user taps Don't Allow, the audio engine still runs, but every buffer contains silence. The app does not crash — it just has nothing to analyze.
Our tutorial code handles denial passively (the bars stay flat). A production app should detect the denial and show an explanation, perhaps with a button that opens the app's Settings page. We will discuss that in Section 9.
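As a preview of that production handling, here is a sketch using AVAudioSession's recordPermission and the system's openSettingsURLString. The function name and the view-controller plumbing are illustrative, not part of the tutorial's code:

```swift
import AVFoundation
import UIKit

/// Check microphone permission and, if denied, offer a shortcut
/// to this app's page in the Settings app.
func handleMicrophonePermission(from viewController: UIViewController) {
    switch AVAudioSession.sharedInstance().recordPermission {
    case .granted:
        break // nothing to do; audio will flow
    case .undetermined:
        // First run: this triggers the system permission dialog.
        AVAudioSession.sharedInstance().requestRecordPermission { _ in }
    case .denied:
        let alert = UIAlertController(
            title: "Microphone Access Needed",
            message: "The spectrum analyzer has nothing to show without the microphone. You can enable access in Settings.",
            preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "Open Settings", style: .default) { _ in
            if let url = URL(string: UIApplication.openSettingsURLString) {
                UIApplication.shared.open(url)
            }
        })
        alert.addAction(UIAlertAction(title: "Cancel", style: .cancel))
        viewController.present(alert, animated: true)
    @unknown default:
        break
    }
}
```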
During development, you will sometimes want to re-test the permission dialog. You cannot trigger it again once the user has responded. To reset: go to Settings → VUMeter → Microphone on the device and toggle the permission off, then back on. Or delete the app and reinstall it.
Running audio code on a real iPhone surfaces issues that the simulator hides. Here are the common gotchas and how to handle each one.
The simulator typically reports a sample rate of 44,100 Hz. A real iPhone often uses 48,000 Hz. This is a real bug we encountered: the SpectrumAnalyzer defaults to 44,100 Hz, and if you don't override it with the actual device rate, every frequency calculation comes out about 8% too low. That's nearly a semitone and a half: play an A at 440 Hz and the app reports roughly 404 Hz, which reads as a flat G#.
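To see the size of that error concretely, here is the arithmetic as a standalone sketch (pure math, no audio involved):

```swift
import Foundation

// If the analyzer assumes 44.1 kHz but the hardware runs at 48 kHz,
// every frequency it reports is scaled by 44100/48000.
let trueFrequency = 440.0 // concert A
let reported = trueFrequency * 44_100 / 48_000
let semitoneError = 12 * log2(reported / trueFrequency)

print(String(format: "%.2f Hz reported, %.2f semitones flat", reported, semitoneError))
// 404.25 Hz, about 1.47 semitones flat: between G and G#, nowhere near A.
```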
The fix is in AudioEngine.start(): we query the real rate from the hardware and
pass it to the analyzer at creation time:
let format = input.outputFormat(forBus: 0)
// format.sampleRate is 48000.0 on most iPhones, not 44100
analyzer = SpectrumAnalyzer(binCount: 48, sampleRate: format.sampleRate)
The lesson: never hardcode sample rates. Audio hardware varies across devices and can even
change at runtime (see Bluetooth, below). Always query format.sampleRate and
propagate it to any code that converts between bin indices and frequencies.
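To make that propagation concrete, here is a sketch of the bin-to-frequency conversion the analyzer needs the real rate for. The function name is illustrative, not the tutorial's actual API:

```swift
import Foundation

/// Convert an FFT bin index to its center frequency in Hz.
/// fftSize is the number of time-domain samples fed to the FFT (4096 here).
func frequency(forBin bin: Int, sampleRate: Double, fftSize: Int) -> Double {
    Double(bin) * sampleRate / Double(fftSize)
}

// The same bin maps to different frequencies at different rates --
// which is exactly why a hardcoded 44,100 breaks on a 48 kHz device.
let bin = 100
print(frequency(forBin: bin, sampleRate: 44_100, fftSize: 4096)) // ~1076.7 Hz
print(frequency(forBin: bin, sampleRate: 48_000, fftSize: 4096)) // ~1171.9 Hz
```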
If AirPods (or other Bluetooth headphones) are connected when you start the audio engine, two things can happen:

1. The input can route to the headset's microphone instead of the iPhone's built-in mic, so you are analyzing audio captured near your ear rather than near the phone.
2. The hardware sample rate can change: Bluetooth hands-free audio often runs at 16 kHz or lower, which invalidates any rate you queried earlier.
For our spectrum analyzer, the simplest fix during development is to disconnect Bluetooth
headphones before testing. For a production app, you would configure the AVAudioSession
category and mode to control routing behavior.
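A sketch of that session configuration: the category, mode, and option names are the real AVFoundation API, but whether to allow Bluetooth input at all is an app-level decision, so it is exposed here as a parameter:

```swift
import AVFoundation

/// Configure the audio session before starting the engine.
/// The .measurement mode disables system input processing (gain control,
/// voice-oriented filtering), which suits analysis better than voice chat.
func configureAudioSession(allowBluetoothInput: Bool) throws {
    let session = AVAudioSession.sharedInstance()
    var options: AVAudioSession.CategoryOptions = []
    if allowBluetoothInput {
        options.insert(.allowBluetooth) // permits hands-free input (lower sample rates)
    }
    try session.setCategory(.playAndRecord, mode: .measurement, options: options)
    try session.setActive(true)
}
```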
The physical mute switch on the side of the iPhone (or the Action button on iPhone 15 Pro and later) controls audio output only. It silences ringtones, notifications, and media playback. It does not affect microphone input. Your spectrum analyzer works identically whether the phone is in silent mode or not. This confuses some users — they expect "silent" to mean "no audio at all" — but for a recording or analysis app, it is the correct behavior.
When the app is running on a connected device, Xcode's debug console works exactly as it does
with the simulator. Any print() statements in your code appear in real time at the
bottom of the Xcode window.
For audio debugging, a useful technique is to temporarily print values from the tap callback:
input.installTap(onBus: 0, bufferSize: 4096, format: format) { [weak self] buffer, _ in
    guard let self else { return }
    guard let channelData = buffer.floatChannelData?[0] else { return }
    // Debug: print the first sample and buffer size
    print("Sample rate: \(format.sampleRate), frames: \(buffer.frameLength), first: \(channelData[0])")
    // ... rest of processing
}
This will flood the console (the tap fires dozens of times per second), but it confirms that audio data is actually flowing and shows you the real sample rate. Remove or comment out the print statements when you are done — printing from the audio thread adds overhead.
If the console output scrolls too fast to read, add a frame counter and print every 100th
callback: if frameCount % 100 == 0 { print(...) }. At 48 kHz with 4096-sample
buffers, the tap fires roughly 12 times per second, so every 100th call prints about once
every 8 seconds.
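That throttling can be factored into a tiny helper so the counter logic stays out of the tap body. A sketch (the ThrottledLog type is ours, not part of the tutorial's code):

```swift
import Foundation

/// Tracks tap invocations and decides when to log, so the audio-thread
/// print fires only every Nth callback instead of ~12 times per second.
final class ThrottledLog {
    private var count = 0

    func shouldLog(every n: Int) -> Bool {
        count += 1
        return count % n == 0
    }
}

// Inside the tap callback (sketch; `throttle` lives on the class owning the tap):
// if throttle.shouldLog(every: 100) {
//     print("rate: \(format.sampleRate), frames: \(buffer.frameLength)")
// }
```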
Your app works, but it has the default white icon on the home screen. Let's fix that. Apple provides a free symbol library called SF Symbols that contains over 5,000 vector icons. We can use one as a starting point for our app icon.
Good starting points from the library:

- waveform — a classic audio waveform shape
- speaker.wave.3.fill — a speaker with sound waves
- waveform.path.ecg — an ECG-style waveform
- chart.bar.fill — bar chart (resembles a spectrum)

Once you have exported your design as a 1024×1024 PNG, open Assets.xcassets in the Project Navigator. Click AppIcon. Drag the PNG into the single icon well.

Since Xcode 15, you only need the one 1024×1024 image. Xcode automatically generates every required size (180×180 for iPhone home screen, 120×120 for Spotlight, 60×60 for Settings, and so on). In older versions of Xcode, you had to provide each size individually — that tedium is gone.
iOS app icons must be square with no transparency (the system applies the rounded corners automatically). Do not bake in rounded corners yourself — they will not align with the system mask and will look wrong. Keep the design simple. At 60×60 pixels on the home screen, fine details disappear.
Build and run again (⌘R). Your app now has a proper icon on the device's
home screen. It is a small thing, but it makes the project feel real — this is not a
tutorial exercise anymore, it is an app on your phone.