
Audio latency < frame rate

So here’s what I’m working on at the moment, from an article for a mystery client I hope to reveal soon:

[screenshot: really-low-latency]

Yep, if latency is buffer duration + inherent hardware latency, then this app needs just 29 ms to get the last sample in its buffer out of the headphones or speaker. And it’s really not that hard once you reacquaint your mind with old-fashioned C.
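To make that formula concrete, here’s a rough sketch of how you’d ask the AudioSession C API for a small I/O buffer and then read back the two numbers that make up the total. This isn’t the article’s code; the property names are just the ones that API exposes for buffer duration and hardware output latency, the 5 ms request is an arbitrary example, and error checking is omitted.

    // Sketch: request a short I/O buffer, then report
    // buffer duration + hardware output latency in ms.
    #include <AudioToolbox/AudioToolbox.h>
    #include <stdio.h>

    static void reportOutputLatency(void) {
        AudioSessionInitialize(NULL, NULL, NULL, NULL);

        UInt32 category = kAudioSessionCategory_MediaPlayback;
        AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                                sizeof(category), &category);

        // Ask for a ~5 ms buffer; the hardware may round this up.
        Float32 preferredDuration = 0.005; // seconds
        AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                                sizeof(preferredDuration), &preferredDuration);

        AudioSessionSetActive(true);

        // Read back what was actually granted, plus the hardware's own latency.
        Float32 bufferDuration = 0.0, outputLatency = 0.0;
        UInt32 size = sizeof(Float32);
        AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration,
                                &size, &bufferDuration);
        size = sizeof(Float32);
        AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareOutputLatency,
                                &size, &outputLatency);

        // latency = buffer duration + inherent hardware latency
        printf("worst-case output latency: %0.1f ms\n",
               (bufferDuration + outputLatency) * 1000.0);
    }

The hardware is free to round your preferred duration up, which is why you read the current value back rather than trusting what you asked for.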

A world of difference from the awful time I had on Swing Hacks trying to use, explain, and justify javax.sound.sampled, which has a push metaphor, requiring you to shove samples into an opaque buffer whose size is unknown and unknowable. To say nothing of Java Sound’s no-op’ed DataLine.getLevel(). Analogy: Core Audio is The Who. Java Sound is The Archies.
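For contrast, here’s what the pull model looks like on the Core Audio side: the system calls you when the hardware buffer needs refilling and tells you exactly how many frames it wants. This is only a sketch of a render callback that plays a fixed-frequency tone, assuming a RemoteIO unit already configured for mono 16-bit integer PCM; the SineState struct and its names are mine for illustration, not from the article.

    #include <AudioUnit/AudioUnit.h>
    #include <math.h>

    typedef struct {
        Float64 phase;       // current phase of the sine wave, in radians
        Float64 sampleRate;  // e.g. 44100.0, must match the unit's stream format
        Float64 frequency;   // fixed tone frequency in Hz
    } SineState;

    // Core Audio *pulls* inNumberFrames samples from us each time the
    // hardware buffer needs refilling, instead of us pushing into an
    // opaque buffer. Assumes mono 16-bit signed integer LPCM.
    static OSStatus renderSine(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData) {
        SineState *state = (SineState *)inRefCon;
        SInt16 *samples = (SInt16 *)ioData->mBuffers[0].mData;
        Float64 phaseIncrement = 2.0 * M_PI * state->frequency / state->sampleRate;

        for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
            samples[frame] = (SInt16)(sin(state->phase) * 32767.0);
            state->phase += phaseIncrement;
            if (state->phase > 2.0 * M_PI) state->phase -= 2.0 * M_PI;
        }
        return noErr;
    }

You hand a callback like this to the output unit via AudioUnitSetProperty() with kAudioUnitProperty_SetRenderCallback, wrapping the function pointer and your state pointer in an AURenderCallbackStruct.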

Next up, working through the errata on iPhone SDK Development, now that we have a first full draft in tech review.

Comments (3)

  1. hello,
    I need to use Core Audio to produce a signal from the speaker at a fixed frequency. I’ve tried to include the CoreAudio framework, but it contains only CoreAudioTypes.h, not a CoreAudio.h header. Can you point me in the right direction for including the Core Audio framework?

  2. Have you added AudioToolbox.framework to the project?

  3. […] Units – Low-level C-based API. Very low latency, as little as 29 milliseconds. Mixing, effects, near-direct access to input and output […]
