System Science - Part 2: Drivers & Latency

There’s More To An Audio Interface Than The I/O.

Modern computers are fantastic recording devices. They can work with more audio and MIDI tracks than we’re ever likely to need. They allow us to manipulate audio in ways the engineers of 30 years ago could only dream of. They let us apply EQ, compression and effects to more channels than would be possible in any analogue studio. Yet it’s important to remember that computers are not built specifically for recording. There are challenges that have to be overcome in order for all this to be possible, and issues arise that were never a problem when we recorded to tape. The biggest of these issues is latency: the delay between a sound being captured and its being heard through our headphones or monitors.

Let’s consider what happens when we record sound to a computer. A microphone measures pressure changes in the air and outputs an electrical signal with corresponding voltage changes. This is called an ‘analogue’ signal, because the variations in electrical potential are analogous to the pressure fluctuations that make up the sound. A device called an analogue-to-digital converter then measures or ‘samples’ this fluctuating voltage at regular intervals - 44,100 times per second, in the case of CD-quality audio - and reports these measurements as a series of numbers. This sequence of numbers is packaged in the appropriate format and sent over an electrical link to the computer. Recording software running on the computer then writes this data to memory and to disk, processes it, and eventually spits it out again so that it can be turned back into an analogue signal by, you guessed it, a digital-to-analogue converter. Only then, assuming we’re monitoring what we’re recording, do we get to hear it.

This is quite a complex sequence of events, and it suffers from a built-in tension between speed and reliability. In a perfect world, each sample that emerges from the analogue-to-digital converter would be sent to the computer, stored and passed back to the digital-to-analogue converter immediately. In practice, however, this makes the recording system too sensitive to interruptions: even the slightest delay in sending just one out of the millions of samples in an audio recording would cause a dropout. To make the system more robust, we don’t record and play back each sample as soon as it arrives; instead, the computer waits until a few tens or hundreds of samples have been received before starting to process them, and the same happens on the way out. This process is called buffering, and it makes the system more resilient in the face of unexpected interruptions.

The buffer acts as a safety net: even if something momentarily breaks up the stream of data coming into the buffer, it’s still capable of outputting the continuous, uninterrupted sequence of samples we need. The larger we make these buffers, the better the system’s ability to deal with the unexpected, and the less of the computer’s processing time is spent making sure the flow of samples is uninterrupted. The downside is that the larger we make these buffers, the longer the whole process takes, and once we get beyond a certain point, the recorded sound emerging from the computer starts to lag audibly behind the source sound we’re recording. In some situations this isn’t a problem, but in many cases, it definitely is! Where musicians are hearing their own and each other’s performances through the recording system, it’s vital that the delay never becomes long enough to be audible.

A delay between sound being captured and its being heard again at the other end of the recording system is called latency, and it’s one of the most important issues in computer recording. When latency creeps above a few milliseconds, it quickly becomes audible and can badly affect performers. The time lag between playing a note and hearing the resulting sound through headphones is highly off-putting to musicians if it’s long enough to become audible, so it needs to be kept as low as possible without using up too many of the computer’s processing cycles. There are several different factors that contribute to latency, but the buffer size is usually the most significant, and it’s often the only one the user has any control over. Buffer sizes are usually configured as a number of samples, although a few interfaces instead offer time-based settings in milliseconds. The choices on offer are normally powers of two: a typical audio interface might offer settings of 32, 64, 128, 256, 512 or 1,024 samples.
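The sampling stage described above can be sketched in a few lines of Python. This is a conceptual illustration only - a real analogue-to-digital converter is hardware, not software - and the function name `sample_sine` and the 16-bit integer scaling are assumptions of the sketch, chosen to match CD audio’s 44,100 Hz rate and 16-bit word length.

```python
import math

SAMPLE_RATE = 44_100  # CD-quality audio: 44,100 measurements per second

def sample_sine(freq_hz: float, duration_s: float) -> list[int]:
    """Measure a sine wave at regular intervals, as an analogue-to-digital
    converter would, and report each measurement as a 16-bit integer."""
    n_samples = int(SAMPLE_RATE * duration_s)
    return [round(32767 * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE))
            for t in range(n_samples)]

samples = sample_sine(440.0, 0.01)  # 10 ms of a 440 Hz tone
print(len(samples))                 # 441 samples
```

Even this tiny fragment shows why the data rate matters: one second of mono CD-quality audio is 44,100 numbers, and a stereo recording doubles that.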
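The buffering idea described above - collect a few tens or hundreds of samples before processing, so a brief interruption on the input side doesn’t break the output stream - can be modelled with a toy first-in, first-out queue. This is a deliberately simplified sketch: the class name `SampleBuffer` is hypothetical, and real audio drivers use ring buffers managed by the operating system rather than Python objects.

```python
from collections import deque

class SampleBuffer:
    """Toy FIFO buffer sitting between capture and playback."""

    def __init__(self, size: int):
        self.size = size      # capacity in samples
        self._queue = deque()

    def push(self, sample: float) -> bool:
        """Called as each sample arrives from the converter."""
        if len(self._queue) >= self.size:
            return False      # buffer full: sample dropped
        self._queue.append(sample)
        return True

    def pop(self) -> float:
        """Called when playback needs the next sample."""
        if not self._queue:
            # The incoming stream was interrupted for longer than the
            # buffer could cover: this is an audible dropout.
            raise RuntimeError("buffer underrun")
        return self._queue.popleft()
```

The safety-net behaviour falls out directly: a buffer holding 256 samples can survive an input stall of up to 256 sample periods before `pop` fails, while a 32-sample buffer is eight times easier to starve.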
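The relationship between buffer size and delay is simple arithmetic: a buffer of N samples takes N divided by the sample rate seconds to fill or drain. The helper below (`buffer_latency_ms` is an illustrative name, not a real driver API) converts the typical power-of-two settings into milliseconds at the CD rate; note that real round-trip latency is higher, since there is at least one buffer in each direction plus converter and driver overheads.

```python
SAMPLE_RATE = 44_100  # CD-quality sample rate, in samples per second

def buffer_latency_ms(buffer_size: int, sample_rate: int = SAMPLE_RATE) -> float:
    """Time taken to fill (or drain) one buffer, in milliseconds."""
    return buffer_size / sample_rate * 1000

# The power-of-two buffer sizes a typical interface offers
for size in (32, 64, 128, 256, 512, 1024):
    print(f"{size:>4} samples -> {buffer_latency_ms(size):5.2f} ms")
```

At 44.1kHz, a 256-sample buffer adds about 5.8 ms in each direction, which is why settings much above that quickly become audible to performers monitoring through the computer.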