It's winter and I'm writing a boring little program to serve as a drum machine in a pinch. I'd like the option to transpose any sample up or down by a given number of semitones, and I'm doing that by changing the rate at which I read through the PCM file's sample array.
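Concretely, I'm assuming equal temperament, so a shift of t semitones multiplies the read rate by 2^(t/12). Something like this (the function name is just for illustration, not lifted from my program):

Code:
#include <math.h>

/* Equal-tempered rate ratio: +12 semitones doubles the read rate,
   -12 halves it. (Illustrative sketch, not my actual code.) */
double step_for_semitones(double semitones) {
    return pow(2.0, semitones / 12.0);
}
/* e.g. step_for_semitones(7.0) ~= 1.4983, so the read position
   advances ~1.4983 source samples per output sample. */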
Of course, any shift other than a whole number of octaves upward produces a non-integer read position, which doesn't correspond to a single index into the PCM array. The quick and easy solution is to round the position to the nearest integer, which indeed produces a clean signal that certainly sounds higher or lower, but I'm told this is inaccurate.
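The rounding version looks roughly like this (a sketch with made-up names, not my exact code):

Code:
#include <stddef.h>

/* Nearest-neighbour resampling sketch: the read position advances by
   `rate` source samples per output sample and is rounded to an index. */
void resample_nearest(const unsigned short *in, size_t in_len,
                      unsigned short *out, size_t out_len, double rate)
{
    double pos = 0.0;
    for (size_t i = 0; i < out_len; i++) {
        size_t n = (size_t)(pos + 0.5);   /* round to nearest index */
        if (n >= in_len)
            break;                        /* ran off the end of the sample */
        out[i] = in[n];
        pos += rate;
    }
}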
When I try linear interpolation, supposedly more pitch-accurate than simple rounding, the resulting waveform is recognizable but horribly noisy, with all sorts of distortion that the rounding method didn't have. This is the code I'm using:
(The WAV file is single-channel, 16-bit, 48000 Hz, stored internally as an unsigned short array. Each interpolated sample is split into two 8-bit unsigned chars before being written to the DSP; a sketch of that step follows the code below.)
Code:
#include <math.h>   /* for round() */

/* Blend adjacent samples s[n] and s[n + 1]; k in [0, 1) is the
   fractional part of the read position. */
unsigned short linter(int n, double k, unsigned short *s) {
    return (unsigned short)round(((1 - k) * s[n]) + (k * s[n + 1]));
}

It's an implementation of the linear interpolation formula described here:
http://www.electronics.dit.ie/staff/tsc ... etable.htm
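For completeness, the read loop around linter and the byte split look roughly like this (again a sketch with invented names; dsp_fd is assumed to be an already-open OSS DSP file descriptor):

Code:
#include <stddef.h>
#include <unistd.h>

/* Driver sketch calling the linter() above: floor(pos) picks the left
   sample, the fractional part becomes k, and each 16-bit result is
   written out low byte first. */
void play_shifted(int dsp_fd, unsigned short *s, size_t len, double rate)
{
    double pos = 0.0;
    while ((size_t)pos + 1 < len) {       /* linter reads s[n] and s[n+1] */
        int n = (int)pos;                 /* integer part: left index */
        double k = pos - n;               /* fractional part in [0, 1) */
        unsigned short samp = linter(n, k, s);
        unsigned char bytes[2] = {
            (unsigned char)(samp & 0xFF), /* low byte first (little-endian) */
            (unsigned char)(samp >> 8)    /* high byte */
        };
        write(dsp_fd, bytes, 2);
        pos += rate;
    }
}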
Should audible noisiness be expected with a simple linear interpolation, or have I most likely erred somewhere along the way? Is the rounding method generally considered "good enough" for one-shot samples? How inaccurate is it really? How did early samplers handle this?


