It's winter and I'm writing a boring little program to serve as a drum machine in a pinch. I'd like the option to offset any sample by a given number of semitones. I'm doing that by changing the rate at which I read through the PCM file's array.

Of course, anything other than a full-octave jump upwards gives a non-integer step, so the read position usually lands between two indices of the PCM array. The quick and easy solution is to round the position to the nearest integer, which indeed produces a clean signal that certainly sounds higher/lower, but I'm told this is inaccurate.
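The rounding method I mean is plain nearest-neighbour lookup; a sketch of it (naming is mine), assuming the position is kept below the last index:

```c
/* Nearest-neighbour resampling: round the fractional read position
   to the closest integer index and return that sample unchanged.
   pos must be in [0, len - 1). */
unsigned short nearest(double pos, const unsigned short *s) {
    return s[(long)(pos + 0.5)];   /* round half up to an index */
}
```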

When I try linear interpolation, supposedly more pitch-accurate than simple rounding, the resulting waveform is recognizable but horribly noisy: all sorts of audio distortions that the rounding method didn't have. This is the code I'm using:

(The WAV file is single-channel, 16-bit, 48000 Hz, stored internally as an unsigned short array. Each interpolated sample is split into two 8-bit unsigned chars before being written to the DSP.)

```c
#include <math.h>   /* round() */

unsigned short linter(int n, double k, unsigned short *s) {
    return (unsigned short)round(((1 - k) * s[n]) + (k * s[n + 1]));
}
```
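The byte split I mentioned is the plain low-byte-first split; a sketch, assuming the DSP expects little-endian 16-bit data (the actual write call is elided):

```c
/* Split a 16-bit sample into two bytes, low byte first, as a
   little-endian 16-bit DSP format would expect. */
void split_sample(unsigned short v, unsigned char out[2]) {
    out[0] = v & 0xFF;          /* low byte  */
    out[1] = (v >> 8) & 0xFF;   /* high byte */
}
```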

where n is the integer part of the read position, k is its fractional remainder, and s is the sample array. If the next desired sample fell at s[27.75], then n would be 27 and k would be 0.75, and the return value would be a weighted average of s[27] and s[28] in favor of s[28].

It's an implementation of the linear interpolation formula described here:

http://www.electronics.dit.ie/staff/tsc ... etable.htm

__

Should audible noisiness be expected with a simple linear interpolation, or have I most likely erred somewhere along the way? Is the rounding method generally considered "good enough" for one-shot samples? How inaccurate is it really? How did early samplers handle this?