Part 2: Render songs as audio
Your goal: create code to render your song model to an AudioBuffer so you can hear it.
This is where the abstract description of music in the Song class becomes actual audio.
New method in Song
/**
 * Renders all the notes in the piece to a normalized audio buffer.
 */
public AudioBuffer renderAudio() {
    ...
}
Sketch of the method (a rough code sketch follows this list):
- Create a new AudioBuffer. Compute the size of the buffer using the song’s duration. Pay attention to units!
  - Hint about units: There is a method in Song that gives you the duration in seconds, but AudioBuffer needs you to specify a duration in samples. There is already a method that converts seconds to samples.
  - More of a hint: The method that converts seconds to samples is in the Utils class.
  - Just show me how to use it: Utils.convertSecondsToSamples(getDuration())
- For each note in the song:
  - Create a signal for the note by converting its pitch to a wavelength. (Use the class diagram you drew to help you.)
    - Hint about unit conversion: There is another method in Utils that will help you.
    - Hint about creating the signal: Look at your class diagram. How can you get from Note to Signal?
    - More of a hint about creating the signal: A Note has a Waveform. A Waveform can create a Signal if you specify the wavelength.
  - Tell your audio buffer to mix the signal into the buffer at the correct time and for the correct duration.
- Call normalize() on your audio buffer. This prevents clipping in the mixed audio.
  - Aside: What is “clipping?” Digital audio playback always has a limited range of allowed amplitudes; in AudioBuffer, this range is -1…1. Audio outside this range gets “clipped”: the top of the wave gets flattened out so that the wave stays within the allowed bounds. The resulting audio is loud and distorted. It is OK if intermediate calculations go outside the allowed range, but to avoid clipping, software must ensure that the final result stays within range. That is what calling normalize() will do for you.
- Return the audio buffer.
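Here is a minimal sketch of how those steps might fit together, under several assumptions: that Note exposes getWaveform(), getPitch(), getStartTime(), and getDuration(), that the Utils pitch-to-wavelength helper is called convertPitchToWavelength, that Waveform has a createSignal(wavelength) method, and that AudioBuffer has a mix(...) method. All of these names are hypothetical guesses based on the hints above, not confirmed API; check them against your own class diagram.

/**
 * A sketch of renderAudio(), not a confirmed solution. The names
 * convertPitchToWavelength, createSignal, mix, and the Note accessors
 * are hypothetical; substitute the ones from your class diagram.
 */
public AudioBuffer renderAudio() {
    // The buffer size must be in samples, but getDuration() returns seconds.
    AudioBuffer buffer = new AudioBuffer(Utils.convertSecondsToSamples(getDuration()));

    for (Note note : notes) {  // assumes the Song stores its notes in a collection named notes
        // Convert the note's pitch to a wavelength, then ask its Waveform for a Signal.
        double wavelength = Utils.convertPitchToWavelength(note.getPitch());
        Signal signal = note.getWaveform().createSignal(wavelength);

        // Mix the signal in at the note's start time and for its duration,
        // with both times converted from seconds to samples.
        buffer.mix(
            signal,
            Utils.convertSecondsToSamples(note.getStartTime()),
            Utils.convertSecondsToSamples(note.getDuration()));
    }

    buffer.normalize();  // keep the final mix within the -1…1 range
    return buffer;
}

Whatever the exact names turn out to be in your project, the shape is the point: size the buffer, loop over the notes, mix each one in, normalize, and return.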
Add this test to SongTest:
@Test
void renderAudio() {
    // The length of one sample in seconds, so the note start times and
    // durations below correspond to exact sample counts.
    double sampleLen = 1.0 / AudioBuffer.SAMPLE_RATE;
    song.addNote(new Note(waveform0, 81, 2 * sampleLen, 4 * sampleLen));
    song.addNote(new Note(waveform1, 33, 4 * sampleLen, 5 * sampleLen));
    AudioBuffer audio = song.renderAudio();
    assertArrayEquals(
        new float[] { 0, 0, 0.25f, 0.25f, -0.5f, 1, -0.75f, 0.75f, -0.75f },
        audio.getSamples(),
        0.0001f);
}
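Notice that the largest expected magnitude in that array is exactly 1: normalize() rescales the mixed audio so its peak just reaches the allowed bounds. Conceptually it behaves something like the sketch below, which illustrates the idea rather than reproducing AudioBuffer's actual implementation.

// Illustration only, not AudioBuffer’s actual code: peak normalization
// finds the loudest sample and rescales everything so that sample lands
// exactly on the -1…1 boundary.
static void normalize(float[] samples) {
    float peak = 0;
    for (float s : samples) {
        peak = Math.max(peak, Math.abs(s));
    }
    if (peak > 0) {  // avoid dividing by zero on a silent buffer
        for (int i = 0; i < samples.length; i++) {
            samples[i] /= peak;
        }
    }
}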
Does that work? Try uncommenting the commented-out code in IntegrationTest, which checks that all of the pieces work together.
Let’s hear some music!
Did those tests pass? Run the main method in AudioSynth! You should hear a nice E♭ major triad.
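For reference, the simple three-note song might look roughly like the sketch below, assuming pitches are MIDI note numbers (consistent with the 81 and 33 used in the test) and using the same Note constructor order as the test. SineWave is a hypothetical stand-in for whatever Waveform implementation the project provides; the actual code in AudioSynth may differ.

// Hypothetical sketch, not the actual AudioSynth code: an E♭ major
// triad is MIDI pitches 63 (E♭4), 67 (G4), and 70 (B♭4). All three
// notes start at time 0 and sound for two seconds.
Song song = new Song();
Waveform wave = new SineWave();  // stand-in for a real Waveform implementation
song.addNote(new Note(wave, 63, 0, 2));
song.addNote(new Note(wave, 67, 0, 2));
song.addNote(new Note(wave, 70, 0, 2));
AudioBuffer audio = song.renderAudio();  // play this buffer however AudioSynth does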
If that works, then comment out the simple 3-note song, and uncomment the code below that loads a song from a data file. You can use both kondo.csv and bach.csv for testing.
Enjoy the tunes! Then, when you’re ready, proceed to the exciting next step.
Next: Visualization