Homework #2: Block-Rockin' Synths
Due Date
- milestone: 2021.4.21 (in-class) Wednesday
- final deliverables due: 2021.4.26, 11:59pm, Monday
- in-class listening: 2021.4.28, Wednesday
Part 1: Crafting a Sound
- 1a. Create a pitched sound using an oscillator, a filter, and an envelope
- Oscillator: choose among TriOsc, SawOsc, or SqrOsc. (As discussed in class, SinOsc is not amenable to filtering because sine waves have no overtones; the filter cannot change the "timbre" of a SinOsc.)
- Filter: hook up your oscillator to an LPF (low pass filter) ugen and experiment with setting the filter's cutoff frequency using the .freq parameter. (Again, an LPF will have a more noticeable effect on signals rich in frequencies, such as a SqrOsc, than on, say, a sine wave.)
- Next, use an ADSR to envelope the signal coming out of the LPF (see adsr.ck, and the sketch below).
(1a-voice.ck) Turn in ChucK code that plays the shorter sound followed by the longer sound.
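Here is a minimal sketch of one possible 1a signal chain that plays a shorter sound followed by a longer one, assuming a SawOsc, an LPF, and an ADSR; your oscillator choice, cutoff, and envelope settings may well differ:

// one possible 1a signal chain: oscillator => low pass filter => envelope => dac
SawOsc osc => LPF filt => ADSR e => dac;

// filter cutoff frequency and resonance
800 => filt.freq;
2 => filt.Q;

// envelope: attack, decay, sustain level, release
e.set( 20::ms, 100::ms, .6, 200::ms );

// a shorter sound
220 => osc.freq;
e.keyOn();
300::ms => now;
e.keyOff();
e.releaseTime() => now;

// a longer sound
110 => osc.freq;
e.keyOn();
2::second => now;
e.keyOff();
e.releaseTime() => now;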
- 1b. Create a “playNote()” function to encapsulate the function of playing a note
- Adapt the sample code below into a ChucK function that takes 3 or more input arguments controlling the sound created in 1a; parameters should include the oscillator frequency, the amplitude (gain, related to loudness), and the duration of the note. Feel free to further modify this function to your liking (e.g., do you also want to control the filter cutoff frequency with each note?).
- Play 4 different kinds of sounds by calling this function with different inputs (see the example calls after the code below).
// play a note (assumes "osc" and "e" are globals)
fun void playNote( float pitch, float amp, dur T )
{
    // set freq (osc is your oscillator)
    pitch => Std.mtof => osc.freq;
    // set amplitude
    amp => osc.gain;
    // open env (e is your envelope)
    e.keyOn();
    // A through end of S
    T - e.releaseTime() => now;
    // close env
    e.keyOff();
    // release
    e.releaseTime() => now;
}
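For example (a sketch assuming the playNote() definition above, with pitch given as a MIDI note number), four different calls might look like:

playNote( 60, .5, 500::ms );    // middle C, medium gain, half a second
playNote( 72, .2, 1::second );  // an octave up, quieter, longer
playNote( 48, .8, 125::ms );    // an octave down, louder, short
playNote( 67, .4, 2::second );  // a fifth above middle C, long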
(1b-play.ck) Turn in ChucK code including the definition of your function and a section that repeatedly calls your function.
- 1c. Make it polyphonic
- Convert your single oscillator into an array of 4 oscillators
- Consider using a for-loop to connect the oscillators to the rest of the signal path (including filters, envelopes, and dac), as in the sketch below
- In the same oscillator initialization loop, set each oscillator to a different frequency of a chord of your choosing
(1c-chord.ck) Turn in ChucK code that uses all the above to play a single chord of your choosing.
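A minimal sketch of the 1c setup, assuming 4 oscillators sharing a single filter and envelope (you could also give each oscillator its own filter and envelope), with a C major seventh chord as the example:

// shared filter and envelope
LPF filt => ADSR e => dac;
800 => filt.freq;
e.set( 20::ms, 100::ms, .6, 300::ms );

// an array of 4 oscillators
SawOsc osc[4];
// chord tones as MIDI note numbers (C major seventh)
[60, 64, 67, 71] @=> int notes[];

// connect and tune each oscillator in one initialization loop
for( int i; i < osc.size(); i++ )
{
    osc[i] => filt;
    // keep the summed gain reasonable
    .2 => osc[i].gain;
    Std.mtof( notes[i] ) => osc[i].freq;
}

// sound the chord
e.keyOn();
1::second => now;
e.keyOff();
e.releaseTime() => now;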
By the way, here is some starter code for experimenting with chords:
// chord root in MIDI note number; 60 is Middle C
60 => int root;
// array of intervals relative to the root; this is a major seventh chord
[0, 4, 7, 11] @=> int chord[];
// print out the MIDI note numbers and frequencies for the chord
for( int i; i < chord.size(); i++ )
{
    // print MIDI note, frequency
    <<< root + chord[i], Std.mtof(root + chord[i]) >>>;
}
- 1d. Design a new function "playChord()" -- like 1b but to play an entire chord instead of single note.
- How would your playChord() look? What parameters should it accept? (e.g., root of the chord and intervals?)
- However you design the function's interface, it should set the respective frequencies on your oscillators and sound the chord (a sketch follows below).
(1d-chords.ck) Use the above function to play a sequence of four different chords of your choosing. Additionally, vary at least one other musical element from chord to chord (e.g., relative loudness, duration).
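A minimal sketch of one possible playChord() interface, assuming the osc[] array and envelope e from 1c are file-scope globals; your parameter list may differ:

// play a chord given a root, intervals, amplitude, and duration
fun void playChord( int root, int intervals[], float amp, dur T )
{
    // one oscillator per chord tone
    for( int i; i < osc.size(); i++ )
    {
        Std.mtof( root + intervals[i] ) => osc[i].freq;
        amp => osc[i].gain;
    }
    // sound the chord through the shared envelope
    e.keyOn();
    T - e.releaseTime() => now;
    e.keyOff();
    e.releaseTime() => now;
}

// example: four chords, varying loudness and duration
[0, 4, 7, 11] @=> int maj7[];
[0, 3, 7, 10] @=> int min7[];
playChord( 60, maj7, .2, 1::second );
playChord( 57, min7, .3, 750::ms );
playChord( 65, maj7, .15, 1.5::second );
playChord( 62, min7, .25, 500::ms );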
- 1e. Make 2 or 3 changes to further fine-tune your instrument to your liking. Possible modifications include:
- Change the oscillator type to something you haven’t used before
- Modulate the filter cutoff independently (by using spork ~ to run a concurrent function), perhaps sweeping it using Math.sin() (see the sketch below)
- Re-tune the chord(s) using floating point (rather than integer) MIDI note numbers
- Add reverb (NRev, PCRev, JRev), a delay w/ feedback, or another effect
- Or something else!
(1e-more.ck) A more-to-your-liking version of 1d-chords.ck incorporating the fine-tunings you've made (use your original chord sequence or feel free to change it up)
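As one example of the cutoff-modulation idea, here is a sketch of a concurrent sweep, assuming a file-scope LPF named filt as in the earlier sketches:

// slowly sweep the filter cutoff on its own shred
fun void sweepCutoff()
{
    while( true )
    {
        // oscillate the cutoff between roughly 300 and 1300 Hz, once every 4 seconds
        800 + 500 * Math.sin( now / second * 2 * pi * .25 ) => filt.freq;
        10::ms => now;
    }
}
// run it alongside the rest of the program
spork ~ sweepCutoff();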
- 1f. (1f-chord-stmt.ck) Craft a mini musical statement (30-45 seconds) by calling your control function multiple times across time, with different input parameters. Feel free to experiment; how might you make it sound more "musical" to your ears?
Part 2: Sporks, Shreds, and a Sea of Sound
In the previous part, we constructed and then controlled sounds with several oscillators in the main thread of ChucK. Alternatively, we can write functions that create new UGens "on the fly" and play them dynamically when sporked; these functions can be called repeatedly, layering the sounds to create polyphony (a mixture of several simultaneous voices).
- 2a. Write a function sound() that makes a sound, encapsulating all the UGens and variables it would need
This function can use any combination of oscillators, filters, and envelopes; you'll need to develop a clear idea of what UGens you need to create "locally" inside the function for each sound -- and what UGens are "globally" shared. Call this function (using spork) from your main while loop. Note: this could be as simple as wrapping the code you wrote in part 1a in its own function.
Feel free to start from the code below and modify it:
// globally shared ugens
NRev reverb => dac;
.1 => reverb.mix;

// function
fun void sound()
{
    // ugens "local" to the function
    TriOsc s;
    // connect to "global" ugens
    s => reverb;
    // randomize frequency
    Math.random2f(30,1000) => s.freq;
    // randomize duration
    Math.random2f(50,150)::ms => now;
}

while( true )
{
    // spork a new concurrent shred
    spork ~ sound();
    // advance time
    300::ms => now;
}
- 2b. Parameterize the sound() function so we can control it! Include:
- oscillator pitch
- oscillator amplitude / note velocity
- filter cutoff
- envelope parameters
for example (your parameters can vary):
fun void sound( float pitch, float vel, float cutoff,
                dur attack, dur decay, float sustain, dur release )
{
    // set the parameters
    // make sound happen
    // FYI: no infinite loops in this function; we will be calling it repeatedly
}
Call this new function (using spork) from your main while loop; one possible way to fill it in is sketched below.
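For reference, a hedged sketch of one way the body might look, assuming the globally shared reverb from 2a; your parameter handling and envelope timing may differ:

fun void sound( float pitch, float vel, float cutoff,
                dur attack, dur decay, float sustain, dur release )
{
    // ugens local to this one note
    TriOsc s => LPF f => ADSR e => reverb;

    // set the parameters
    Std.mtof( pitch ) => s.freq;
    vel => s.gain;
    cutoff => f.freq;
    e.set( attack, decay, sustain, release );

    // make sound happen: attack + decay + some sustain time, then release
    e.keyOn();
    attack + decay + 100::ms => now;
    e.keyOff();
    release => now;
}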
- 2c. Now, spork the function several times (back to back without passing time) from the main while loop, something like:
spork ~ sound( 60, .5, 500, 50::ms, 50::ms, .5, 100::ms );
spork ~ sound( 64, .5, 500, 50::ms, 50::ms, .5, 100::ms );
spork ~ sound( 67, .5, 500, 50::ms, 50::ms, .5, 100::ms );
Experiment with different input parameters, considering the sound as a whole; try to craft a few different sounds.
- 2d. From your main loop, create a texture by sporking a series of sound() shreds across time that partially overlap. What kinds of sounds work well when layered? Can you control the density by varying the time between spork ~ sound() calls, the parameters to each sound() call, or both? You might even keep a global float variable named "density" that controls the density of sound at any given moment, and modulate "density" from yet another control shred. Can you controllably go from a sparse texture to a super dense "sea of sound"? (A sketch of one density-driven approach follows below.)
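A minimal sketch of one density-driven texture, assuming the parameterized sound() from 2b; the variable name "density", the ramp shape, and the random ranges are all just illustrative choices:

// how many notes per second, roughly
1.0 => float density;

// a control shred that ramps from sparse to dense over ~20 seconds, then holds
fun void densityControl()
{
    while( density < 20 )
    {
        density + 1 => density;
        1::second => now;
    }
}
spork ~ densityControl();

// main texture loop: the wait between notes shrinks as density grows
while( true )
{
    spork ~ sound( Math.random2f(48,84), .3, Math.random2f(300,2000),
                   20::ms, 100::ms, .5, 200::ms );
    (1.0/density)::second => now;
}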
- 2e. Play with your functions and parameters to create one sparse texture (sparse.wav) and one dense texture (dense.wav), each lasting 10-15 seconds and then going to silence in a musically graceful way (e.g., letting each note currently sounding finish). Record a wav file of each and comment on the different strategies you used to create them.
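If you want to capture the wav files from within ChucK itself (rather than recording in the miniAudicle or with an external tool), a minimal sketch using WvOut looks like this; the file name and durations are just placeholders:

// record everything going to the dac into a wav file
dac => WvOut w => blackhole;
"sparse.wav" => w.wavFilename;

// ...spork your sparse texture here for 10-15 seconds...
15::second => now;

// give sounding notes time to finish, then close the file
2::second => now;
w.closeFile();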
Part 3: Make a Statement
Create a musical statement (60-90 seconds) that calls your functions from Parts 1 and 2. Include at least one long and one short sound from Part 1, as well as a moment of sparsity and a moment of density as explored in Part 2. You can think of this as a sequencer or generative music tool.
You may find the following creative prompts helpful:
- How can you transition between sparsity and density? Would you like it to be abrupt (perhaps to create contrast) or gradual, smooth, or imperceptible?
- How do textures created with short sounds differ from those with long sounds? How about layering textures of differing density?
- How can you vary filter parameters over time to give your sounds/textures a different feel?
- Think of rhythm and timing - to what degree are the time intervals between your sounds regular and predictable? Does this vary over time or for different kinds of sounds? Are some sounds “structural” and others “decorative”?
- Think of form - does your statement have a beginning, middle, and/or end? Is there a “story” or an idea that develops? Could it work with half the duration? What makes the listener want to know or be able to anticipate what comes next?
Milestone
- For this milestone, we are primarily interested in a work-in-progress version of your musical statement (Part 3)—and that will be all you are expected to have on your website at this point. However, feel free to include anything on your webpage from Parts 1 and 2 if that's helpful to talk about your explorations and thinking for this milestone.
- Please be prepared to share your work-in-progress and offer feedback to others in class on Wednesday (4/21)
Final Homework Deliverables
Turn in all files by putting them on your 220a CCRMA webpage and submit ONLY your webpage URL to Canvas.
Your webpage should include:
- 1) your hw2 should live at https://ccrma.stanford.edu/~YOURID/220a/hw2
- 2) ChucK (.ck) files, as applicable, for Parts 1 through 3
- 3) sound (.wav) files, as applicable, for Parts 1 through 3
- 4) comments and reflections as you work through the homework
- 5) notes/title for your musical statement (Part 3)
- 6) submit ONLY your webpage URL to Canvas