Programming Project #2: "Featured Artist"

Music and AI (Music356/CS470) | Winter 2024 | Prof. Ge Wang

[image: Mosaiconastick.jpg]

In this programming project, we will learn to work with audio features for both supervised and unsupervised tasks: a real-time genre classifier and a feature-based audio mosaic tool. Using the latter, create a feature-based musical statement or performance!

Due Dates

  • Milestone (Phase One complete + Phase Two prototype): webpage due Monday (1/29, 11:59pm) | in-class critique Tuesday (1/30)
  • Final Deliverable: webpage due Monday (2/5, 11:59pm)
  • In-class Presentation: Tuesday (2/6)

Discord is Friend

  • direct any questions, ruminations, outputs, and interesting mistakes to our class Discord

Things to Think With

Tools to Play With

  • get the latest ChucK release (1.5.2.0 or higher)
    • on all platforms, you will be using the command-line version of chuck for this project
  • sample code for all phases (including optional video starter code)

GTZAN Dataset

  • next, you'll need to download the GTZAN dataset
    • 1000 30-second music clips, labeled by humans into ten genre categories

Phase One: Extract, Classify, Validate

  • understanding audio, audio features, FFT, feature extraction
  • extract different sets of audio features from GTZAN dataset
  • run real-time classifier using different feature sets
  • run cross-validation to evaluate the quality of the classifier based on different feature sets
  • you can find relevant code here
    • start playing with these, and reading through these to get a sense of what the code is doing
    • example-centroid.ck -- a basic example of using ChucK's unit analyzer framework (things connected using the upchuck operator =^) to extract an audio feature (a minimal sketch of such a chain appears after this list):
      • generate an input (a 440 Hz sine wave) -- this can be any audio source, e.g., adc for the microphone
      • take a Fast Fourier Transform (FFT) on a frame of audio (the frame size is determined by the FFT size)
      • use the output of the FFT analysis to compute the Spectral Centroid for that frame of audio
      • note how ChucK timing is used to precisely control how often to do a frame of analysis
      • .upchuck() is used to trigger an analysis, automatically cascading up the =^ chain
    • example-mfcc.ck -- this is like the previous example, but now we compute a multi-dimensional feature, Mel Frequency Cepstral Coefficients (MFCCs)
    • feature-extract.ck -- in a "real-world" scenario, we would extract multiple features; a FeatureCollector is used to aggregate multiple features into a single vector (see comments in the file for more details)
    • genre-classify.ck -- using the output of feature-extract.ck, do real-time classification by performing the same feature extraction and using k-NN to predict the likelihood of each genre category (see comments in the file for more details; a compressed sketch combining a FeatureCollector with k-NN appears after this list)
    • x-validate.ck -- using output of feature-extract.ck, do cross-validation to get a sense of the classifier quality
  • experiment by choosing different features and different numbers of features, extracting them on GTZAN, trying the real-time classifier, and performing cross-validation
    • available features: Centroid, Flux, RMS, RollOff, ZeroX, MFCCs, Chroma, Kurtosis
    • try at least five different feature configurations and evaluate the resulting classifier using cross-validation
      • keep in mind that the baseline score is .1 (a random classifier over 10 genres), and 1 is the max
      • how do different features, and different numbers of features, affect the classification results?
      • in your experiment, what configuration yielded the highest score in cross-validation?
  • briefly report on your experiments
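To make the unit analyzer pattern concrete, here is a minimal sketch in the spirit of example-centroid.ck (illustrative only; the parameter values are assumptions, not the actual course file):

 // minimal sketch: sine wave -> FFT -> spectral centroid
 SinOsc sine => FFT fft =^ Centroid centroid => blackhole;
 
 // set FFT size (this determines the analysis frame size)
 1024 => fft.size;
 // use a Hann window
 Windowing.hann(1024) => fft.window;
 // input frequency (this could be adc for the microphone instead)
 440 => sine.freq;
 
 while( true )
 {
     // trigger one frame of analysis, cascading up the =^ chain
     centroid.upchuck();
     // print the computed centroid for this frame of audio
     <<< "centroid:", centroid.fval(0) >>>;
     // ChucK timing controls the analysis rate: hop by half a frame
     (fft.size()/2)::samp => now;
 }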
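And a compressed sketch of the multi-feature + k-NN idea behind feature-extract.ck and genre-classify.ck (the feature choices and parameters below are just one possible configuration; the real files handle file I/O, labels, and much more -- consult them for exact KNN2 usage):

 // aggregate several features into a single vector
 adc => FFT fft;
 FeatureCollector combo => blackhole;
 fft =^ Centroid centroid =^ combo;
 fft =^ Flux flux =^ combo;
 fft =^ RMS rms =^ combo;
 fft =^ MFCC mfcc =^ combo;
 
 // analysis parameters (one possible configuration)
 4096 => fft.size;
 Windowing.hann(4096) => fft.window;
 20 => mfcc.numCoeffs;
 
 // a k-NN classifier (in practice, trained on vectors extracted from GTZAN)
 KNN2 knn;
 // knn.train( features, labels ); // float[][] vectors + int[] genre labels
 
 while( true )
 {
     // compute all connected features in one go
     combo.upchuck() @=> UAnaBlob blob;
     // blob.fvals() is the aggregated feature vector for this frame;
     // with a trained knn, predict a likelihood per genre, e.g.:
     // knn.predict( blob.fvals(), 3, probs ); // see genre-classify.ck
     (fft.size()/2)::samp => now;
 }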

Phase Two: Designing an Audio Mosaic Tool

  • you can find phase 2 sample code here
  • using what you've learned, build a database mapping sound frames (100::ms to 1::second) <=> feature vectors
    • curate your own set of audio files; these can be a mixture of
      • songs or song snippets; we will perform feature extraction on audio windows from beginning to end (in essence, each audio window is a short sound fragment with its own feature vector)
      • (optional) short sound effects (~1 second), for which you may wish to extract a single vector per sound effect
    • use/modify/adapt the feature-extract.ck code from Phase One to build your database of sound frames to feature vectors:
      • instead of generating one feature vector for the entire file, output a trajectory of audio windows and associated feature vectors
      • instead of outputting labels (e.g., "blues", "disco", etc.), output information to identify each audio window (e.g., filename and windowStartTime)
      • see reference implementation mosaic-extract.ck (a sketch of this windowed extraction appears after this list)
    • note this does not require any labels; like word2vec, we want to situate each sound window in an N-dimensional feature space
  • play with mosaic-similar.ck: a feature-based sound explorer to query your database and perform similarity retrieval (using KNN2)
  • using your database, your retrieval tool, concatenative synthesis, and the mosaic-synth-mic.ck and mosaic-synth-doh.ck examples, design an interactive audio mosaic generator (a sketch of the core retrieve-and-play loop appears after this list):
    • feature-based
    • real-time
    • takes any audio input (mic or any unit generator)
    • can be used for expressive audio mosaic creation
  • there are many functionalities you can choose to incorporate into your mosaic synthesizer
    • use keyboard or mouse control to affect mosaic parameters: synthesis window length, pitch shift (through SndBuf.rate), selecting subsets of sounds to use, etc.
    • a key to making this expressive is to try different sound sources; play with them A LOT, gain an understanding of the code, and experiment!
  • (optional) do this in the audiovisual domain
    • (idea) build an audiovisual mosaic instrument or music creation tool / toy
    • (idea) build a GUI for exploring sounds by similarity; you will need to reduce dimensions (using PCA or another technique) to 2 or 3 in order to visualize
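A minimal sketch of the windowed extraction idea behind mosaic-extract.ck (the filename and window size below are illustrative assumptions; the reference implementation handles arguments, multiple files, and output formatting):

 // slide analysis windows across one file: one feature vector per window
 SndBuf buf => FFT fft;
 FeatureCollector combo => blackhole;
 fft =^ Centroid centroid =^ combo;
 fft =^ MFCC mfcc =^ combo;
 
 4096 => fft.size;
 Windowing.hann(4096) => fft.window;
 
 // hypothetical input file
 "my-song.wav" => string filename;
 filename => buf.read;
 
 // window size: somewhere in the 100::ms to 1::second range
 250::ms => dur WINDOW;
 
 while( buf.pos() < buf.samples() )
 {
     // remember where this window starts
     buf.pos() => int windowStartSample;
     // let one window of audio flow through the analyzers
     WINDOW => now;
     // compute the feature vector for this window
     combo.upchuck() @=> UAnaBlob blob;
     blob.fvals() @=> float features[];
     // output window identity + features (instead of a genre label)
     chout <= filename <= " " <= windowStartSample;
     for( 0 => int i; i < features.size(); i++ )
         chout <= " " <= features[i];
     chout <= IO.newline();
 }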
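And a sketch of the retrieve-and-play loop at the heart of the mosaic-synth examples, assuming your extracted database has been loaded into parallel arrays (windowFeatures, windowFilename, windowStartSample, windowID are hypothetical names for illustration; see mosaic-similar.ck for exact KNN2 usage):

 // hypothetical database, filled by parsing your mosaic-extract output
 float windowFeatures[0][0];
 string windowFilename[0];
 int windowStartSample[0];
 int windowID[0];
 
 // live input feature extraction (must match the extraction chain!)
 adc => FFT fft;
 FeatureCollector combo => blackhole;
 fft =^ Centroid centroid =^ combo;
 fft =^ MFCC mfcc =^ combo;
 4096 => fft.size;
 Windowing.hann(4096) => fft.window;
 
 // playback voice, with an envelope to soften the window seams
 SndBuf buf => ADSR env => dac;
 env.set( 5::ms, 10::ms, 1.0, 20::ms );
 
 // k-NN over the database
 KNN2 knn;
 // knn.train( windowFeatures, windowID ); // index the feature vectors
 
 250::ms => dur WINDOW;
 int nearest[1];
 
 while( true )
 {
     // analyze a window of live input
     combo.upchuck() @=> UAnaBlob blob;
     // find the most similar database window, e.g.:
     // knn.search( blob.fvals(), 1, nearest );
     // cue that window for playback; pitch shift via .rate
     // windowFilename[nearest[0]] => buf.read;
     // windowStartSample[nearest[0]] => buf.pos;
     1.0 => buf.rate;
     // play one window's worth, enveloped to smooth the transition
     env.keyOn();
     WINDOW - 20::ms => now;
     env.keyOff();
     20::ms => now;
 }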

Phase Three: Make a Musical Mosaic!

  • use your prototype from Phase Two to create a feature-based musical mosaic in the form of a musical statement or performance
  • (optional) do this in the audiovisual domain

Reflections

  • write ~300 words of reflection on your project. It can be about your process, or the product. What were the limitations, and how did you try to get around them?

Milestone Deliverables

submit a webpage for the project so far, containing:

  • a brief report of what you did / tried / observed in Phase One, and a brief description of your experiments in Phase Two so far
  • a demo video (doesn't have to be polished) briefly documenting your experiments/adventures in Phase Two, and a very preliminary sketch of Phase Three (a creative statement or performance using your system)
  • code, feature files, and usage instructions needed to run your system
  • list and acknowledge the source material (audio and any video) and people who have helped you along the way; source audio/video do not need to be posted (can submit these privately in Canvas)
  • In class, we will view your webpage/demo video and give one another feedback for this milestone.

Final Deliverables

  • create a CCRMA webpage for this etude
  • your webpage is to include
    • a title and description of your project (feel free to link to this wiki page)
    • all relevant chuck code from all three phases
      • phase 1: all code used (extraction, classification, validation)
      • phase 2: your mosaic generator
      • phase 3: code used for your musical statement
    • video recording of your musical statement (please start early!)
    • your 300-word reflection
    • any acknowledgements (people, code, or other things that helped you through this)
  • submit to Canvas only your webpage URL