Q3osc: overview

From CCRMA Wiki
'''q3osc''' is a heavily modified version of the ioquake3 gaming engine featuring an integrated [http://www.audiomulch.com/~rossb/code/oscpack/ oscpack] implementation of Open Sound Control for bi-directional communication between a game server and a multi-channel ChucK audio server. By leveraging ioquake3’s robust physics engine and multiplayer network code with oscpack’s fully-featured OSC specification, game clients and previously unintelligent in-game weapon projectiles can be repurposed as behavior-driven independent OSC-emitting virtual sound-sources spatialized within a multi-channel audio environment for real-time networked performance.  
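Concretely, each OSC message is a small binary packet: a null-padded address string, a null-padded type-tag string, and big-endian arguments. The following Python sketch hand-encodes such a message for a projectile-position event; the address and argument layout here are illustrative, not q3osc's actual namespace.

```python
import struct

def osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary.
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode a minimal OSC 1.0 message: int args as 'i', float args as 'f'."""
    typetags = ","
    payload = b""
    for a in args:
        if isinstance(a, int):
            typetags += "i"
            payload += struct.pack(">i", a)   # big-endian int32
        elif isinstance(a, float):
            typetags += "f"
            payload += struct.pack(">f", a)   # big-endian float32
        else:
            raise TypeError("only int32/float32 args in this sketch")
    return osc_pad(address.encode()) + osc_pad(typetags.encode()) + payload

# A hypothetical projectile id plus its world coordinates.
pkt = osc_message("/projectile/pos", 7, 512.0, -64.0, 128.0)
```

A packet like this could then be sent over UDP to the listening audio server.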
 
q3osc updates the manner in which the quake3 gaming engine can be used to export player locations, entity movements, and actions from the q3 server via OSC.  While q3osc is working from a fresh [http://ioquake3.org/ ioquake3] codebase, the inspiration came from Julian Oliver's excellent [http://julianoliver.com/q3apd  Q3APD] project, which unfortunately makes use of the string-based [http://en.wikipedia.org/wiki/FUDI FUDI] protocol instead of the more flexible, standards-based [http://www.cnmat.berkeley.edu/OpenSoundControl/ OSC] protocol.
Within the virtual environment, performers can fire different colored projectiles at any surface in the rendered 3D world to produce various musical sounds. As each projectile contacts the environment, the bounce location is then used to spatialize sound across the multi-speaker sound field in the real-world listening environment. Currently in use with the Stanford Laptop Orchestra ([http://slork.stanford.edu SLOrk]), q3osc is designed to be scaled to work with a distributed environment of multiple computers and multiple hemispherical speaker arrays.
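As a rough sketch of this kind of spatialization, the following Python function maps a bounce position to per-speaker gains for a ring of eight speakers using cosine-weighted, roughly constant-power panning. The actual panning used in the ChucK patches may differ; this only illustrates the idea of mapping a game-world location onto a real-world speaker array.

```python
import math

def ring_gains(x: float, y: float, n_speakers: int = 8) -> list[float]:
    """Map a bounce position (x, y) in the game world to gains for a
    ring of speakers, panning toward the source's azimuth."""
    azimuth = math.atan2(y, x)
    gains = []
    for k in range(n_speakers):
        speaker_az = 2 * math.pi * k / n_speakers
        # Angular distance between source and speaker, wrapped to [-pi, pi].
        d = (azimuth - speaker_az + math.pi) % (2 * math.pi) - math.pi
        # Full gain at the speaker, fading to zero a quarter-turn away.
        gains.append(math.cos(d) if abs(d) < math.pi / 2 else 0.0)
    # Normalize for (approximately) constant total power.
    norm = math.sqrt(sum(g * g for g in gains)) or 1.0
    return [g / norm for g in gains]
```

A bounce directly in front of speaker 0 yields a peak gain there, spilling into its two neighbors and silence elsewhere.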
  
After using Q3APD for the 8-channel work [http://ccrma.stanford.edu/~rob/220c maps & legends], it became apparent that while the mod was great, the idea could be improved and further explored, especially by using OSC instead of FUDI and by exporting additional player gestures and data-points from quake3 to an external audio engine. Because Q3APD used the string-based FUDI UDP implementation rather than a full standards-based OSC implementation, only PD could reasonably receive Q3APD's outgoing data-streams. Since there are other excellent audio languages available, OSC is the better choice.
q3osc allows composers to program interactive performance environments using a combination of traditional gaming level-development tools and interactive audio processing software like ChucK, SuperCollider, Pure Data or Max/MSP.
 
With q3osc, the goal is to use a fully-featured OSC implementation like [http://www.audiomulch.com/~rossb/code/oscpack/ oscpack] to not only recreate the basic user-coordinate tracking from Q3APD, but to also expand the scope of usable in-game parameters to include missile objects and other actionable items and events in the game world. Using OSC, we can implement audio engines built in any OSC-capable audio software, such as [http://chuck.cs.princeton.edu/ ChucK], [http://cycling74.com Max/MSP], [http://www.audiosynth.com/ SuperCollider] or [http://crca.ucsd.edu/~msp/software.html PD].
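On the audio-engine side, incoming messages are typically routed by OSC address pattern. The toy Python dispatcher below sketches that routing for a per-entity namespace; the address scheme ("/projectile/&lt;id&gt;/bounce") is an assumption for illustration, not q3osc's documented one.

```python
from typing import Callable

handlers: dict[str, Callable] = {}
events: list[tuple] = []

def on(pattern: str):
    """Register a handler for an OSC address pattern ('*' matches one part)."""
    def register(fn):
        handlers[pattern] = fn
        return fn
    return register

def dispatch(address: str, *args) -> None:
    # Match e.g. "/projectile/42/bounce" against "/projectile/*/bounce".
    parts = address.strip("/").split("/")
    for pattern, fn in handlers.items():
        pparts = pattern.strip("/").split("/")
        if len(pparts) == len(parts) and all(
            p == "*" or p == a for p, a in zip(pparts, parts)
        ):
            fn(address, *args)

@on("/projectile/*/bounce")
def bounce(address, x, y, z):
    # In a real patch this would trigger a spatialized sound event.
    events.append(("bounce", address, x, y, z))

dispatch("/projectile/42/bounce", 512.0, -64.0, 128.0)
```

In practice this role is played by the address-pattern matching built into ChucK, PD, Max/MSP or SuperCollider's own OSC receivers.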
By adding behavioral controls to in-game entities like plasma-bolts and bfg-bolts (both of which have interesting visual attributes), including in-game behaviors like bouncing and attraction/homing toward the player or other in-game entities, we can create audio gestures which tightly follow the visual gestures.
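A minimal sketch of such an attraction/homing behavior, in Python: each frame the projectile's velocity is blended toward its target while its speed is held constant, so homing changes direction rather than speed. The function name and constants are illustrative, not ioquake3's.

```python
import math

def home_step(pos, vel, target, strength=0.2, dt=1.0):
    """One physics step of a homing projectile: steer velocity a fraction
    of the way toward the target, then advance the position."""
    speed = math.sqrt(sum(v * v for v in vel)) or 1.0
    to_target = [t - p for t, p in zip(target, pos)]
    dist = math.sqrt(sum(d * d for d in to_target)) or 1.0
    # Desired velocity: straight at the target, at the current speed.
    desired = [d / dist * speed for d in to_target]
    new_vel = [(1 - strength) * v + strength * d
               for v, d in zip(vel, desired)]
    # Renormalize so the blend turns the projectile without slowing it.
    ns = math.sqrt(sum(v * v for v in new_vel)) or 1.0
    new_vel = [v / ns * speed for v in new_vel]
    new_pos = [p + v * dt for p, v in zip(pos, new_vel)]
    return new_pos, new_vel
```

Emitting OSC from each such step is what turns the visual arc of a homing bolt into a matching audio gesture.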
 
<div style="vertical-align: top; float:right; margin:10px 0px 20px 30px;">http://ccrma.stanford.edu/~rob/q3osc/images/quintet_caps_400.png</div>
 

Revision as of 08:54, 30 May 2008

shot0041a.jpg

Links

http://ccrma.stanford.edu/~rob