A composer is a person who writes music. The term refers particularly to someone who writes music in some form of musical notation, thus allowing others to perform it. This distinguishes the composer from a musician who improvises or plays an instrument.
A composer's responsibility is to make the game experience more appealing through in-game music. To that end, the composer can either select existing songs or compose new ones that fit the gameplay and create a richer experience.
8-bit machines and chip music
At the time video games began to blossom as a form of entertainment in the 1970s, music was stored on physical media in analog waveforms, such as compact cassettes and phonograph records. Such components were expensive and prone to breakage under heavy use, making them less than ideal for an arcade cabinet, though in rare cases they were used (Journey). A more affordable method of having music in a video game was digital: a specific computer chip would change electrical impulses from computer code into analog sound waves on the fly for output on a speaker. Sound effects for the games were also generated this way.
While this allowed for the inclusion of music in arcade games of the 1970s, it was usually monophonic, looped, or used sparingly between stages or at the start of a new game, as in Pac-Man or Pole Position. The decision to include any music in a video game meant that at some point it would have to be transcribed into computer code by a programmer, whether or not the programmer had musical experience. Some music was original; some was public domain, such as folk songs. The popular Atari 2600 home system, for example, was capable of generating only two tones, or "notes," at a time.
This approach to game development carried on into the 1980s. As silicon advanced and the cost of technology fell, a definitive new generation of arcade machines and home consoles emerged. In arcades, machines based on the Motorola 68000 CPU and Yamaha YM sound-generator chips allowed for several more tones or "channels" of sound, sometimes eight or more. Home console systems saw a comparable upgrade in sound.
The approach to game music development in this period usually involved simple tone generation and/or frequency modulation synthesis to simulate instruments for melodies, and a "noise channel" for simulating percussion. Early use of PCM samples in this era was limited to short sound bites (Monopoly) or as an alternative for percussion sounds (Super Mario Bros. 3). On home consoles, the music often had to share the available channels with other sound effects. For example, if a spaceship fired a laser beam, and the laser used a 1400 Hz tone, then whichever channel the music was using would stop playing the music and start playing the sound effect.
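The channel-sharing scheme described above can be sketched in a few lines. This is a hypothetical illustration, not any real console's sound driver: the class names, channel count, and the idea of returning the interrupted source so music can resume are all invented for clarity.

```python
class SoundChannel:
    """One hardware tone channel; it can play exactly one source at a time."""
    def __init__(self, name):
        self.name = name
        self.current = None  # what this channel is currently playing

    def play(self, source):
        self.current = source


class Mixer:
    """Toy model of a sound chip with a fixed number of shared channels."""
    def __init__(self, num_channels):
        self.channels = [SoundChannel(f"ch{i}") for i in range(num_channels)]

    def play_music(self, channel_index, melody):
        self.channels[channel_index].play(("music", melody))

    def play_effect(self, channel_index, effect):
        # The sound effect steals the channel: music on that channel stops
        # until the effect ends. Return what was interrupted so the driver
        # could resume it afterwards.
        interrupted = self.channels[channel_index].current
        self.channels[channel_index].play(("sfx", effect))
        return interrupted


mixer = Mixer(num_channels=3)
mixer.play_music(0, "melody_loop")
saved = mixer.play_effect(0, "laser_1400hz")  # laser preempts the music channel
```

After the effect plays, a real driver would restore `saved` to the channel, which is why players of the era heard melodies briefly drop out during busy action.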
The oncoming generation of Arcade, home consoles, and home computers would reshape the approach to music in video games.
Early digital synthesis and sampling
The first home computer to make use of digital signal processing in the form of sampling was the Commodore Amiga in 1985. The computer's sound chip featured four independent 8-bit digital-to-analog converters. Instead of simply generating a waveform that sounded like a simplistic "beep," as with FM synthesis, this allowed short samples of pre-recorded sound waves to be played back from memory through the sound chip. A developer could take a "sample" of a real instrument or sound at significantly higher quality and fidelity than was previously available, or than would be available on home computers for several years. This was an early example of what would later be called wavetables and soundfonts. Being both first and affordable, the Amiga would remain a staple tool of early sequenced music composing, especially in Europe.
IBM PC clones of 1985 would not see significant development of multimedia abilities for a few more years, and sampling would not become popular in other video game systems for several years. Though sampling had the potential to produce much more realistic sound, each sample required far more memory. This was at a time when all memory, whether solid state (cartridge), magnetic (floppy disk), or otherwise, was still very costly per kilobyte. Sequenced sound-chip music, on the other hand, was generated with a few lines of comparatively simple code and took up far less precious memory.
As the cost of magnetic storage declined in the form of diskettes, the evolution of video game music on the Amiga (and of game music development in general) shifted toward sampling in some form. It took some years before Amiga game designers learned to wholly utilize digitized sound effects in music (an early exception was the title music of the 1986 text adventure The Pawn). By this time, computer and game music had already begun to form its own identity, and many music makers intentionally tried to produce music that sounded like that heard on the Commodore 64, which gave rise to the chiptune genre.
As with the Amiga, sampled sound and music began to appear in arcades on certain specialized arcade system board revisions. In 1991, games like Street Fighter II on the CPS-1 used voice samples extensively alongside sound effects and percussion.
The evolution also carried into home console video games, most notably with the release of the Super Famicom in 1990 and the SNES in 1991. This console sported a specialized custom Sony chip for both sound generation and special hardware DSP. It was capable of eight channels of sampled sound at up to 16-bit resolution, possessed an impressive selection of DSP effects, including a type of ADSR envelope usually seen in high-end synthesizers of the period, and offered full stereo sound. This allowed experimentation with applied acoustics in video games, such as musical acoustics (early games like Castlevania IV, F-Zero, Final Fantasy IV, and Gradius III, and later Chrono Trigger), directional acoustics (Star Fox), and spatial acoustics (Dolby Pro Logic was used in some games, like King Arthur's World and Jurassic Park), as well as environmental and architectural acoustics (Zelda III, Secret of Evermore). Many games also made heavy use of the high-quality sample playback (Super Star Wars, Tales of Phantasia). The only real limitation of this powerful setup was the still-costly solid state memory.
The more dominant approach for games based on CDs, however, was shifting toward streaming audio.
Pre-recorded and streaming music
Using entirely pre-recorded music had many advantages over sequencing in terms of sound quality. Any number and type of instruments could be used in studio production, and a single track could simply be recorded for playback during the game. Quality was limited only by the effort put into mastering the track itself. The memory cost that had previously been a concern was somewhat addressed as optical media became the dominant medium for game software. CD-quality audio allowed for music and voice that had the potential to be truly indistinguishable from any other source or genre of music.
In the same timeframe, from the late 1980s to the mid-1990s, the sampling approach largely skipped over PC games. Early PC gaming was limited to a 1-bit PC speaker, a legacy of the original IBM standard, which was poor at generating complex sounds. Expansion cards such as the AdLib sound card allowed for FM synthesis, and developers used MIDI sequencing to drive it (Doom). A typical PC lacked the specialized computing power to handle sample playback, or a way to output it. Rather than have game developers do their own sampling, wavetable sequencing became a popular alternative: a wavetable with pre-made samples conforming to General MIDI would be installed on a sound card, either by design or through a daughterboard. The quality of these wavetable samples ranged wildly from one manufacturer to the next, but Roland's products served as a standard until the release of Creative's Sound Blaster in 1989. The Sound Blaster represented an affordable catch-all solution for PC users: it included a joystick port, MIDI support with AdLib-compatible FM synthesis, a standardized port for daughterboards (Creative's own Wave Blaster as well as other companies' products), and 8-bit 22.05 kHz (later 44.1 kHz) digital audio recording and playback of a single stereo channel. Even so, sampling did not come into wide use for PC games, because only one sample could be played at a time. Sequenced music remained the most common form of PC game music until the mid-1990s, when CD-ROM drives became a more common feature of PCs and game software and storage capacity generally increased. This gave developers the space to stream their soundtracks.
Simple playback of digitized music had been experimented with on earlier home computers and arcade machines – a famous example being the Teenage Mutant Ninja Turtles arcade game, which plays the title music of the cartoon. On fourth-generation home video game systems and PCs, this was limited to playing a Red Book audio track from a CD while the game was in play (Sonic CD). However, regular CD audio had several disadvantages. Optical drive technology was still limited in spindle speed, so playing an audio track from the game CD meant the system could not access data again until it stopped the track. Looping, the most common form of game music, was also a problem: when the laser reached the end of a track, it had to move back to the beginning to start reading again, causing an audible gap in playback.
To address these drawbacks, some PC game developers designed their own container formats in-house, in some cases for each application, to stream compressed audio. This cut back on the memory used for music on the CD, allowed for much lower latency and seek time when finding and starting a piece of music, and also allowed for much smoother looping, since the data could be buffered. A minor drawback was that compressed audio had to be decompressed, which put load on the system's CPU. As computing power increased, this load became minimal, and in some cases dedicated chips (such as those on a sound card) would handle all the decompression.
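The buffering trick that makes looping seamless can be shown in miniature. The sketch below is illustrative only (no real engine's API): it treats a decoded track as a list of samples and refills fixed-size buffers, wrapping to the loop start inside the refill, so playback never has to pause and seek the way a CD laser did.

```python
def stream_looped(track, buffer_size, loop_start=0, chunks=None):
    """Yield fixed-size playback buffers from `track`, wrapping at the end.

    Because the wrap happens while filling a buffer (not between buffers),
    the loop point produces no gap in the output stream.
    """
    pos = 0
    emitted = 0
    while chunks is None or emitted < chunks:
        buf = []
        while len(buf) < buffer_size:
            if pos >= len(track):
                pos = loop_start  # wrap inside the refill: no audible gap
            buf.append(track[pos])
            pos += 1
        yield buf
        emitted += 1


track = list(range(10))  # stand-in for decoded audio samples
buffers = list(stream_looped(track, buffer_size=4, chunks=3))
# buffers -> [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 0, 1]]
```

Note how the third buffer spans the loop seam (`[8, 9, 0, 1]`); a CD player reading Red Book audio cannot do this, because it has to physically reposition the laser at the track boundary.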
Fifth-generation home console systems also developed specialized streaming formats and containers for compressed audio playback. Sony would call theirs Yellow Book and offer the standard to other companies. Games took full advantage of this ability, sometimes with highly praised results (Castlevania: Symphony of the Night). Games ported from arcade machines, which continued to use FM synthesis, often saw superior pre-recorded music streams on their home console counterparts (Street Fighter Alpha 2). Even though the game systems were capable of "CD quality" sound, these compressed audio tracks were not true "CD quality." Many had lower sampling rates, though not so much lower that most consumers would notice. Some games continued to use full Red Book CD audio for their soundtracks (the Wipeout series), which could even be played in a standard CD player.
This overall freedom gave video game music the equal footing with other popular music that it had previously lacked. A musician could now, without needing to learn programming or the game's architecture, independently produce the music to their satisfaction. This flexibility was exercised as popular mainstream musicians began lending their talents to video games. An early example was Way of the Warrior on the 3DO, with music by White Zombie; a better-known example is Trent Reznor's score for Quake.
An alternative approach, as with the TMNT arcade game, was to take pre-existing music not written exclusively for the game and use it in the game. Star Wars: X-Wing vs. TIE Fighter and subsequent Star Wars games took music composed by John Williams for the Star Wars films of the 1970s and 1980s and used it for their soundtracks.
Both commissioning new music streams specifically for a game and licensing previously released recordings remain common approaches to this day. Extreme-sports video games commonly ship with recent releases by popular artists (SSX, Tony Hawk, Initial D), as does any game with a strong cultural or demographic theme that ties into music (Need for Speed: Underground, Grand Theft Auto). Sometimes a hybrid of the two is used, as in Dance Dance Revolution.
Sequenced samples continue to be used in modern gaming for many applications, mostly in RPGs. Sometimes a cross between sequenced samples and streamed music is used. Games such as Republic: The Revolution (music composed by James Hannigan) and Command & Conquer: Generals (music composed by William Brown) have utilized sophisticated systems governing the flow of incidental music, stringing together short phrases based on the on-screen action and the player's most recent choices. Other games dynamically mix the sound based on cues from the game environment. In SSX, for example, if the snowboarder takes to the air after jumping from a ramp, the music softens or muffles a bit, and the ambient noise of rushing wind grows louder to emphasize the sensation of being airborne; on landing, the music resumes regular playback until its next cue. LucasArts pioneered this interactive music technique with its iMUSE system, used in its early adventure games and the Star Wars flight simulators Star Wars: X-Wing and Star Wars: TIE Fighter. Action games such as these change the music dynamically to match the level of danger, and stealth-based games sometimes rely on such music.
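The cue-based mixing described above amounts to a small state-to-gain lookup. The sketch below is a minimal illustration of the idea, not the SSX or iMUSE implementation; the state names and gain values are invented for the example.

```python
def mix_for_state(state):
    """Return (music_gain, ambience_gain) for a gameplay state.

    Unknown states fall back to normal ground playback. Values are
    linear gains in [0.0, 1.0], chosen purely for illustration.
    """
    levels = {
        "ground":   (1.0, 0.2),  # normal playback, wind barely audible
        "airborne": (0.4, 0.9),  # muffle the music, bring up the wind
    }
    return levels.get(state, (1.0, 0.2))


music_gain, ambience_gain = mix_for_state("airborne")
```

A real engine would interpolate between these targets over a few hundred milliseconds rather than switching instantly, which is what makes the transition feel like a mix rather than a cut.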
In the past, playing one's own music during a game usually meant turning the game's volume down and using a separate music player. Some early exceptions were possible on PC/Windows, where the in-game music could be turned down independently while a music-playing program ran in the background.
Some PlayStation games supported swapping the game CD for a music CD, although when the game needed data, the discs had to be swapped back. One of the earliest such games, Ridge Racer, was loaded entirely into RAM, letting the player insert a music CD to provide a soundtrack throughout the gameplay. In Vib-Ribbon, this became a gameplay feature, with the game generating levels based entirely on the music of whatever CD the player inserted.
It wasn't until the Xbox arrived in the sixth generation of home consoles, with its ability to copy music from a CD onto its internal hard drive, that gamers could use their own music more seamlessly with gameplay than ever before. The feature, called Custom Soundtrack, had to be enabled by the game developer. It carried over into the seventh generation with the Xbox 360.
The Wii can also play custom soundtracks if the game enables them (Excite Truck).
In games like Need for Speed Carbon: Own the City, the PlayStation Portable also lets the player play their own music from a Memory Stick.
Current application and future developments
The Xbox 360 offers Dolby Digital support, 16-bit sampling and playback at 48 kHz, hardware codec streaming, and up to 256 simultaneous audio channels. While powerful and flexible, none of these features represents a major change in how game music is made compared with the previous generation of consoles. PCs continue to rely on third-party devices for in-game sound reproduction, and Sound Blaster, despite being largely the only major player in the entertainment audio expansion card business, continues to advance its product development at a significant pace.
Future technology, while also powerful, does not represent a fundamental shift in video game music creation either. The PlayStation 3 handles multiple surround sound technologies, including Dolby TrueHD and DTS-HD. Nintendo's Wii shares many audio components with the GameCube of the previous generation, including Dolby Pro Logic II. These features are extensions of technology already in use.
Today's game developer has many choices in how to develop music. Changes in video game music creation will more likely have little to do with technology and more to do with other factors of game development as a business. As sales of video game music separate from the games themselves became marketable in the West (compared with Japan, where game music CDs had been selling for years), business considerations gained a level of influence they previously had little of. Musicians outside the developer's immediate employment, such as outside composers and pop artists, have been contracted to produce game music just as they would for a theatrical film. Many other factors have growing influence as well, such as content editing, development politics, and executive input.
Game music as a genre
Many of the games made for the Nintendo Entertainment System and other early game systems featured a similar style of composition, which may come closest to being described as the "video game genre" in terms of musical composition, as opposed to "video game music" merely for being in a video game or played on a video game console. Some compositional features of this genre continue to influence certain music today, though game soundtracks currently tend to emulate film soundtracks more so than this classic genre. The genre's compositional elements may have developed because of technological restraints. It might also have been influenced by technopop bands such as Yellow Magic Orchestra, which were quite popular during the period in which video game music took on its trademark sound. Features of this genre include:
Songs almost always have main or "verse" sections consisting of chord progressions of four or more chords (similar to much J-Pop and 1980s Western pop), as opposed to the two-chord progressions found in most Western pop verses. The choruses also often contain four or more different chords. Many songs feature a progression that is extremely popular in J-Pop, which (in the key of C) could be given as F minor, C minor, G major, C minor, with C major quickly inserted before the series repeats. Overall, there are generally more sections in a song than in a comparable pop song, which helps reduce the repetitiveness of music that was generally played as a continuous loop. This also sets the genre apart from J-Pop and most other forms of popular music.
Songs feature a heavy amount of synchronization between instruments, in a way that would be difficult for a human to play. For example, although the tones featured in NES music can be thought of as emulating a traditional four-piece rock band (a triangle wave used as a bass, two pulse waves analogous to two guitars, and an affected white-noise channel used for drums), and although video game music was influenced by the rock and pop music of the time, composers would often go out of their way to compose complex and rapid sequences of notes. This has been compared to composition during the Baroque period, when composers are believed to have compensated for instruments such as the harpsichord (which does not allow for musical expression based on volume) by focusing more on musical embellishments. Composers were also limited in polyphony, the number of notes that can be played at once; only three notes can be played simultaneously on the Nintendo Entertainment System, and a great deal of effort went into creating the illusion that more notes were playing. Since the late 1990s, musical groups covering these melodies have sprung up. One such group is The Minibosses, who attempt to emulate the melodies as closely as possible using real instruments. Another is The NESkimos, who opt to explore the songs artistically and create entirely new songs out of them. See also MegaDriver.
The bassline of a large percentage of tunes from the 8-bit period consisted of notes played in the rhythm of a quarter note followed immediately by two eighth notes, with the pattern repeating across most beats. The note played would often be the root of the chord.
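Two of the techniques above, the illusion of extra polyphony and the quarter-plus-two-eighths bassline, can be sketched symbolically. This is a hedged illustration, not authentic NES driver code: notes are plain strings, durations are in beats, and the "frame" rate of the arpeggio is arbitrary.

```python
def arpeggiate(chord, frames):
    """Simulate a chord on a single channel by cycling its tones rapidly.

    With only three simultaneous notes available, composers often spread a
    chord across time instead of across channels (a fast arpeggio).
    """
    return [chord[i % len(chord)] for i in range(frames)]


def bassline(roots):
    """For each chord root, emit a quarter note then two eighth notes.

    Durations are in beats: 1.0 for a quarter note, 0.5 for an eighth.
    """
    notes = []
    for root in roots:
        notes.append((root, 1.0))  # quarter note
        notes.append((root, 0.5))  # eighth note
        notes.append((root, 0.5))  # eighth note
    return notes


# A C major triad "played" on one channel over six frames:
sequence = arpeggiate(["C", "E", "G"], frames=6)
# sequence -> ["C", "E", "G", "C", "E", "G"]

# A two-bar root-note bassline over a C and a G chord:
line = bassline(["C", "G"])
```

The arpeggio trick is why many 8-bit tracks have their characteristic shimmering, trilled chords: the ear fuses the rapid cycle back into a single harmony.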
Video game music outside of video games
Appreciation for video game music, particularly from the third and fourth generations of home video game consoles and sometimes newer generations, remains strong among fans and composers alike, even apart from the games themselves. Melodies and themes from 20 years ago continue to be reused in newer generations of video games. Themes from the original Metroid by Hirokazu Tanaka can still be heard in today's Metroid games, as arranged by Kenji Yamamoto.
Video game music soundtracks were sold separately on CD in Japan well before the practice spread to other countries. Interpretive albums, remixes, and live performances were also common variations on original soundtracks (abbreviated OST). Koichi Sugiyama was an early figure in this practice; following the release of Dragon Quest in 1986, a CD of his compositions performed live by the London Philharmonic Orchestra was released (later performances followed from other groups, including the Tokyo Philharmonic Orchestra and the NHK Symphony Orchestra). Yuzo Koshiro, another early figure, released a live performance of the ActRaiser soundtrack. Both Koshiro's and fellow Falcom composer Mieko Ishikawa's contributions to Ys music would have such lasting impact that more albums were released of Ys music than of almost any other game music.
Like anime soundtracks, these soundtracks and even sheet music books were usually marketed exclusively in Japan, so interested non-Japanese gamers had to import them through online or offline firms specifically dedicated to video game soundtrack imports. This has become somewhat less of an issue recently, as domestic publishers of anime and video games have produced Western equivalents of the OSTs for sale in the UK and US, though in most cases only for the most popular titles.
Other original composers of lasting themes from this era have gone on to organize public symphonic concert performances of their game work. Koichi Sugiyama was again the first, with his "Family Classic Concert" in 1987, and he has continued concert performances almost annually. In 1991 he also formed a series called Orchestral Game Concerts, notable for featuring other talented game composers such as Yoko Kanno (Nobunaga's Ambition, Romance of the Three Kingdoms, Uncharted Waters), Nobuo Uematsu (Final Fantasy), Keiichi Suzuki (Mother/EarthBound), and Kentaro Haneda (Wizardry).
The global popularity of video game music began to surge with Squaresoft's successes, particularly Chrono Trigger, Final Fantasy VI, and Final Fantasy VII. Nobuo Uematsu's compositions for Final Fantasy IV were arranged into Final Fantasy IV: Celtic Moon, a performance by string musicians with strong Celtic influence, recorded in Ireland. The Love Theme from the same game has been used as an instructional piece of music in Japanese schools.
On August 20, 2003, for the first time outside Japan, music written for video games from all over the world, ranging from Final Fantasy to The Legend of Zelda, was performed by a live orchestra: the Czech National Symphony Orchestra, in a Symphonic Game Music Concert in Germany. The event was held as the official opening ceremony of Europe's biggest video game trade fair, the GC Games Convention, and was repeated in 2004, 2005, and 2006.
On November 17, 2003, Square Enix launched Final Fantasy Radio on America Online. The station initially featured complete tracks from Final Fantasy XI and Final Fantasy XI: Rise of the Zilart, and samplings from Final Fantasy VII through Final Fantasy X.
The first officially sanctioned Final Fantasy concert in the United States was performed by the Los Angeles Philharmonic Orchestra at Walt Disney Concert Hall in Los Angeles, California, on May 10, 2004. All seats sold out in a single day. "Dear Friends: Music from Final Fantasy" followed and was performed in various cities across the United States.
On July 6, 2005, the Los Angeles Philharmonic Orchestra held Video Games Live, a concert founded by two video game music composers, at the Hollywood Bowl. The concert featured a variety of video game music, ranging from Pong to Halo 2, and incorporated real-time video feeds synchronized with the music, as well as laser and lighting effects. Video Games Live has been touring worldwide since.
From April 20 to 27, 2007, the Eminence Symphony Orchestra, the only orchestra dedicated to video game and anime music, performed the first part of its annual tour, the "A Night in Fantasia" concert series, in Australia. While Eminence had performed video game music as part of its concerts since its inception, the 2007 concert marked the first time the entire setlist consisted of pieces from video games. Up to seven of the world's most famous game composers were also in attendance as special guests.