
From separation to integration
Strengths and weaknesses of sound-design film music

Emilio Audissino
December 2022

DOI : https://dx.doi.org/10.56698/filigrane.1328



In contemporary scoring practices, the boundaries between what are traditionally called “music” and “sound effects” have narrowed and come to overlap. This practice has benefited from technical innovations in sound reproduction – Dolby, THX, DTS… – and from the technological convergence and digitalisation of the film-making processes: films are edited on a computer workstation, sound effects are manipulated on a computer workstation, music is composed and often partly performed on a computer workstation, and the final master is completed on a computer workstation. Whereas previously all the ingredients were produced not only in separate departments but also through distinct technological processes, everything now passes through the computer: the categorical differences of the past blur and merge in the shared journey through binary code. Hans Zimmer has been a pioneer of this technological convergence, anticipating (and influencing) the contemporary style of film music, one that integrates tightly with the film’s overall sound design.

Sound studies scholars and sound designers have reported on the positive sides – the strengths and opportunities – of the new “integrated soundtrack” and of the exchanges between sound design and music, focusing on the new outcomes (far from the clichés of symphonic film composition), on the erosion of the barriers between sonic categories, and on the development of creative solutions. In this article, after providing a brief overview of the path towards the contemporary integrated soundtrack and sound design, and after summarising the strengths and opportunities, I address some of the potential threats and risks of this new trend, insofar as they can affect film-music practitioners, viewers, and film music in general. In my conclusions, I propose a more balanced approach that takes the current trends into account.


In today’s scoring practice the boundaries between “music” and “sound effects” have become thinner and overlapping. This practice has been favoured by innovations in sound technology and by the digitalisation of the post-production processes. Whereas once all the ingredients were created not only in separate departments but through separate technological procedures, now everything converges into digital processing. Hans Zimmer has been a pioneer of this technological convergence, anticipating (and influencing) the contemporary style of film music, one that seeks a tight integration with the film’s overall sound design. Laudatory accounts have already been offered of the new “integrated soundtrack”. In this article, after providing a short summary of the path to the contemporary integrated soundtrack and sound-design style, and after summarising the strengths and opportunities, I also provide some discussion of the potential threats and risks of this new trend; threats and risks that can have an impact on the film-music practitioners, on the viewers, and on film music in general.


Index by keyword: Film Music, Film History, Film Style, Sound Design, SWOT Analysis.

Full text


In this article I intend to share some concerns about the current trends of cinematic sound, in particular concentrating on how the contemporary “sound-design score” facilitated by the rise of the “integrated soundtrack” has been impacting on film music. The term “integrated soundtrack” means a treatment and a conceptualisation of the three sonic components of a film’s soundtrack (dialogue, music, and sound effects) dovetailing and fusing into one another, in such a way that “sound design and music are blended into a kind of conceptual unity”1 to produce a sonic composite “that consciously combine[s] sound design and music into the overall concept and design of screen media”2. Publications on the integrated soundtrack often tend, in my view, to focus too much on the positive aspects and potentials made possible by today’s approach to sound and music. While it is certainly true that recent innovations have opened up new potentials and produced exciting and original results, I think it is equally important to point out the possible drawbacks and risks of this change. If one wishes to refer to the quadrants of the SWOT analysis (Strengths, Weaknesses, Opportunities, Threats) that is typically employed to evaluate the pros and cons when a new project is being launched3, my impression is that all too often current scholarship has favoured highlighting the Strengths and Opportunities and has somewhat neglected to take into account the possible Weaknesses and Threats. This is what I discuss in the following pages. I trace a brief overview of the path that led to today’s integrated soundtrack; then I focus specifically on the stylistic changes of film music; finally, I offer some remarks on how the current approach can impact on the practitioners, on the viewers, and on film music as a musical repertoire.

1. A brief overview: From separatism to integration

Film Studies has typically been accompanied by a visual bias4. Images, the visual element, have long been considered the essence of cinema and the most salient agent in storytelling and meaning making, despite cinema being an audiovisual art5. Cinema has been audiovisual not only since the coming of synchronised sound in the years 1926–1929, but also during the period of so-called “silent cinema”, when the projection of the moving images was already complemented by all sorts of live sound addenda6. Only in the 1980s did a new “audiovisual paradigm” begin to substantially challenge the traditional “visual bias” and audiovisual “separatism”7 that had characterised film theory and analysis8. Scholars advocated not only a conception of cinema as audiovisual, but also a reciprocal integration between the sonic elements of the soundtrack.

This turn was arguably prompted by a series of factors. In the 1970s, technological innovations like the Dolby system allowed for more detailed recording and higher reproduction fidelity of the diverse sound components, thus paving the way for novel, more foregrounded, and more noticeable agencies for sound9. The 1978 Brighton conference of the FIAF (Fédération Internationale des Archives du Film) inaugurated an innovative approach to the study of the early-cinema repertoire and launched new archival research to reconstruct its aesthetics, poetics, and contexts of exhibition – including sound10. In the 1970s, the qualification “sound designer” was introduced, reportedly with Walter Murch’s work on Francis Ford Coppola’s Apocalypse Now (1979)11. Sound designers were “sound mavericks who have created imaginative, suggestive and evocative soundscapes by subverting Hollywood rules of postproduction, availing themselves of the plethora of expressive possibilities provided by an elaborate sound design”, to borrow the words of Danijela Kulezic-Wilson, a pioneering sound-design researcher12.

In the classical period there used to be a hierarchy of importance amongst the elements of the soundtrack, and the three components – sound effects, music, and dialogue – were handled in separate departments by different specialists: “Usually those working in one of these areas have limited or no direct interaction with those in the others, making it worthwhile to examine each independently”13. The sound-effects track was typically the third element of the soundtrack, outranked by music and dialogue. Monaural technology and analogue systems made it infeasible to handle too many tracks simultaneously in the final mix, and the intelligibility and clarity of dialogue were the primary preoccupations: “cinema is a vococentric or, more precisely, a verbocentric phenomenon”, says Michel Chion14. Consequently, sound engineers ensured that priority was given to dialogue. Second came the accompanying music, carefully composed so as not to mask dialogue. Finally came sound effects, employed as a generic backdrop to set the ambience – muffled conversations and clattering of silverware to set the aural context in a restaurant scene, for example – or introduced to mark some salient visual development – for example, a gun-cocking sound to dramatically highlight some villain pulling a gun on the protagonist: “Sandwiched between equally prolix doses of dialogue and music, noises then became unobtrusive and timid, tending much more toward stylized and coded sound effects than a really fleshed-out rendering of life”, according to Chion15. With the advent of the unifying role of the sound designer and the improved creative possibilities afforded by the Dolby process, it could be argued that in the 1970s music and sound effects had reached a position of parity and equal importance to dialogue, “permitting one to hear well-defined noises simultaneously with dialogue”16.

The transition to digital between the end of the 1990s and the first years of the 2000s brought in other technological novelties and further integration opportunities, in the form of digital surround-sound systems with increased detail rendering and multiple channels, which could provide an enhanced and larger spectrum for sound design, and in the form of DAWs (Digital Audio Workstations), through which post-production was now carried out with digital technology and computer software. The conversion of the industry to the digital workflow provided the crucial point of convergence that opened the doors to the contemporary integrated soundtrack, eliminating the distinct separation between those in charge of the music and those in charge of the sound effects. The editing of the images, the compositing of the special photography, the creation of the sound effects, the editing of the dialogue, the composition of the music, and the mixing of the audiovisual components are now all executed on digital workstations, which has made all the audiovisual elements converge and integrate as never before, breaking the past departmental and technical boundaries.

In the past, the division of roles and the lines of demarcation between the sound components were neater. Taking the example of two iconic films of the late 1970s and early 1980s, Star Wars (George Lucas, 1977) and Raiders of the Lost Ark (Steven Spielberg, 1981), we had the dialogue carrying the bulk of the storytelling; John Williams, in charge of the music, adding charm, gravitas, swashbuckling swagger or Biblical powers to Princess Leia, Darth Vader, Indiana Jones, and the Ark of the Covenant, respectively; Ben Burtt, in charge of sound effects, devising aural identifiers that have become as iconic as these films’ images, for example Darth Vader’s respirator, the Ewoks’ language, or the sound of a pit full of slithering snakes17. Today, everyone sits in front of a computer working with software that can handle virtually all these elements, and the distinction between the elements gets thinner: dialogue may lose signification and be treated as material noise; noise can become the carrier of meaning; music can become a “noise-like” background. More importantly, today sound effects have acquired an unprecedented salience in the mix: with the virtually endless mixing capability of audio software and the multiplication of channels and detail definition in reproduction systems, the handling of sound effects has produced architecturally complex results, bringing the surrounding and hyper-realistic aural “super-field” that Chion had discussed in the early 1990s to its full expression. Mark Kerins has rechristened this as the “ultrafield”18. Sound effects are now even given, in some cases, more importance than dialogue – as evidenced in the debate over the unintelligibility of dialogue in Christopher Nolan’s films19 – and are prioritised over the music in the mix – as can be seen in the difference between the first Star Wars trilogy and the subsequent ones20.

Coming to music, if it is problematic to pinpoint exactly what music is – “isn’t music part of sound design already? The answer, however, depends on which school of thought one might belong to”21 – it can be said that the tendency of today’s film music has been that of moving away from the Romantic dialect that characterised it in the classical period, as well as from melodic writing. In films like Dunkirk (Christopher Nolan, 2017), distinguishing between music and sound effects is difficult, and pointless: the two are designed as an integrated whole22. Today’s dominant style of film scoring can be characterised as a hybridisation of musique concrète, techno, and minimalism, a type of composite style that perfectly marries with the foregrounded status of sound effects: music has to interlock closely with the sound effects and it is typically expected to act for them as a bed and connecting tissue. Music itself becomes sound effects, in its preference for timbral atmospheres and electronic manipulations, and sound effects become music, in the complex layering and nuances and musicalised patterns that constitute today’s sound design. Kulezic-Wilson stated that “sound design is the new score”, and rightly so23. This tactile, immersive, corporeal dimension of sound connects particularly well with today’s strand of phenomenology-influenced film and media studies – exemplified by the work of Vivian Sobchack24, for example – whose interest lies in the sensuous, textural, physical, haptic qualities of the audiovisual, and in the phenomenology and the embodiment of the viewers’ experience. Sound is an especially rich terrain for these types of studies: think of the almost inescapable bodily reactions of recoiling and jumping on one’s seat – the “startle effect”25 – triggered by loud and sudden noises such as the “stingers” used in horror films, or the pounding visceral impact of the low-frequency effects produced by subwoofers.

The advantages of the newly achieved “integrated soundtrack” appear manifold. Practitioners now have the full gamut of the soundtrack’s components at their disposal, and they can blend and cross and trespass from one to the other – music as sound and sound as music – more freely and creatively than the previous technical limitations and departmental segregation would allow. Viewers can now be involved not only cognitively (comprehension of the storytelling and interpretation of the meanings) and emotively (empathetic investment in the characters and situations), but also bodily: the centrality of today’s sound makes them connect to the film not only mentally, but physically too. For phenomenology and embodiment-theory scholars, the integrated soundtrack and its sensuous qualities are a fecund research field, a sort of practical application of the theories. The move to a “sound-design style” for film scoring has introduced fresh solutions to the narrative and formal demands of contemporary cinema – from the fragmentary form of “puzzle films” such as Memento (Christopher Nolan, 2000), to the post-modern hyperreality of the Matrix trilogy (The Wachowski Brothers, 1999-2003), to the break-neck action-ridden franchises like 2 Fast 2 Furious (John Singleton, 2003) – demands that could no longer be convincingly satisfied by the traditional discursive and melody-based approaches to film scoring.

The use of DAWs and digital technologies has also made scoring and film-making in general more accessible, both because the technical implements are now more affordable than the previous analogue technology, and because software makes it easier and swifter for virtually anyone to work on sound and video. This user-friendliness of digital technologies is also congenial to the contemporary “prosumer” society, where everyone can be at the same time a consumer and a producer of cultural goods, and this can be seen as a democratisation of art and creativity: for example, with DAWs and MIDI virtually anyone with software and a creative streak can compose film music or design soundscapes26. I am sure that more advantages, more Strengths and Opportunities, could be listed. Alongside these, though, also lie potential Weaknesses and Threats.

2. Weaknesses and threats: For film composers, viewers, and film music

From the year 2000 we could identify a new period in mainstream film-music historiography. If 1978 marked the beginning of a period that could be called “Eclectic Style” – a cross-pollination and fusion of diverse idioms and musical means: symphonicism, jazz, rock, world music, minimalism…; acoustic orchestras, synthesisers, smaller ensembles… – the current post-2000 period could be called “Sound Design Style”27. In the 1990s, elements of pop, electronic, and minimalist music were already superseding the traditional symphonic scoring, and increasingly significant and wide-spread signs of a technological and stylistic change could be detected around the year 200028. If under the “Eclectic Style” an equilibrium seemed to have been reached between the three components of the soundtrack, in the post-2000 years, as a consequence of the digital convergence of the film-making processes that I have already mentioned, the equilibrium has tended to break down, with “music” in the traditional sense slipping down in importance to the lowest position29.

Laurent Jullier has given the name “film-concert” to a predominant characteristic of contemporary post-modern cinema, which consists in “the prevailing of the sound dimension over the visual one: the sound track embraces the viewer and occupies the frequency spectrum almost entirely; coming out from loudspeakers, the sound track plunges the audience into a sound atmosphere from which it is impossible to escape”30. Yet, the role of music in today’s “film-concert” is not dominant. It is one thing to integrate music with sound design – as happens in the David Lynch and Angelo Badalamenti collaborations – but it is something else to reduce music to a binding substrate. Film composers, of course, have always had to take into account the other components of the soundtrack and in some sense “fight” for some moments of foregrounding: for instance, sound effects in battle scenes have long caused audibility issues for music31. However, the competition has increased exponentially as sound effects have been given priority, often resulting in over-saturated soundtracks. This might be to the detriment not only of music but of dialogue too: Richard Welsh of the Society of Motion Picture & Television Engineers commented on the increasingly frequent instances of film dialogue being drowned out because “with action movies, the explosions and sword fights have been made very loud as a creative decision”32. Such creative decisions clearly privilege sound effects even over dialogue clarity, as decried by some commentators in Christopher Nolan’s Tenet (2020)33.

Over-saturation is a risk of the multi-layering opportunity provided by digital technologies: “The ability of DAWs to handle large numbers of audio tracks has unsurprisingly resulted in some overly busy and loud soundtracks. As sound designer and editor Glenn Morgan […] explains, ‘That’s the worst disease that a lot of editors and designers have, they just over cut and throw too many colors on the canvas, and then it all turns brown’”, and another sound designer admitted that “movies are becoming sonically inappropriately loud”34. Within such an aggressive and thick wall of sound, music had to find a style that was compatible. The leading name has been that of Hans Zimmer – a frequent Nolan collaborator – a “maximal minimalist”35 composer who has devised a musical style suitable for today’s cinema:

Zimmer’s musical vocabulary is limited and mostly quite conventional; he seemingly has little interest in harmony outside the tonal Common Practice, and certainly not with genuine avant-garde or modernist idioms. His cues are often thickly scored, but without being finely wrought contrapuntally. Countermelodies are fleeting, genuine independence of lines is rare (but not nonexistent), and often the middle range of the orchestra is treated purely as a vehicle for static sustained chord tones (if it is filled at all). […] The composer’s predilection for digital augmentation can yield a distinctly overproduced sound, where every detail is manipulated somehow and the individuality of component parts is sacrificed for a holistic impression of busy loudness36.

Even if Zimmer is also capable of delivering scores that are more melodic and “romantic” – like As Good As It Gets (James L. Brooks, 1997) or The Holiday (Nancy Meyers, 2006) – the Zimmer idiom that has largely influenced today’s industry is a sort of fusion of commercial techno, minimalism, and bruitisme: this music is “felt” more than listened to; it has a strong, visceral, and immediate impact on the listener rather than discursive development or melodic, contrapuntal, and harmonic subtleties. It does not require aural space to express itself and can thus blend very effectively with the sound design of contemporary cinema. Zimmer has offered a stylistic solution for today’s music to cope with the thickness of sound design, but the solution seems to have become an imposition to some extent: “Zimmer’s commercial influence, which is enabled by a literal company of like-minded and well-networked composers, cannot help but institute a degree of uniformity of sound and style”37. Standardization is as old as Hollywood, but versatility, which was one of the characteristics of the top old Hollywood composers, seems now to be disappearing to make way for “a slick but somewhat anonymous style that fits seamlessly with risk-averse corporate film-making”38. The film-scoring style of the Zimmer school, which I call “sound design style”, is a consequence of the technology-determined formulas of today’s Hollywood. With computer technology, the process of music production has considerably quickened. On the one hand, this has made composition easier. As previously mentioned, this makes music-making more affordable: “The studio fits on a laptop now”39, and the process is also made more inclusive and democratic, as practitioners who would not be able to compose in the traditional “notational” way can produce music by combining and trying out sounds in the DAWs.
The threat of this is that the technological competence of a composer can become more important than their musical competence: today’s digital routines force film composers, if they want to survive, to be proficient with computer technologies first of all: “Music notation skills and traditional orchestration, while extraordinarily useful, are no longer required”40. Sometimes, moreover, too much technological help is liable to weaken creativity: “you can create something quite easily, quite good, quite quickly. Whereas before there would be more obstacles in the way of that creation… I think sometimes those obstacles made for more substantive [work] – or a learning curve that means that artists, when they got to that point were more ready”41.

Another threat of the new technologies is that producers and executives have immediately exploited these time-saving qualities to further compress the time they grant composers to deliver the music and the sound design42. Technology influences music composition not only directly but also indirectly. Films are now edited digitally, in a non-linear workflow. In the old days a film’s work-print was assembled on a flatbed editor, cutting and splicing the physical pieces of positive filmstrip struck from the camera negatives. The edit thus obtained, which was “spotted” by the composer to prepare a cue sheet of the scenes and sequences to be scored, was a fairly stable version of the film: further corrections and last-minute changes would have been quite expensive and were done only if necessary. With the use of editing software, today the director and producers can tinker with the film edit until the very last minute, and a film is never really locked until distribution copies are prepared. This means that the related film music is likely to be subjected to repeated rewrites, lengthening, shortening, and trimming – this happened, for example, with Star Wars – Episode IX: The Rise of Skywalker (J. J. Abrams, 2019), whose final recording session for the music took place a mere three weeks before the film’s theatrical release43.

This fluid nature of the final cut cannot but impact on the style of the music. If a musical cue written in an old-fashioned way – eight-bar melodies, phrasing, a developmental discourse, inner voicing, counterpoint, etc. – has to be modified because the film cut has changed, this may not be an easy task. A proper rewrite might take more time than allowed, and thus be impossible. Consequently, it might be necessary to modify the music on the fly – on the recording stage – by bluntly dropping some bars in the score to adjust it to the new configuration of the cut. Or, if already recorded, the music editor would cut and paste the required segments to adjust the musical cue. In both instances, in the presence of a melodic phrasing and developmental discourse, such cuts would be noticeable “ear sores” in their brusque interruption or corruption of the musical flow: for example, an eight-bar melody might not complete its course and get truncated. Edits on the music track to fit the music more closely to the film have always happened but, given the now decidedly more unstable and unpredictable state of the final cut, it is by far more probable that the musical structure might be repeatedly tweaked until the last minute. Pre-emptively, music thus tends to be simplified in its construction, built in short self-contained blocks or thought of as musical colour more than a musical discourse – a soundscape44. This way, music is easily modifiable, either by the composer or the music editor, who can cut, shorten, or stretch it without the type of harsh aural rifts that such interventions would create on some more developmental composition.

Given the workflow of today’s film industry, one threat posed by the “sound-design style” is that it seems to dictate the musical approach that a composer has to take: this is the type of music that can survive and integrate into the contemporary soundtrack. Film composers, undoubtedly, have never had the “artistic freedom” of concert-music composers: they have always had to bend their personal idiom to the film’s demands and, first of all, serve the film rather than their artistic whims and desiderata. Yet, the current technology and the resulting saturated sound mix, the potentially ever-changing editing of the film, and the tighter deadlines promoted by computer-based composition and film editing, may have a particularly coercive impact on the music composition. A flowing melody, a leitmotivic network, a theme-and-variations approach would hardly find aural space within the thick layers of sound effects, and would prove too inflexible for the ever-changing final cut. The consequence is that composers who are trained in and willing to employ more traditional compositional strategies might resign themselves to just following the general trend.

Viewers might also not be as satisfied as most think they are with the creative choices that are trendy in today’s sound-design practice. I have already mentioned some criticism about how loud and saturated sound design sometimes interferes with dialogue intelligibility. This is an indicator of today’s preference on the film-makers’ part for conceptualising the film as a physical experience over a conception of cinema as something engaging on a more intellectual level. By intellectual I do not mean to refer to the intellectual “arty” film-making of, say, Jean-Luc Godard: I mean those films whose main concern is the conveying of storytelling that engages the viewers’ intellectual, cognitive activity – comprehending the plot and interpreting the meanings and connotations – as opposed to films whose priority is to engage viewers physically – the so-called “4D” presentations are an extreme example. The risk of this preference for the spectacle over storytelling is that these films are designed, visually and aurally, to be experienced in a prototypical film theatre with state-of-the-art audiovisual technologies: for example, in order to pound on the viewers’ bellies and obtain the immersive effect that was intended at the mixing console, the bass in the sound design has to be emitted by an appropriate subwoofer and a calibrated surround system must be in place. However, few theatres satisfy such requirements – “even in movie theaters, it is difficult to find a sound system set up properly”45 – and more and more often people are watching films on their TV screens or even on their tablets and smartphones:

If the growth of secondary distribution channels has sparked innovations in film exhibition, it has also resulted in an audience less attuned to sound quality. […] Media is increasingly played back through television or computer speakers, over headphones, or even through tiny smart phone or tablet speakers – all systems lacking the multichannel, frequency response, and dynamics capabilities of the theatrical systems for which movies are mixed46.

As it is highly influenced by technology, the current sound-design style is equally highly dependent on technology: the intended effects can be fully appreciated only with the appropriate reproduction technology and within the appropriate viewing setting. Increasingly fewer people attend film theatres, and not everyone possesses their own home-theatre system, which would not guarantee the correct reproduction anyway:

Even many of the true multichannel systems installed in homes are not set up correctly due to aesthetic concerns, less-than-ideal room configurations, wiring issues, technical competence, and lifestyle factors (e.g., parents moving speakers out of young children’s reach). […] Given that reality, it is unlikely that more than a small percentage of even those homes with a home theater system can correctly reproduce a DSS soundtrack. Moreover, no home system can truly reproduce a soundtrack as it sounded in the theater for the same reasons that mixes done in editing suites rarely sound right: differences in room size, organization, and acoustics mean that mixes designed in and for a theater space will not sound right in a living room–sized space, and vice-versa47.

More and more often, films are being watched on lo-fi devices and in conditions that are far away from the ideal immersive experience, and yet “while theatrical box office might be a diminishing component of a film’s overall revenue, filmmakers still mix for theatrical release”48.

The comprehension of the storytelling of those films that seek to engage viewers intellectually is less impaired by subpar watching conditions than the physical experiencing of the films seeking audiovisual spectacle. In the smartphone-on-the-subway viewing situation, one can still follow and enjoy the dialogue of a Nora Ephron film, or hear and enjoy a melodic line in the film music – the discursive qualities of sound – while the intricate sound design of Dunkirk or Tenet – the haptic and sensual qualities of sound – is mostly lost. Dialogue and “traditional” music have a gestalt of meaning that can be extracted and that remains basically intact under different experiential conditions because, to use the old communication theories, the message can be extracted notwithstanding some disturbance in the channel. With contemporary sound design, the channel is as important as, if not more important than, the message, and if the channel is disturbed, then very little remains to be extracted.

Ironically, in one of the founding texts of the audiovisual paradigm, Rick Altman lamented that the “text-oriented era” of film studies posited an abstract, bodiless viewer, more a theoretical appendix of the text than a real person in real watching contexts and situations49. In advocating a heightened attention to the physical qualities of the audiovisual experience – replacing “cinema as text” with “cinema as event”50 – Altman anticipated the interest in the embodiment of the film-viewing experience that characterises much contemporary phenomenology-oriented film scholarship. Contemporary sound design, with its attention to eliciting corporeal sensations, is seen as the practical incarnation of the embodiment theories, far removed from the abstract, brain-without-a-body viewer of the textual approaches. What goes unnoticed, however, is that contemporary sound design too seems to posit a generic and abstract viewer and viewing situation – the fully absorbed viewer in the state-of-the-art film theatre – and thus replicates the fallacious abstractness of the viewer posited by the textual approaches. When sound designers or embodiment-theory scholars describe the haptic experiential features of today’s films, they seem to think of the fully absorbed viewer in the state-of-the-art film theatre, not of the casual smartphone-on-the-subway viewer. From this perspective, the immersive cinema celebrated by sound designers and scholars looks a bit like a delusional theoretical construct, with few actual occasions to be experienced as it was designed to be.

The risk is that the film industry might provide audiences with products that are actually less multiplatform than the current media convergence would require – products that can be fully appreciated only in the most expensive viewing contexts, the best film theatres with the most costly admission tickets, while most people will not be able to enjoy the film to its full extent, or will even be penalised by watching a film that sounds “handicapped” in their cheaper, low-res viewing conditions. In rebuttal of the praise of the current sound-design style as more inclusive, we might say that, from the consumer’s perspective, it might actually prove to be quite elitist.

Finally, I see a potential threat also for film music itself, as a musical repertoire beyond the films. I take this to be an important point, given how long it has taken for film music to acquire credibility as a type of applied music that can also have a reputable life and stand-alone quality beyond the films51. The film-music repertoire has finally entered concert programmes ever more firmly, not only in so-called “Pops concerts” but also in regular symphonic seasons – for example, the Los Angeles Philharmonic, the Philadelphia Orchestra, the San Francisco Symphony – and with orchestras that once would not dignify film music with any type of attention – recent examples are the Berliner Philharmoniker and the Wiener Philharmoniker. It is now widely acknowledged that film music can be considered “the popular interface with the world of orchestras. It’s kind of like opera was at the beginning of the seventeenth century, or like ballet scores during the nineteenth century”, in the words of conductor Keith Lockhart52. Even in academic circles, the conundrum over the legitimacy of separating music specifically written as a complement for a film from the film itself53 has found some solution in the reception mode called “cinematic listening”54. In cinematic listening, the listeners mentally summon some visual accompaniment to a music piece, be it one randomly evoked in a listener unfamiliar with the parent film, or one that calls to mind the atmospheres and visuals of the well-known film. Frank Lehman has recently defended the legitimacy of “film music as concert music” employing this theoretical framework55.

Besides “film music as concert music”, another very topical and increasingly popular mode of concert presentation of the film-music repertoire is the “multimedia film” or “cine-concert”56. The number of these events is skyrocketing: “As of July of 2019, at least ninety films have been concertized for live performance, with the greatest number yet (eighteen) premiering in 2017”57. Here, film music is presented with the film it was created for: an orchestra plays the film score live as the film is projected, a way to recover the music/visual synchronisation specific to film music. A first advantage of these screenings is the special foregrounding given to the music, which allows the audience to better appreciate its nuances: “In the tracks on the old movies the orchestra is so muddy and it is so hard to hear it. When you look at the score of The Wizard of Oz, for instance, the orchestration is ingenious, but you could never tell that from listening to the low-res sound of the old soundtrack”58. More importantly, it is the live performance that gives these shows their particular allure. Unless the projector breaks down, everything in a film screening is predetermined and nothing can go wrong; with a live musical performance, an element of unpredictability, of human fallibility, and of highly trained skill facing demanding performance circumstances – “many of these film scores were never intended to be performed in one sitting. They are potentially as taxing as the operas of Richard Wagner”59 – adds a sense of excitement: a demanding performance and a difficult audiovisual synchronisation unfold before one’s eyes. Multimedia films are also becoming more and more numerous because they are quite profitable for orchestras,60 and they have become “a potent tool in cultivating new concertgoers”61.

The film music in a sound-design style poses some threats to this newly acquired acceptance of the film-music repertoire. Firstly, it is questionable whether the best examples of the recent musique-concrète sound-design scores, consisting of a layering of diverse sound effects, can be considered the “popular interface with the world of orchestras”. This type of music is more suited to electronic-music events, and one of its main aims is precisely to go beyond acoustic orchestras. Also, it seems more difficult to apply the narrative-seeking modality of cinematic listening to this sound-design music. Cinematic listening tends to require music with a stronger formal orientation that can suggest some programme and storytelling developments within its course. Like avant-garde music, sound-design film scores consist principally of the secondary, statistical parameters, to use Leonard B. Meyer’s categories, while the primary, syntactical parameters are virtually absent, which makes it harder for listeners to orient themselves within the musical development62. From this type of contemporary composition practice, we can expect few if any contributions to the newly-born concert repertoire of film music.

The same can be said of the potential new entries in the catalogue of multimedia films. Two conditions are singled out for a film score to be suitable for a live-to-film presentation. One is “a film’s ability to attract orchestras, presenters, and audiences. The result is that several blockbusters have received cine-concert treatment”63. This poses no problems, as many of today’s blockbuster films have a score written in the sound-design style. Yet the other condition is the “appeal as a symphonic work. [The producers of cine-concerts] strive to ensure that the music is as compelling for world-class musicians to perform as it is for audiences to hear”64. This is hardly the case with the sound-design style, which specifically avoids traditional symphonism in favour of a blend of acoustic orchestra and electronics and which, more importantly, in most cases is not even notated in the traditional way. Much of today’s film music is composed not on a score but in the form of sound files, so, should it be performed live, the performance materials for the orchestra would not even exist, and substantial additional work would be required to turn the sound-design score into a playable symphonic score. For example, when Gladiator (Ridley Scott, 2000) was selected as one of the films to be presented with live musical accompaniment, its music had to be substantially adapted for the live symphony orchestra: “[The arranger] spent weeks transcribing the multiple layers of percussion because so little of it had actually been written down”65.

Even if some score is available or is created for the live event, the integrated nature of the sound-design scores – planned to interact and fuse with the sound effects and overall sound design of the film – tends to make the soundscape-like music unremarkable and/or unsuitable from a concert-performance perspective: “There are difficulties because of the inadequacy of the material. Occasionally you see things […] that are simply unplayable. I mean, it might be assemblable in a studio environment where you can patch soundtrack cues together from many takes into one. Or they’re demanding just because they are exhaustively repetitive”, comments Keith Lockhart66. Suppose the solution were not to transform the sound-design score into a spurious symphonic transcription but to retain its nature as a musical event rather than a musical text, accompanying the film live with DAWs and a moment-by-moment mix of pre-recorded elements – as in a live electronic-music performance, or as in “The World of Hans Zimmer” concert tour. In this hypothesis, an electronic score performed live by a sound designer through a venue’s sound system would not be much different from watching a film with its synchronised soundtrack heard through the loudspeakers of a film theatre. The weight of the human live-performance ingredient would be significantly diminished compared to that of a ninety-strong acoustic orchestra keeping in sync with the film projection: “The most fascinating aspect of the cine-concert is the human-technology interface demanded by the amalgamation of live and recorded media”67, and this aspect would not be present to the same extent in the case of a live/pre-recorded sound-design score.
For those also interested in the extrafilmic life of film music, at a time when film music has finally succeeded in gaining this status, the fact that contemporary film music is not likely to provide suitable new entries for the “film music as concert music” repertoire can be seen as a non-negligible threat.


In this article I have given an account not only of the novel opportunities and strengths of the sound-design style that characterises contemporary mainstream cinema, but also of its potential weaknesses and threats: for film-music practitioners (a technology-imposed standardisation of style and practices), for film viewers (final mixes that pursue an idealised viewing/listening situation that rarely happens in real life), and for the repertoire of film music itself (now that film music has finally been admitted into symphonic concert programmes, the risk is that fewer new films will be released that possess both the blockbuster status able to draw audiences to film concerts and a score suitable for such symphonic presentations).

As a final remark, it seems to me that one reason for the one-sided accounts of the new sound-design style might lie in a certain politicisation of the debate, as has happened to some art manifestations of the past under the current trend of “Cancel Culture”. Matt Sakakeeny has written that “music is an idea, not just a form, and like any other idea, music is a problem”68. If something is a problem, it can be either discussed or, more conveniently, cancelled. Instead of looking more closely at the multiple facets of the object under discussion, the tendency seems to be one of radical polarisation: all that is contemporary is good and progressive, all that is from the past is bad and regressive. For example, in an article titled “An Ode to Incoherent Canons”, the authors state:

In this collaborative response, we write and perform against the hegemony of academic canons. We draw on embodied, performative, and poetic means to interrogate canonical prejudice, canonical domination, and canonical exception as they emerge in and are adjacent to our own academic careers. In so doing, we seek to disrupt normative modes of knowledge production and argue for a futurity that is incoherent69.

Note the reference to “embodiment” in the quoted text, which we have seen to be a keyword in the aesthetics of the sound-design scores. Surely there has been a strong element of arbitrariness, white-centrism, and Euro-American focus in the Western canons, but I think there might be more articulated approaches than getting rid of all canons and systematisations and simply advocating incoherence. Western “classical” music is one of those artefacts relying on canons. Symphonic music is characterised by a strong canonisation of its works, and in our times this and other aspects associated with “music” in the traditional sense can lead to its being identified as something politically incorrect and conservative: “Sound studies also came of age after relativism, multiculturalism, and popular culture studies had begun to dismantle the canons and hierarchies that music studies had helped construct”70. In an age of post-colonial studies, “music” and its elitist canon may be seen as representatives of the oppressive colonial past: “within the geopolitics of capitalist empire, music studies provided the basis for Eurocentric claims of cultural superiority, while music of others from elsewhere was evaluated in negative relation, as a foil for the Enlightenment project and as fodder for rationalizing colonization”71.

Traditionally notated music also has a mathematical aspect to it, and a scientific type of rigorous systematisation: “music was also integral to the systematization of science: sonically, pitches were identified, standardized, and differentiated, while textually they were named, assigned visual symbols, and inscribed in a graphical system of notation”72. Another trait of our current culture is a distrust of science, if not outright anti-scientific ideologies – for example, the reasoning might go like this: if racism stems from racial categorisation, then science is to blame for racism, because science introduced such categorisations. The consideration of “music” might thus also suffer from its association with scientific methods. So, if the traditional notation-based Western musical system represents colonialist oppression, scientific normativeness and categorisation, and elitist privilege, then the response is to replace “music” with the less connoted and more innocent term “sound”, and to embrace the more incoherent “noises” in lieu of the scientifically systematised music theory. David Novak explains that “noise is typically separated from music on the grounds of aesthetic value. Music is constituted by beautiful, desirable sounds, and noise is composed of sounds that are unintentional and unwanted”73. Noise and sound, the “undesirable”, can take the place opposite to “music”, representing a sort of counterculture:

Noise retained its status as a marker of difference in postcolonial, multicultural, and cosmopolitan societies, it also became a powerful term of cultural agency. In contemporary projects of resistance, noise is the “voice” of subaltern identity on the margins, where “bringing the noise” is not accidental but an expressive practice and a deliberate act of subversion. […] Noise is a powerful antisubject of culture, raising essential questions about the staging of human expression, socialization, individual subjectivity, and political control74.

Similarly, traditional symphonic film music may be seen as old and regressive, and hence the new noise/sound-based soundtracks constitute the progressive counteraction to embrace and advocate. Some of the language used to discuss the new sound design vis-à-vis the traditional music bears traces of passionate activism: “All these soundtracks are as much statements of refusal to comply with the conventions of manipulative scoring as they are active agents in forming interrogative forms and challenging the perceiver to accept a contract of engagement rather than one of passive absorption”75. Old scoring is “manipulative” and based on “passive absorption”, while the new style constitutes the “statements of refusal” of “active agents”. Another example sides sympathetically with noises as one would side with politically oppressed minorities: “Noises, those humble footsoldiers, have remained the outcasts of theory, having been assigned a purely utilitarian and figurative value and consequently neglected”76. There is an “emancipation of the sound effects”77. I simply think we should be wary of polarisations, of discarding symphonic film music as necessarily passé and regressive while undiscerningly lauding contemporary sound design as self-evidently exciting and progressive.

Some might object that today’s situation is nothing new, because updates of the musical style have happened before. Today’s film music has injected novelties that make it more attuned to the times and type of film-making, something that had already happened with the “Modern Style” of the 1960s, when “traditional” symphonic film music had been ousted by more up-to-date idioms. I do not think it is the same situation, though. After the “pop craze” of the 1960s – for example, the compilation score of The Graduate (Mike Nichols, 1967) or the “pop scoring” of Shaft (Gordon Parks, 1971)78 – traditional symphonism made a comeback because pop scoring and compilations eventually proved less effective than symphonic scoring for film storytelling – a song, with its lyrics and repeats, is less flexible than instrumental music, for example. Traditional symphonic scoring was more malleable, more “film-friendly”. Sound-design scores, unlike those of the “Modern Style”, are film-friendly and malleable – even more malleable thanks to today’s technology – and they are the perfect counterpart for films that emphasise immersive sensorial spectacle.

A return to more traditional “music” and symphonism, as happened in the 1970s, seems much less probable under the technology and poetics of today’s film-making. Some might see this as something positive: getting rid of the past symphonism, with all its clichés and regressive connotations, and making way for the new sound design, a more egalitarian condition in which the previous classifications that favoured “music” over “noise” are dissolved79. Yet, under the pre-2000 “Eclectic Style” there had already been innovations and a wider variety of idiomatic options, not only symphonism: synths, pop, world music, jazz, blues, atonality, minimalism, noise music, a more creative role for sound designers… In those years, John Williams coexisted with Vangelis and Quincy Jones and Dave Grusin and Ennio Morricone and John Barry, just to name a few. The threat of today’s sound-design style is that of imposing a single uniform style and of enforcing conformity, with the risk of narrowing the idiomatic diversity and, in the long run, of discouraging creativity.


1 James Buhler and Hannah Lewis (eds.), Voicing the Cinema: Film Music and the Integrated Soundtrack, Chicago, University of Illinois Press, 2020, p. 1.

2 Danijela Kulezic-Wilson, “Sound Design and Its Interactions with Music: Changing Historical Perspectives”, in Miguel Mera, Ronald Sadoff, and Ben Winters (eds.), The Routledge Companion to Screen Music and Sound, New York, Routledge, 2017, p. 3.

3 Marilyn M. Helms and Judy Nixon, “Exploring SWOT analysis – where are we now? A review of academic research from the last decade”, in Journal of Strategy and Management, vol. 3, no 3, 2010, p. 215-251.

4 Kathryn Kalinak, Settling the Score: Music and the Classical Hollywood Film, Madison, WI, University of Wisconsin Press, 1992, p. 20-39.

5 On this, see Michel Chion, Film: A Sound Art, trans. Claudia Gorbman, New York, Columbia University Press, 2009.

6 The sonic aspects of early cinema, added live to the film screenings, are thoroughly investigated in Rick Altman, Silent Film Sound, New York, Columbia University Press, 2004.

7 Emilio Audissino, Film/Music Analysis. A Film Studies Approach, Basingstoke, Palgrave Macmillan, 2017, p. 45-63.

8 Some of the early texts of the audiovisual paradigm are Rick Altman (ed.), Yale French Studies, no 60, 1980 ; Michel Chion, Audio-Vision. Sound on Screen [1990], trans. Claudia Gorbman, New York, Columbia University Press, 1994 ; Rick Altman (ed.), Sound Theory. Sound Practice, London/New York, Routledge, 1992 ; Elisabeth Weis and John Belton (eds.), Film Sound: Theory and Practice, New York, Columbia University Press, 1985.

9 This is thoroughly investigated in Jay Beck, Designing Sound. Audiovisual Aesthetics in 1970s American Cinema, New Brunswick, NJ/London, Rutgers University Press, 2016.

10 Eileen Bowser, “The Brighton project: An introduction”, in Quarterly Review of Film Studies, vol. 4, no 4, 1979, p. 509-538.

11 Liz Greene and Danijela Kulezic-Wilson, “Introduction”, in Liz Greene and Danijela Kulezic-Wilson (eds.), The Palgrave Handbook of Sound Design and Music in Screen Media. Integrated Soundtracks, London, Palgrave Macmillan, 2016, p. 3.

12 Danijela Kulezic-Wilson, “Sound Design is the New Score”, in Music, Sound, and the Moving Image, vol. 2 no 2, 2008, p. 127.

13 Mark Kerins, “The Modern Entertainment Marketplace, 2000-Present”, in Kathryn Kalinak (ed.), Sound. Dialogue, Music, and Effects, New Brunswick, NJ, Rutgers University Press, 2015, p. 134.

14 Chion, Audio-Vision, op. cit., p. 5.

15 Ibid., p. 146.

16 Ibid., p. 147.

17 On Burtt’s work on Star Wars, see Gianluca Sergi, “Tales of the Silent Blast: Star Wars and Sound”, in Journal of Popular Film and Television, vol. 26, no 1, 1998, p. 12-22.

18 Chion, Audio-Vision, op. cit., p. 73. Mark Kerins, Beyond Dolby (Stereo): Cinema in the Digital Sound Age, Bloomington, IN, Indiana University Press, 2011, p. 92. Contemporary sound systems are also discussed in Miguel Mera, “Towards 3-D Sound: Spatial Presence and the Space Vacuum”, in Liz Greene and Danijela Kulezic-Wilson (eds.), The Palgrave Handbook of Sound Design and Music in Screen Media, London, Palgrave Macmillan, 2016, p. 91-111.

19 Danijela Kulezic-Wilson, “Musically Conceived Sound Design, Musicalization of Speech and the Breakdown of Film Soundtrack Hierarchy”, in Greene and Kulezic-Wilson (eds.), The Palgrave Handbook of Sound Design and Music in Screen Media, op. cit., p. 429.

20 See Emilio Audissino, The Film Music of John Williams. Reviving Hollywood’s Classical Style, Madison, WI, University of Wisconsin Press, 2021, p. 249-254.

21 Danijela Kulezic-Wilson, “Sound Design and Its Interactions with Music”, op. cit., p. 128.

22 On Dunkirk, see Emmanuelle Bobée, “La partition sonore et musicale de Dunkerque (C. Nolan, 2017). Une expérience sensorielle inédite”, in Chloé Huvet (ed.), Création musicale et sonore dans les blockbusters de Remote Control. Revue musicale OICRM, vol. 5, no 2, November 2018, p. 125-148, https://revuemusicaleoicrm.org/rmo-vol5-n2/dunkerque/. This erosion of categories, aesthetically, is not entirely new: past examples in which sounds were used in a musical way are Ballet mécanique (Dudley Murphy, Fernand Léger, 1924), Forbidden Planet (Fred M. Wilcox, 1956), Planet of the Apes (Franklin J. Schaffner, 1968), The Birds (Alfred Hitchcock, 1963), or the novachords and theremins employed in Sci-Fi scoring to add otherworldly sonorities, just to cite a few instances. There was also an increased use of synthesisers in the 1980s as a stylistic alternative to the acoustic orchestra, and considering “music” and “sound effects” on the same level as interchangeable ingredients of film scoring became more customary. The difference is that what once used to be an exception or a stylistic option amongst others now seems to have become the industry’s almost inevitable norm.

23 Danijela Kulezic-Wilson, Sound Design is the New Score: Theory, Aesthetics, and Erotics of the Integrated Soundtrack, Oxford/New York: Oxford University Press, 2019.

24 Vivian Sobchack, The Address of the Eye: A Phenomenology of the Film Experience, Princeton, NJ, Princeton University Press, 1992, and Id. Carnal Thoughts: Embodiment and Moving Image Culture, Berkeley/Los Angeles, University of California Press, 2004.

25 Robert Baird, “The Startle Effect: Implications for Spectator Cognition and Media Theory”, in Film Quarterly, vol. 53, no 3, 2000, p. 12-24.

26 Kerins, “The Modern Entertainment Marketplace, 2000-Present”, op. cit., p. 142.

27 Both terms are used in Emilio Audissino, “John Williams and Contemporary Film Music”, in Lindsay Coleman and Joakim Tillman (eds.), Contemporary Film Music. Investigating Cinema Narratives and Composition, Basingstoke, Palgrave Macmillan, 2017, p. 221-236. “Eclectic Style” is used after James Wierzbicki’s “Eclecticism” – James Wierzbicki, Film Music: A History, New York, Routledge, 2009, p. 209-227. “Sound-Design Style” is after Danijela Kulezic-Wilson’s seminal article “Sound Design is the New Score”, op. cit.

28 2000 is also chosen as a landmark in James Buhler and David Neumeyer, Hearing the Movies. Music and Sound in Film History, Oxford/New York, Oxford University Press, 2015, 2nd ed., chap. 13 and 14. For one definition of “traditional” symphonic scoring, see Audissino, The Film Music of John Williams, op. cit., p. 37-54.

29 Though it might seem intuitively straightforward, the definition of what music is is a complex one: see Matt Sakakeeny, “Music”, in David Novak and Matt Sakakeeny (eds.), Keywords in Sound, Durham, NC/London, Duke University Press, 2015, p. 112-124.

30 Laurent Jullier, L’écran post-moderne : un cinéma de l’allusion et du feu d’artifice, Paris, L’Harmattan, 1997, trans. by Carla Capetta as Il cinema postmoderno, Turin, Kaplan, 2006, p. 37. The English translation is mine.

31 See Miklós Rózsa quoted in Roger Manvell, John Huntley, The Technique of Film Music, New York, Hastings House, 1975, 2nd ed., p. 230.

32 Quoted in Michael Hsu, “Movie Dialogue Too Quiet? Tips for Better Clarity”, in The Wall Street Journal, 24 July 2015, http://www.wsj.com/articles/movie-dialogue-too-quiet-tips-for-better-clarity-1437743779#, consulted in May 2021.

33 Leo Barraclough, “Christopher Nolan’s Use of Sound on ‘Tenet’ Infuriates Some, Inspires Others”, in Variety, 2 September 2020, https://variety.com/2020/artisans/news/christopher-nolan-tenet-sound-problems-audio-mix-1234755898/, consulted in May 2021. The same controversy was raised with Interstellar, see Chloé Huvet, “Interstellar de Hans Zimmer : plongée musicale au cœur des drames humains, par-delà l’infiniment grand. Pour une autre approche de l’esthétique zimmérienne”, in Chloé Huvet (ed.), Création musicale et sonore dans les blockbusters de Remote Control. Revue musicale OICRM, vol. 5, no 2, November 2018, p. 103-124, https://revuemusicaleoicrm.org/rmo-vol5-n2/interstellar/, consulted in May 2021.

34 Kerins, “The Modern Entertainment Marketplace, 2000-Present”, op. cit., p. 135-136. One could also note, in our times of heightened attention to diversity and inclusiveness, the disregard that the gratuitously loud sound mixes of mainstream cinema show for individuals on the autism spectrum, at a time when even sensory-friendly concerts are being offered to cater to this segment of the audience too.

35 Frank Lehman, “Manufacturing the Epic Score: Hans Zimmer and the Sounds of Significance”, in Stephen C. Meyer (ed.), Music in Epic Film: Listening to Spectacle, New York, Routledge, 2016, p. 27-55. On Zimmer, see also Chloé Huvet, “Interstellar de Hans Zimmer : plongée musicale au cœur des drames humains, par-delà l’infiniment grand. Pour une autre approche de l’esthétique zimmérienne”, in Chloé Huvet (ed.), Création musicale et sonore dans les blockbusters de Remote Control. Revue musicale OICRM, vol. 5, no 2, November 2018, p. 103-124, https://revuemusicaleoicrm.org/rmo-vol5-n2/interstellar/, consulted in May 2021.

36 Lehman, “Manufacturing the Epic Score”, op. cit., p. 32.

37 Ibid., p. 33.

38 Buhler and Neumeyer, Hearing the Movies, op. cit., p. 467.

39 Rosamund Davies and Gauti Sighthorsson, Introducing the Creative Industries. From Theory to Practice, Los Angeles and London, Sage, 2013, p. 148.

40 Kerins, “The Modern Entertainment Marketplace, 2000-Present”, op. cit., p. 142.

41 A freelance Artist and Repertoire Manager cited in Davies and Sighthorsson, Introducing the Creative Industries, op. cit., p. 148.

42 Kerins, “The Modern Entertainment Marketplace, 2000-Present”, op. cit., p. 139.

43 Audissino, The Film Music of John Williams, op. cit., p. 253.

44 To re-state what I wrote in note 22, in order to be extra clear, colouristic writing and modular writing are not new either: for example, Jerry Goldsmith produced an avant-garde score for the already mentioned Planet of the Apes that is not melodic but bruitisme-based, and Bernard Herrmann’s own idiom was typically modular compared to the long melodic lines of other film composers like Max Steiner. I do not claim that colourism and modular writing had never existed before but that, before, they were options within a larger and more diverse stylistic paradigm.

45 Kerins, “The Modern Entertainment Marketplace, 2000-Present”, op. cit., p. 153.

46 Ibid., p. 154-155.

47 Ibid., p. 153-154.

48 Ibid., p. 154.

49 Rick Altman, “General Introduction: Cinema as Event”, in Rick Altman (ed.), Sound Theory. Sound Practice, New York/London, Routledge, 1992, p. 7.

50 Ibid., p. 14.

51 See Emilio Audissino, Film Music in Concert. The Pioneering Role of the Boston Pops Orchestra, Cambridge/New York, Cambridge University Press, 2021, p. 1-4; and Frank Lehman, “Film-as-Concert Music and the Formal Implications of ‘Cinematic Listening’”, in Music Analysis, no 37, 2018, p. 7-46.

52 Emilio Audissino and Frank Lehman, “John Williams Seen from the Podium. An Interview with Maestro Keith Lockhart”, in Emilio Audissino (ed.), John Williams. Music for Films, Television, and the Concert Stage, Turnhout, Brepols, 2018, p. 402-403.

53 Ben Winters, “Catching Dreams: Editing Film Scores for Publication”, in Journal of the Royal Musical Association, vol. 132, no 1, 2007, p. 115-140.

54 Michael Long, Beautiful Monsters: Imagining the Classic in Musical Media, Berkeley/Los Angeles, University of California Press, 2008, p. 7-8.

55 Lehman, “Film‐as‐Concert Music”, op. cit., p. 10-17.

56 Emilio Audissino, “Film Music and Multimedia: An Immersive Experience and a Throwback to the Past”, in Patrick Rupert-Kruse (ed.), Jahrbuch Immersiver Medien 2014: Sounds, Music & Soundscapes, Marburg, Schüren Verlag, 2014, p. 46-56, and Brooke McCorkle Okazaki, “Liveness, Music, Media: The Case of the Cine-Concert”, in Music and the Moving Image, vol. 13, no 2, 2020, p. 3-24.

57 McCorkle Okazaki, “Liveness, Music, Media”, op. cit., p. 9.

58 Audissino and Lehman, “John Williams Seen from the Podium”, op. cit., p. 403.

59 McCorkle Okazaki, “Liveness, Music, Media”, op. cit., p. 10.

60 Jon Burlingame, “Live Movie Concerts A Cash Cow for Orchestras”, in Variety, 29 April 2015, https://variety.com/2015/music/features/live-movie-concerts-a-cash-cow-for-orchestras-1201483456, consulted in May 2021.

61 McCorkle Okazaki, “Liveness, Music, Media”, op. cit., p. 3.

62 Leonard B. Meyer, Emotion and Meaning in Music, Chicago, University of Chicago Press, 1956, p. 158-166.

63 McCorkle Okazaki, “Liveness, Music, Media”, op. cit., p. 10.

64 Ibid.

65 Burlingame, “Live Movie Concerts A Cash Cow for Orchestras”, op. cit.

66 Audissino and Lehman, “John Williams Seen from the Podium”, op. cit., p. 398-399.

67 McCorkle Okazaki, “Liveness, Music, Media”, op. cit., p. 11.

68 Sakakeeny, Music, op. cit., p. 113.

69 Benny Lemaster and Amber L. Johnson, “An Ode to Incoherent Canons”, in Departures in Critical Qualitative Research, vol. 8, no 4, 2019, p. 57.

70 Sakakeeny, Music, op. cit., p. 113.

71 Ibid., p. 117.

72 Ibid., p. 114.

73 David Novak, “Noise”, in Novak and Sakakeeny, Keywords in Sound, op. cit., p. 126.

74 Ibid., p. 130-133.

75 Kulezic-Wilson, “Sound Design and its Interactions with Music”, op. cit., p. 136.

76 Chion, Audiovision, op. cit., p. 151.

77 Kulezic-Wilson, “Sound Design and its Interactions with Music”, op. cit., p. 132.

78 On pop music in cinema, see Jeff Smith, The Sound of Commerce. Marketing Popular Film Music, New York, Columbia University Press, 1998.

79 Here are some positive views expressed about today’s new style: “This pursuit of sensations […] may well be one of the most novel and strongest aspects of current cinema. To the detriment, as some object, of delicacy of feeling, intelligence of screenwriting, or narrative rigor? Probably. But didn’t the much-admired films of the old days, for their part, achieve their emotional force and dramatic purity at the expense of yet something else – of ‘sensation’ for example, when in reproducing noises they gave us an inferior and stereotyped sensuality?”, Chion, Audiovision, op. cit., p. 154-155; Gianluca Sergi, “In Defence of Vulgarity: The Place of Sound Effects in the Cinema”, in Scope: An Online Journal of Film Studies, no 5, June 2006, https://www.nottingham.ac.uk/scope/documents/2006/june-2006/sergi.pdf, consulted June 2021; “Why should the sounds of birds or a stream be more musical than say an air vent, a train, or the wind? And if these sounds can be considered music, as they have been since John Cage transformed our understanding of music in the second half of the twentieth century, then are there really any differences between noise and music? Or noise, sound and music? As David Hendy suggests, ‘one person’s irritating din is another person’s sweet music’”, Liz Greene, “From Noise: Blurring the Boundaries of the Soundtrack”, in Kulezic-Wilson and Greene, The Palgrave Handbook of Sound Design and Music in Screen Media, op. cit., p. 26.


Emilio Audissino, “From separation to integration”, Filigrane. Musique, esthétique, sciences, société [online], journal issues, Musique et design sonore dans les productions audiovisuelles contemporaines, Hybridités musico-sonores : repenser l’écoute cinématographique, updated 06/12/2022, URL: https://revues.mshparisnord.fr:443/filigrane/index.php?id=1328.


About the author: Emilio Audissino

Emilio Audissino is Associate Professor in Media and Audiovisual Production at Linnaeus University, Sweden, a film historian and a film musicologist. He is the author of The Film Music of John Williams (2014, second edition 2021) and of Film/Music Analysis. A Film Studies Approach (2017), and the editor of John Williams. Music for Films, Television, and The Concert Stage (2018). He has recently co-edited (with Emile Wennekes) the Palgrave Handbook of Music in Comedy Cinema, projected for publication in 2023. Contact: emilio.audissino@lnu.se, https://www.emilioaudissino.eu/