Is there any chromatic analogue of the Shepard tone?

Is there any chromatic analogue of the Shepard tone? I am imagining that each pure tone would correspond to a pure wavelength of light, transformed in such a way that the middles of three consecutive octaves would correspond to the peak sensitivities of the R, G, and B cones respectively. Would the light merit the corresponding description of "always getting bluer (or redder)"? If, say, one had been spending a long time immersed in a game like this, would it create the illusion of accelerating toward (or away from) the light without actually moving? Any references to similar ideas?


One similar idea is the Blue Diamonds optical illusion: a series of identical shapes that appear to become darker and darker, indefinitely. It shows that the Cornsweet illusion can be repeated over and over again. This reminds me of the way the Shepard tone plays the same thing over and over again, but the pitch seems to get higher and higher.

I suspect a similar approach might be able to construct an illusion that appears to be "always getting bluer".


Peter Tse's Infinite Regress Illusion creates the illusion of a stimulus that is continually moving away from a target.


Shepard tone

A Shepard tone, named after Roger Shepard, is a sound consisting of a superposition of sine waves separated by octaves. When played with the bass pitch of the tone moving upward or downward, it is referred to as the Shepard scale. This creates the auditory illusion of a tone that seems to continually ascend or descend in pitch, yet which ultimately gets no higher or lower. [1]
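
As a rough illustration of that definition, here is a minimal sketch in Python/NumPy (my own, not drawn from any source quoted here): octave-spaced sine partials are weighted by a fixed bell-shaped loudness envelope over log-frequency, which is what makes the composite tone's register ambiguous.

    import numpy as np

    def shepard_tone(base_freq=27.5, n_octaves=9, duration=2.0, sr=44100):
        """One Shepard tone: sine partials spaced by octaves, with a fixed
        Gaussian loudness envelope over the octave positions."""
        t = np.linspace(0.0, duration, int(sr * duration), endpoint=False)
        freqs = base_freq * 2.0 ** np.arange(n_octaves)
        positions = np.arange(n_octaves)          # octave index of each partial
        center = (n_octaves - 1) / 2              # loudest in the middle octave
        amps = np.exp(-0.5 * ((positions - center) / (n_octaves / 6)) ** 2)
        tone = sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps))
        return tone / np.abs(tone).max()

Shifting all partials upward while keeping the envelope fixed (and wrapping partials that leave the top back to the bottom) yields the endlessly ascending Shepard scale.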


Staged Crime Scenes–Literature and Types

Separating Staging Behaviors by Intent

Although these red flags do exist, and help investigators/profilers distinguish between staged and genuine scenes, it was also found in the author’s research that red flags differ depending on what type of scene the offender is seeking to stage. That is, offenders whose intention was to stage a certain type of scene carried out behaviors that were different from those with other intentions. As such, the importance of separating out each type of staging from the others is highlighted. In that regard, cases may be classified into categories based on the staging behaviors present in the case and the statements the witness made to authorities investigating the death. For example, if the victim’s body is placed at the bottom of the stairs and the witness tells police that he or she saw the victim fall down the stairs, but the medical examiner testifies the victim died as a result of a gunshot, the case is classified as a staged accidental death because it is clear the offender is trying to make the scene present as such. Cases can thus be separated based on the aim of the staging, and each type studied in more depth. The most prevalent categories of staging in the author’s sample were Burglary/Home Invasion, Suicide, Car Accident, Accidental Death, Sexual Homicide, and Self-Defense Homicide. In the descriptive analysis done in the author’s research it was apparent that each type indeed showed differing characteristics, although there was some overlap between them. It is not feasible in this chapter to outline all the red flags and all the differences within and between each type (for such an in-depth discussion see Ferguson, 2010). However, it is of import to acknowledge that staging with different goals takes place with different frequencies and, as such, it is necessary to outline how often various styles of crime scenes were staged in this sample.

The most likely type of staging present in this sample was Burglary/Home Invasion (43.3%). The next most frequent types of scene staged were suicide (12.8%), car accident (12.1%), and accidental death (11.3%). Sexual homicides and self-defense homicides were next most frequently staged, although they were not particularly common (5% and 4.3%, respectively). It should also be noted that an approximately equal number of cases (4.3%) had an unknown goal behind the staging, and the staging behaviors may have been carried out nonspecifically or simply to confuse. Drug-related homicides, executions, and stranger attacks were equally unlikely (1.4% each), and frame-ups, natural deaths, hate crimes, and carjacking/robberies were the least likely, each occurring in less than 1% of cases (0.7% each).

Although every type of staging cannot be discussed in detail, it is possible to broadly discriminate between staged legitimate deaths and staged illegitimate deaths as per the author’s study. This discrimination was made based on a Multi-Dimensional Scaling (MDS) technique that was utilized, and indicated that the biggest difference in the actions carried out by offenders at a crime scene depended upon whether they were seeking for the scene to appear as a legitimate death (i.e., where a crime had not occurred) or an illegitimate death (i.e., where a crime had occurred, but they were not personally involved). It is this discriminating factor that made up the final basic typology of staging:

A. Staged Legitimate Deaths

B. Staged Illegitimate Deaths

Each type will be discussed in turn, with a case example.


Understanding the McGurk Effect: What you see and hear aren't the same thing

Amazingly, the way we visualize the world has an important impact on the things that we hear. This is neatly illustrated by one of the most famous auditory illusions: the McGurk Effect. Named after one of its discoverers (the British psychologist Harry McGurk), the effect is particularly common in conversation. It turns out that the way words are formed by the speaker is as important to how they are perceived as the sound that they make. So, if you shape your mouth in two different ways, but say the same word, viewers may well hear two separate sounds.

It’s hard to recreate this in real life. When you say words, your mouth tends to make very similar shapes, but skilled tricksters can use the McGurk Effect to fool people into hearing sounds that were never actually spoken. The key to what’s actually happening is inside the hearer’s brain. We don’t just absorb sound waves and turn them instantly into sound, like a radio microphone might do. Instead, our brains anticipate and interpret what they see and hear, taking short cuts to produce the perception of sound. When what we see and hear don’t match up, illusions can result.


Microtone

Microtonal music can refer to any music containing microtones. The words "microtone" and "microtonal" were coined before 1912 by Maud MacCarthy Mann in order to avoid the misnomer "quarter tone" when speaking of the srutis of Indian music. [3] Prior to this time the term "quarter tone" was used, confusingly, not only for an interval actually half the size of a semitone, but also for all intervals (considerably) smaller than a semitone. [4] [5] It may have been even slightly earlier, perhaps as early as 1895, that the Mexican composer Julián Carrillo, writing in Spanish or French, coined the terms microtono/micro-ton and microtonalismo/micro-tonalité. [6]

In French, the usual term is the somewhat more self-explanatory micro-intervalle, and French sources give the equivalent German and English terms as Mikrointervall (or Kleinintervall) and micro interval (or microtone), respectively. [7] [8] [9] [10] "Microinterval" is a frequent alternative in English, especially in translations of writings by French authors and in discussion of music by French composers. [11] [12] [13] In English, the two terms "microtone" and "microinterval" are synonymous. [14] The English analogue of the related French term, micro-intervalité, however, is rare or nonexistent, normally being translated as "microtonality"; in French, the terms micro-ton, microtonal (or micro-tonal), and microtonalité are also sometimes used, occasionally mixed in the same passage with micro-intervale and micro-intervalité. [6] [15] [16] [17]

Ezra Sims, in the article "Microtone" in the second edition of the Harvard Dictionary of Music defines "microtone" as "an interval smaller than a semitone", [18] which corresponds with Aristoxenus's use of the term diesis. [19] However, the unsigned article "Comma, Schisma" in the same reference source calls comma, schisma and diaschisma "microintervals" but not "microtones", [20] and in the fourth edition of the same reference (which retains Sims's article on "Microtone") a new "Comma, Schisma" article by André Barbera calls them simply "intervals". [21] In the second edition of The New Grove Dictionary of Music and Musicians, Paul Griffiths, Mark Lindley, and Ioannis Zannos define "microtone" as a musical rather than an acoustical entity: "any musical interval or difference of pitch distinctly smaller than a semitone", including "the tiny enharmonic melodic intervals of ancient Greece, the several divisions of the octave into more than 12 parts, and various discrepancies among the intervals of just intonation or between a sharp and its enharmonically paired flat in various forms of mean-tone temperament", as well as the Indian sruti, and small intervals used in Byzantine chant, Arabic music theory from the 10th century onward, and similarly for Persian traditional music and Turkish music and various other Near Eastern musical traditions, [22] but do not actually name the "mathematical" terms schisma, comma, and diaschisma.

"Microtone" is also sometimes used to refer to individual notes, "microtonal pitches" added to and distinct from the familiar twelve notes of the chromatic scale, [23] as "enharmonic microtones", [24] for example.

In English the word "microtonality" is mentioned in 1946 by Rudi Blesh, who related it to microtonal inflexions of the so-called "blues scales". [25] In his 2019 Microtonal Analysis of "Blues Notes" and the Blues Scale, Court B. Cutting states that academic studies of the early blues concur that its pitch scale has within it three microtonal "blue notes" not found in 12 tone equal temperament intonation. [26] It was used still earlier by W. McNaught with reference to developments in "modernism" in a 1939 record review of the Columbia History of Music, Vol. 5. [27] In German the term Mikrotonalität came into use at least by 1958, [28] [29] though "Mikrointervall" is still common today in contexts where very small intervals of early European tradition (diesis, comma, etc.) are described, as e.g. in the new Geschichte der Musiktheorie, [30] while "Mikroton" seems to prevail in discussions of the avant-garde music and music of Eastern traditions. [ citation needed ] The term "microinterval" is used alongside "microtone" by American musicologist Margo Schulter in her articles on medieval music. [31] [32]

Microtonal

The term "microtonal music" usually refers to music containing very small intervals but can include any tuning that differs from Western twelve-tone equal temperament. Traditional Indian systems of 22 śruti; Indonesian gamelan music; Thai, Burmese, and African music; and music using just intonation, meantone temperament or other alternative tunings may be considered microtonal. [33] [22] Microtonal variation of intervals is standard practice in the African-American musical forms of spirituals, blues and jazz. [34]

Many microtonal equal divisions of the octave have been proposed, usually (but not always) in order to achieve approximation to the intervals of just intonation. [33] [22]

Terminology other than "microtonal" has been used or proposed by some theorists and composers. In 1914, A. H. Fox Strangways objected that "'heterotone' would be a better name for śruti than the usual translation 'microtone'". [35] Modern Indian researchers nevertheless write of "microtonal intervals called shrutis". [36] In Germany, Austria, and Czechoslovakia in the 1910s and 1920s the usual term continued to be Viertelton-Musik (quarter tone music [37] [ page needed ] ), and the type of intervallic structure found in such music was called the Vierteltonsystem, [38] [39] which was (in the mentioned region) regarded as the main term for referring to music with microintervals, though as early as 1908 Georg Capellen had qualified his use of "quarter tone" with the alternative term "Bruchtonstufen (Viertel- und Dritteltöne)" (fractional degrees (quarter and third tones)). [40] Despite the inclusion of other fractions of a whole tone, this music continued to be described under the heading "Vierteltonmusik" until at least the 1990s, for example in the twelfth edition of the Riemann Musiklexikon, [41] and in the second edition of the popular Brockhaus Riemann Musiklexikon. [42]

Ivan Wyschnegradsky used the term ultra-chromatic for intervals smaller than the semitone and infra-chromatic for intervals larger than the semitone; [43] this same term has been used since 1934 by ethnomusicologist Victor Belaiev (Belyaev) in his studies of Azerbaijan and Turkish traditional music. [44] [45] [46] A similar term, subchromatic, has been used by theorist Marek Žabka. [47] Ivor Darreg proposed [ when? ] [ citation needed ] the term xenharmonic; see xenharmonic music. The Austrian composer Franz Richter Herf and the music theorist Rolf Maedel, Herf's colleague at the Salzburg Mozarteum, preferred using the Greek word ekmelic when referring to "all the pitches lying outside the traditional twelve-tone system". [48] Some authors in Russia [49] [50] [51] [52] [53] [54] and some musicology dissertations [55] [56] [57] [58] [59] [60] disseminate the term микрохроматика (microchromatics), coined in the 1970s by Yuri Kholopov, [61] to describe a kind of "intervallic genus" (интервальный род) for all possible microtonal structures, both ancient (such as the enharmonic genus—γένος ἐναρμόνιον—of the Greeks) and modern (such as the quarter tone scales of Alois Hába); this generalizing term also made it possible to avoid derivatives such as микротональность (microtonality, which could be understood in Russian as a sub-tonality subordinate to the dominating tonality, especially in the context of European music of the 19th century) and микротоника (microtonic, "a barely perceptible tonic"; see a clarification in Kholopov [2000] [62] ). Other Russian authors use the more international adjective "microtonal", rendered in Russian as "микротоновый", but not "microtonality" ("микротональность"). [63] [64] [65] [66] However, the terms "микротональность" [67] and "микротоника" [68] are also used. Some authors writing in French have adopted the term "micro-intervallique" to describe such music. [69] [70] Italian musicologist Luca Conti dedicated two of his monographs to microtonalismo, [71] [72] which is the usual term in Italian, and also in Spanish (e.g., as found in the title of Rué [2000] [73] ). The analogous English form, "microtonalism", is also found occasionally instead of "microtonality", e.g., "At the time when serialism and neoclassicism were still incipient a third movement emerged: microtonalism". [74]

The term "macrotonal" has been used for intervals wider than twelve-tone equal temperament, [75] [ permanent dead link ] [ better source needed ] or where there are "fewer than twelve notes per octave", though "this term is not very satisfactory and is used only because there seems to be no other". [76] The term "macrotonal" has also been used for musical form. [77]

Examples of this can be found in various places, ranging from Claude Debussy's impressionistic harmonies to Aaron Copland's chords of stacked fifths, to John Luther Adams' Clouds of Forgetting, Clouds of Unknowing (1995), which gradually expands stacked-interval chords ranging from minor 2nds to major 7ths. Louis Andriessen's De Staat (1972–1976) contains a number of "augmented" modes that are based on Greek scales but are asymmetrical to the octave. [78]


A giant 1906 machine, and the Eurorack synth module it inspired

The 200-ton, building-sized Telharmonium originally produced some of the first electronic music. But now it’s a compact modern synth module, too.

The Make Noise/Tom Erbe Telharmonic is emblematic perhaps of how synthesizer history now folds in on itself. The module combines analog and digital control and synthesis, and pairs a well-known modular creator with one of recent years’ best known engineers and teachers of digital synthesis. Put those elements together, and you recreate… a giant electro-mechanical instrument patented in 1897, but in a form that has never existed before. That old progression from past to present to future seems so boring now. Instead, we have a wormhole of simultaneous possibilities. You know, in a good way.

But if turn-of-the-last-century pioneering instruments are being made into compact modules, we also need a different kind of history.

Kyiv, Ukraine-based composer/artist Oleg Shpudeiko – aka Heinali – recently wove together a history of the original Telharmonium and the new Telharmonic module. It’s such a lovely read that I felt it shouldn’t live only on The FaceBook. So here it is, preserved for posterity (and, if you like, further comments and thoughts).

Thanks to Oleg for this. -Ed.

Make Noise Telharmonic and electronic music history.

I’ve been considering writing about the Make Noise/Tom Erbe Telharmonic for some time now. There’s an abundance of videos covering this module, of course. But regrettably, I couldn’t find any that go beyond technical demonstrations to cover the module’s historical and ideological contexts (except for the original Make Noise demo videos, to a certain extent). In my opinion, those are the very things (apart from the hardware’s great sound) that make it a truly exceptional work of tech art. My text is by no means comprehensive, but I hope to accentuate some of my points of interest.

Telharmonic is a Eurorack synthesizer module, a product of collaboration between Make Noise and Tom Erbe. Make Noise is a modular synth company from the US founded by self-taught electronic musical instrument designer Tony Rolando. Tom Erbe is a University of California San Diego (UCSD) computer music professor, and author of the famous SoundHack sound processing software for Mac and PC.

The module is described as a ‘Multi-Voice, Multi-Algorithm synthesizer module named for the music hall considered by some to be the location of the first electronic music concerts.’ So let’s start with the name, because it’s neither accidental nor just a simple homage.

Thaddeus Cahill’s Telharmonium, also known as the Dynamophone, could be described as the first synthesizer, or at least the first electronic music instrument of great significance. Patented in 1897, the instrument was installed in Telharmonic Hall in New York in 1906. The hall was a special concert space with an auditorium on the first floor and a basement fully occupied by the instrument’s machinery. (The Mark I weighed 7 tons; the Mark II and III weighed 200 tons.)

Two of the tone rotors of the MkII Telharmonium in the basement of Telharmonic Hall circa 1906. Image from McClure’s Magazine, 1906.

Performances took place in the hall, with a performer sitting behind an organ-like keyboard manual. Music emanated from loudspeakers and was simultaneously transmitted via telephone wires to subscribers in the city.

Telharmonic Hall, New York City, circa 1906.

At its core, the Telharmonium employs additive synthesis by means of dynamo-powered tone wheels — rotors with variably shaped alternators spun in a magnetic field, producing a set of sine waves. (The mechanism later became the basis of Hammond electric organs.)

One of the massive rotors that produced tones via an electromagnetic field.

The bottom rotor would produce a fundamental frequency, and each rotor above it would produce a partial.

The Make Noise reinterpretation of this design subtly alludes to the tonewheel, as can be seen in Cahill’s original patents.

The Telharmonium’s original additive synthesis, with a sine wave fundamental and partials, is implemented in the Make Noise module’s H-voice. As in Cahill’s tonewheels, it’s possible to shape the tone by choosing sine wave partials. However, unlike the Telharmonium’s original 8 alternators, the digital H-voice features 24 partials for each of its three voices. Each partial can be brought forward by moving the Centroid knob and then locked in place (so it will continue to sound louder) by pressing the H-lock button.

In the original Telharmonium, partials were controlled by organ-like stops near the performer’s keyboard.

The Telharmonium’s organ-style keyboard manual and stops.

Three H-voices can be arranged in major, minor, or diminished chords, with inversions, a fifth, unison, or octave, and microtonal combinations in between. Another parameter that develops the idea further is the Flux knob. In its fully clockwise position, it focuses on a particular partial chosen by the Centroid knob. Moving counterclockwise, it brings forward more of the neighboring partials, until all of them are present in the fully counterclockwise position.
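
To make that Centroid/Flux behavior concrete, here is a hedged sketch in Python/NumPy. The knob names come from the description above, but the DSP is my own guess at the described behavior, not Make Noise’s actual algorithm: a Gaussian amplitude window over 24 partials, positioned by `centroid` and widened by `flux`.

    import numpy as np

    def h_voice(f0, centroid, flux, n_partials=24, duration=1.0, sr=44100):
        """Additive voice: `centroid` (0..1) picks which partial is loudest,
        `flux` (0..1) widens the window from one partial toward all of them."""
        t = np.arange(int(sr * duration)) / sr
        k = np.arange(1, n_partials + 1)          # partial numbers 1..24
        peak = 1 + centroid * (n_partials - 1)    # emphasized partial
        width = 0.5 + flux * n_partials           # narrow -> wide window
        amps = np.exp(-0.5 * ((k - peak) / width) ** 2)
        out = sum(a * np.sin(2 * np.pi * f0 * n * t) for n, a in zip(k, amps))
        return out / np.abs(out).max()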

Unfortunately, there are no recordings of the original performances, and I wonder how similar to or different from the modern module they may have sounded. The Telharmonium’s tones were described as ‘clear and pure.’ One of the visitors noted the instrument’s ability to synthesize different timbres of musical instruments:

The first impression the music makes upon the listener is its singular difference from any music ever heard before: in the fullness, roundness, completeness, of its tones. And truly it is different and more perfect: but strangely enough, while it possesses ranges of tones all its own, it can be made to imitate closely other musical instruments: the flute, oboe, bugle, French horn and ‘cello best of all, the piano and violin not as yet so perfectly. Ask the players for fife music and they play Dixie for you with the squealing of the pipes deceptively perfect. Indeed, the performer upon this marvelous machine, as I shall explain later, can “build up” any sort of tone he wishes : he can produce the perfect note of the flute or the imperfect note of the piano — though the present machine is not adapted to the production of all sorts of music, as future and more extensive machines may be.

Let’s now move 55 years into the future. It’s 1961, and a young composer named James Tenney produced his first computer music piece ‘Analog #1 (Noise Study)’ inside Bell Labs, using Max Mathews’s Music III sound synthesis software.

The composition was recorded on tape, but the sounds for it were produced on the computer. Noise Study is considered the first recorded ‘serious’ computer music, written by a classically trained composer. In a way, the composition shows John Cage’s influence, in its meditation on listening. Here’s what Tenney wrote about the experience:

My first composition using computer-generated sounds was the piece called Analog #1: Noise Study, completed in December, 1961. The idea for the Noise Study developed in the following way: For several months I had been driving to New York City in the evening, returning to the Labs the next morning by way of the heavily traveled Route 22 and the Holland Tunnel. This circuit was made as often as three times every week, and the drive was always an exhausting, nerve-wracking experience, fast, furious, and “noisy.” The sounds of the traffic — especially in the tunnel — were usually so loud and continuous that, for example, it was impossible to maintain a conversation with a companion. It is an experience that is familiar to many people, of course. But then something else happened, which is perhaps not so familiar to others.

One day I found myself listening to these sounds, instead of trying to ignore them as usual. The activity of listening, attentively, to “non-musical,” environmental sounds was not new to me — my esthetic attitude for several years had been that these were potential musical material — but in this particular context I had not yet done this. When I did, finally, begin to listen, the sounds of the traffic became so interesting that the trip was no longer a thing to be dreaded and gotten through as quickly as possible. From then on, I actually looked forward to it as a source of new perceptual insights.

Gradually, I learned to hear these sounds more acutely, to follow the evolution of single elements within the total sonorous “mass,” to feel, kinesthetically, the characteristic rhythmic articulations of the various elements in combination, etc. Then I began to try to analyze the sounds, aurally, to estimate what their physical properties might be — drawing upon what I already knew of acoustics and the correlation of the physical and the subjective attributes of sound. From this image, then, of traffic noises — and especially those heard in the tunnel, where the overall sonority is richer, denser, and the changes are mostly very gradual — I began to conceive a musical composition that not only used sound elements similar to these, but manifested similarly gradual changes in sonority. I thought also of the sound of the ocean surf — in many ways like tunnel traffic sounds — and some of the qualities of this did ultimately manifest themselves in the Noise Study. I did not want the quasi-periodic nature of the sea sounds in the piece however, and this was carefully avoided in the composition process. Instead, I wanted the aperiodic, “asymmetrical” kind of rhythmic flow that was characteristic of the traffic sounds.

The instrument he designed for the realisation of his composition could produce noise bands with a certain degree of control over their parameters, like, for example, increasing and decreasing their bandwidth. (If you’re interested in the process, you can read about it in detail.)

The Telharmonic N-voice works in a very similar way, employing two band-limited noise sidebands around a central frequency set by the Tonic and Degree knobs, with the Flux knob controlling the width of the sidebands, resulting in a fluttering, almost sine-like sound in the fully clockwise position, and pure white noise in the fully counterclockwise position.
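
One classic way to get “noise sidebands around a central frequency” is to ring-modulate band-limited noise with a sine at that frequency; the product contains only sidebands at the carrier plus and minus the noise band. The sketch below (Python/NumPy, my own approximation under that assumption, not the module’s code) narrows toward a sine as the bandwidth shrinks and approaches broadband noise as it grows.

    import numpy as np

    def n_voice(center_freq, bandwidth, duration=1.0, sr=44100):
        """Ring-modulated low-pass noise: sidebands at center_freq +/- bandwidth."""
        n = int(sr * duration)
        noise = np.random.randn(n)
        spectrum = np.fft.rfft(noise)
        cutoff_bin = max(1, int(bandwidth * n / sr))
        spectrum[cutoff_bin:] = 0.0               # crude brick-wall low-pass
        lp_noise = np.fft.irfft(spectrum, n)
        t = np.arange(n) / sr
        return lp_noise * np.sin(2 * np.pi * center_freq * t)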

Let’s now skip 23 years further, to the first commercially available phase modulation digital synthesizers. Basically, the phase distortion technique appeared as Casio’s way to circumvent Yamaha’s patented FM (frequency modulation) synthesis. Ed.: Think the Casio CZ series. Good stuff. FM, developed by John Chowning, was capable of extraordinary timbres, but phase distortion was controllable in a unique way by contrast, and produced its own signature sounds. For added confusion, you can technically consider FM ‘phase modulation.’ -PK

To simplify, phase distortion is very similar to FM, though instead of frequency, the phase of the signal is modulated.

The Telharmonic P-voice features three phase-locked sine-wave oscillators: two of them are modulators, one is a carrier. Moving the Centroid knob changes the frequency ratio; the Flux knob controls the depth of the modulation.
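
In the abstract, phase modulation looks like the sketch below (Python/NumPy). The `ratio` and `depth` parameters loosely stand in for the Centroid and Flux controls described above; since the module’s internals aren’t public, treat this as a generic two-modulator PM sketch rather than the Telharmonic’s actual algorithm.

    import numpy as np

    def p_voice(f0, ratio, depth, duration=1.0, sr=44100):
        """Carrier sine whose phase is modulated by two phase-locked sines."""
        t = np.arange(int(sr * duration)) / sr
        mod1 = np.sin(2 * np.pi * f0 * ratio * t)
        mod2 = np.sin(2 * np.pi * f0 * ratio * 2 * t)   # second, higher modulator
        return np.sin(2 * np.pi * f0 * t + depth * (mod1 + 0.5 * mod2))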

All three Telharmonic voices — H, P and N — can be used simultaneously in any combination, with Centroid and Flux controls affecting the spectral content of the voices, while Degree and Tonic controls affect the voice’s intervals and pitch.

Apart from the main mode of operation described above, Telharmonic has two hidden modes, switched by holding the H-lock button for several seconds.

The first one is the ASR emulation. ASR stands for analogue shift register, which is basically a more complex sample and hold circuit, or, in classical musical terms, a canon generator.

For example, a three-voice ASR would have two inputs and three outputs. The first input takes the signal, which is sampled and ‘memorized’ every time a pulse arrives at the second input (the clock). On the first pulse, the memorized voltage appears at the first output. On the second pulse, that voltage moves to the second output, while a newly sampled voltage appears at the first. On the third pulse, the first voltage moves to the third output, the second voltage to the second output, and a new (third) voltage is sampled, stored, and sent to the first output, and so on. In this way, the process generates a simple canon, like, for example, ‘Row Your Boat.’
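
A toy model of that behavior in Python (my own sketch of the logic described above, not anyone’s actual circuit):

    from collections import deque

    class AnalogShiftRegister:
        """Three-stage sample-and-hold: each clock pulse samples the input
        and shifts the previously held voltages down the line."""
        def __init__(self, stages=3):
            self.held = deque([0.0] * stages, maxlen=stages)

        def clock(self, voltage):
            self.held.appendleft(voltage)   # sample and store the input...
            return list(self.held)          # ...and emit all held voltages

    asr = AnalogShiftRegister()
    for v in [1.0, 2.0, 3.0, 4.0]:
        print(asr.clock(v))
    # [1.0, 0.0, 0.0] -> [2.0, 1.0, 0.0] -> [3.0, 2.0, 1.0] -> [4.0, 3.0, 2.0]

Feed it a melody one pitch voltage at a time and the three outputs play that melody staggered in time — exactly the canonic structure described above.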

A simple canon, in score form.

While the exact origins of the first ASR are debatable, the first mass-produced, commercially available ASR module was designed by Serge Tcherepnin, creator of Serge synthesizers, in the 70s. Here’s the description of the ASR module from the Serge catalog:

The ANALOG SHIFT REGISTER is a sequential sample and hold module for producing arabesque-like forms in musical space. Whenever pulsed, the previously held voltage is sent down the line to three consecutive outputs to produce the electrical equivalent of a canonic musical structure.

The Telharmonic’s digital ASR mode features three channels, with the P, H and N voices available simultaneously, as well as six quantization modes, selectable by the Interval knob: suspended chord, major triad, minor triad, octaves and fifths, chromatic, octaves only.

The second Telharmonic hidden mode is the Spiratone, a Shepard tone generator. The Shepard tone, named after cognitive scientist Roger Shepard, is an auditory illusion of a tone that continually ascends or descends in pitch, yet never moves away or resolves. The Spiratone mode was inspired by two particular compositions: Jean-Claude Risset’s “Computer Suite from Little Boy: Fall” of 1968 and the aforementioned James Tenney’s “For Ann (rising)” of 1969.

Pretty much every experience with Telharmonic could become an interaction with some of the most interesting moments and ideas of electronic music history. Cahill’s Telharmonium and additive synthesis, half-forgotten phase modulation synthesis of the 80s, Tenney’s first computer music, Serge’s ASR, Shepard’s tones … all of these are interconnected, all housed in a small, 14hp, 30mm module.

If you have any corrections or additions for this piece, please feel free to contact me.
-Oleg

Ed., indeed, we just delved into rich territory both for this module and sound design generally. We’ll of course revise here and do more on any of these topics, if desired. (I counted at least half a dozen new stories we could write just based on some of the subplots here!) -PK

For more reading on the Telharmonium:

More on James Tenney’s computer music:


Scales, steps, and intervals

Scales are typically listed from low to high pitch. Most scales are octave-repeating, meaning their pattern of notes is the same in every octave (the Bohlen–Pierce scale is one exception). An octave-repeating scale can be represented as a circular arrangement of pitch classes, ordered by increasing (or decreasing) pitch class. For instance, the increasing C major scale is C–D–E–F–G–A–B–[C], with the bracket indicating that the last note is an octave higher than the first note, and the decreasing C major scale is C–B–A–G–F–E–D–[C], with the bracket indicating an octave lower than the first note in the scale.

The distance between two successive notes in a scale is called a scale step.

The notes of a scale are numbered by their steps from the first degree of the scale. For example, in a C major scale the first note is C, the second D, the third E and so on. Two notes can also be numbered in relation to each other: C and E create an interval of a third (in this case a major third); D and F also create a third (in this case a minor third).

Pitch

A single scale can be manifested at many different pitch levels. For example, a C major scale can be started at C4 (middle C; see scientific pitch notation), ascending an octave to C5; or it could be started at C6, ascending an octave to C7. As long as all the notes can be played, [ clarification needed ] the octave they take on can be altered. [ citation needed ]

Types of scale

Scales may be described according to the number of different pitch classes they contain:

  • Chromatic, or dodecatonic (12 notes per octave)
  • Nonatonic (9 notes per octave): a chromatic variation of the heptatonic blues scale
  • Octatonic (8 notes per octave): used in jazz and modern classical music
  • Heptatonic (7 notes per octave): the most common modern Western scale
  • Hexatonic (6 notes per octave): common in Western folk music
  • Pentatonic (5 notes per octave): the anhemitonic form (lacking semitones) is common in folk music, especially in Asian music; also known as the "black note" scale
  • Tetratonic (4 notes), tritonic (3 notes), and ditonic (2 notes): generally limited to prehistoric ("primitive") music

Scales may also be described by their constituent intervals, such as being hemitonic, cohemitonic, or having imperfections. [3] Many music theorists concur that the constituent intervals of a scale have a large role in the cognitive perception of its sonority, or tonal character.

"The number of the notes that make up a scale as well as the quality of the intervals between successive notes of the scale help to give the music of a culture area its peculiar sound quality." [4] "The pitch distances or intervals among the notes of a scale tell us more about the sound of the music than does the mere number of tones." [5]

Scales may also be described by their symmetry, such as being palindromic, chiral, or having rotational symmetry as in Messiaen's modes of limited transposition.

Harmonic content

The notes of a scale form intervals with each of the other notes of the scale in combination. A 5-note scale has 10 of these harmonic intervals, a 6-note scale has 15, a 7-note scale has 21, and an 8-note scale has 28. [6] Though the scale is not a chord, and might never be heard more than one note at a time, still the absence, presence, and placement of certain key intervals plays a large part in the sound of the scale, the natural movement of melody within the scale, and the selection of chords taken naturally from the scale. [6]
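
Those counts are simply the number of unordered pairs among the scale's notes, n(n−1)/2: for n = 5, 6, 7, 8 this gives 5·4/2 = 10, 6·5/2 = 15, 7·6/2 = 21, and 8·7/2 = 28.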

A musical scale that contains tritones is called tritonic (though the expression is also used for any scale with just three notes per octave, whether or not it includes a tritone), and one without tritones is atritonic. A scale or chord that contains semitones is called hemitonic, and one without semitones is anhemitonic.

Scales in composition

Scales can be abstracted from performance or composition. They are also often used precompositionally to guide or limit a composition. Explicit instruction in scales has been part of compositional training for many centuries. One or more scales may be used in a composition, such as in Claude Debussy's L'Isle Joyeuse. [7] In the example from that piece, the first scale is a whole-tone scale, while the second and third scales are diatonic scales. All three are used in the opening pages of Debussy's piece.

Scales in traditional Western music generally consist of seven notes and repeat at the octave. Notes in the commonly used scales (see just below) are separated by whole and half step intervals of tones and semitones. The harmonic minor scale includes a three-semitone step; the anhemitonic pentatonic includes two of those and no semitones.

Western music in the Medieval and Renaissance periods (1100–1600) tends to use the white-note diatonic scale C–D–E–F–G–A–B. Accidentals are rare, and somewhat unsystematically used, often to avoid the tritone.

Music of the common practice periods (1600–1900) uses three types of scale:

  • The diatonic scale (seven notes)—this includes the major scale and the natural minor
  • The melodic and harmonic minor scales (seven notes)

These scales are used in all of their transpositions. The music of this period introduces modulation, which involves systematic changes from one scale to another. Modulation occurs in relatively conventionalized ways. For example, major-mode pieces typically begin in a "tonic" diatonic scale and modulate to the "dominant" scale a fifth above.

In the 19th century (to a certain extent), but more in the 20th century, additional types of scales were explored:

  • The chromatic scale (twelve notes)
  • The whole-tone scale (six notes)
  • The pentatonic scale (five notes)
  • The octatonic or diminished scales (eight notes)

A large variety of other scales exists, some of the more common being:

  • The Phrygian dominant scale (a mode of the harmonic minor scale)
  • The Arabic scales
  • The Hungarian minor scale
  • The Byzantine music scales (called echoi)
  • The Persian scale

Scales such as the pentatonic scale may be considered gapped relative to the diatonic scale. An auxiliary scale is a scale other than the primary or original scale. See: modulation (music) and Auxiliary diminished scale.

In many musical circumstances, a specific note of the scale is chosen as the tonic—the central and most stable note of the scale. In Western tonal music, simple songs or pieces typically start and end on the tonic note. Relative to a choice of a certain tonic, the notes of a scale are often labeled with numbers recording how many scale steps above the tonic they are. For example, the notes of the C major scale (C, D, E, F, G, A, B) can be labeled <1, 2, 3, 4, 5, 6, 7>, reflecting the choice of C as tonic. The expression scale degree refers to these numerical labels. Such labeling requires the choice of a "first" note; hence scale-degree labels are not intrinsic to the scale itself, but rather to its modes. For example, if we choose A as tonic, then we can label the notes of the C major scale using A = 1, B = 2, C = 3, and so on. When we do so, we create a new scale called the A minor scale. See the musical note article for how the notes are customarily named in different countries.

The scale degrees of a heptatonic (7-note) scale can also be named using the terms tonic, supertonic, mediant, subdominant, dominant, submediant, subtonic. If the subtonic is a semitone away from the tonic, then it is usually called the leading-tone (or leading-note); otherwise the leading-tone refers to the raised subtonic. Also commonly used is the (movable do) solfège naming convention in which each scale degree is denoted by a syllable. In the major scale, the solfège syllables are: do, re, mi, fa, so (or sol), la, ti (or si), do (or ut).

In naming the notes of a scale, it is customary that each scale degree be assigned its own letter name: for example, the A major scale is written A–B–C♯–D–E–F♯–G♯ rather than A–B–D♭–D–E–E𝄪–G♯. However, it is impossible to do this in scales that contain more than seven notes, at least in the English-language nomenclature system. [ citation needed ]

Scales may also be identified by using a binary system of twelve zeros or ones to represent each of the twelve notes of a chromatic scale. It is assumed that the scale is tuned using 12-tone equal temperament (so that, for instance, C ♯ is the same as D ♭ ), and that the tonic is in the leftmost position. For example, the binary number 101011010101, equivalent to the decimal number 2773, would represent any major scale (such as C–D–E–F–G–A–B). This system includes scales from 100000000000 (2048) to 111111111111 (4095), providing a total of 2048 possible species, but only 351 unique scales containing from 1 to 12 notes. [8]

Scales may also be shown as semitones from the tonic. For instance, 0 2 4 5 7 9 11 denotes any major scale such as C–D–E–F–G–A–B, in which the first degree is, obviously, 0 semitones from the tonic (and therefore coincides with it), the second is 2 semitones from the tonic, the third is 4 semitones from the tonic, and so on. Again, this implies that the notes are drawn from a chromatic scale tuned with 12-tone equal temperament. For some fretted string instruments, such as the guitar and the bass guitar, scales can be notated in tablature, an approach which indicates the fret number and string upon which each scale degree is played.
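
A quick sanity check of both notations, in Python (the bit mask and offsets come straight from the text above):

    # 12-bit mask with the tonic leftmost (most significant bit)
    MAJOR_BITS = "101011010101"                 # C-D-E-F-G-A-B
    print(int(MAJOR_BITS, 2))                   # 2773, as stated above

    # bit i (counting from the left) set -> a note i semitones above the tonic
    offsets = [i for i, b in enumerate(MAJOR_BITS) if b == "1"]
    print(offsets)                              # [0, 2, 4, 5, 7, 9, 11]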

Composers transform musical patterns by moving every note in the pattern by a constant number of scale steps: thus, in the C major scale, the pattern C–D–E might be shifted up, or transposed, a single scale step to become D–E–F (in chromatic transposition it would instead become D–E–F♯). This process is called "scalar transposition" or "shifting to a new key" and can often be found in musical sequences and patterns. Since the steps of a scale can have various sizes, this process introduces subtle melodic and harmonic variation into the music. In Western tonal music, the simplest and most common type of modulation (or changing keys) is to shift from one major key to another key built on the first key's fifth (or dominant) scale degree. In the key of C major, this would involve moving to the key of G major (which uses an F♯). Composers also often modulate to other related keys. In some Romantic music era pieces and contemporary music, composers modulate to "remote keys" that are not related to or close to the tonic. An example of a remote modulation would be taking a song that begins in C major and modulating (changing keys) to F♯ major.
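
The scalar-versus-chromatic distinction is easy to see in a few lines of Python (illustrative only; note names as plain strings):

    C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

    def scalar_transpose(pattern, steps, scale=C_MAJOR):
        """Shift every note of the pattern by a constant number of scale steps."""
        return [scale[(scale.index(n) + steps) % len(scale)] for n in pattern]

    print(scalar_transpose(["C", "D", "E"], 1))   # ['D', 'E', 'F'], not D-E-F#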

Through the introduction of blue notes, jazz and blues employ scale intervals smaller than a semitone. The blue note is an interval that is technically neither major nor minor but "in the middle", giving it a characteristic flavour. A regular piano cannot play blue notes, but with electric guitar, saxophone, trombone and trumpet, performers can "bend" notes a fraction of a tone sharp or flat to create blue notes. For instance, in the key of E, the blue note would be either a note between G and G ♯ or a note moving between both.

In blues, a pentatonic scale is often used. In jazz, many different modes and scales are used, often within the same piece of music. Chromatic scales are common, especially in modern jazz.

Equal temperament

In Western music, scale notes are often separated by equally tempered tones or semitones, creating 12 intervals per octave. Each interval separates two tones; the higher tone has an oscillation frequency a fixed ratio (the twelfth root of two, or approximately 1.059463) higher than the frequency of the lower one. A scale uses a subset consisting typically of 7 of these 12 as scale steps.
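
The ratio falls straight out of the definition, as this small Python check shows (A4 = 440 Hz is the usual reference pitch, my choice here):

    RATIO = 2 ** (1 / 12)               # semitone frequency ratio
    print(round(RATIO, 6))              # 1.059463, as quoted above

    def et_freq(semitones_above_a4, a4=440.0):
        """Frequency of a note n equal-tempered semitones above A4."""
        return a4 * RATIO ** semitones_above_a4

    print(et_freq(12))                  # ~880.0: twelve semitones double the frequency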

Other

Many other musical traditions use scales that include other intervals. These scales originate in the derivation of the harmonic series: musical intervals are complementary values of the harmonic overtone series. [9] Many musical scales in the world are based on this system, except most of the musical scales from Indonesia and the Indochina Peninsula, which are based on the inharmonic resonance of the dominant metallophone and xylophone instruments.

Intra-scale intervals

Some scales use a different number of pitches. A common scale in Eastern music is the pentatonic scale, which consists of five notes that span an octave. For example, in Chinese culture, the pentatonic scale is usually used for folk music and consists of C, D, E, G and A, commonly known as gong, shang, jue, chi and yu. [10] [11]


Summarize Reed's study on reinterpreting mental images and Chambers and Reisberg's study of ambiguous figures. How does complexity affect the choice of representational code?

Reed: concerned that mental imagery might have some limitations and that language may help us store visual stimuli on some occasions. He tested people's ability to decide whether a specific visual pattern was a portion of a design they had seen earlier: a first figure was presented and, after a brief delay, a second pattern was shown. If people actually store mental images that correspond to the physical objects they have seen, they should be able to call up the stored image and quickly determine whether the second pattern is or is not part of it. However, across all stimuli participants were correct only 55% of the time, hardly better than chance. Reed argued that people could not have stored visual images for figures like the Star of David, given the high error rate on items like this one. Instead he proposed that people sometimes store pictures as descriptions, that is, propositional codes. This research supports the verbal, propositional-code approach.

Chambers and Reisberg: asked participants to create a clear mental image of a figure that could resemble either a duck facing one way or a rabbit facing the other. After the figure was removed, participants were asked to come up with a second, different interpretation of it, which none could do; they could not consult a stored mental image. When asked to draw the figure from memory, however, all were able to supply a second interpretation from their own drawing, suggesting that a strong verbal propositional code can overshadow a relatively weak analogue code. It is often easy to reverse an ambiguous visual stimulus when we are looking at a physical picture; in contrast, it is usually much more difficult to reverse a mental image.

It seems likely that people often use an analogue code when they are thinking about fairly simple figures, like the hands of a clock, but may use a propositional code when the figures are more complex. Verbal labels and a propositional code may be especially helpful when the visual stimulus is complex.


Where Are The Older Adults in Music Psychology Research?

A few years ago, when I was President of the Society for Music Perception and Cognition, one of my duties was to give a Presidential Address at the biennial meeting. Traditionally, the president gives some overview of the Society’s business, advertising upcoming initiatives and the like, and is then given leave to discuss any topic she or he thinks is important to the field. My topic was the relative absence of older adult participants in papers published in our field. That conference was in 2013 and I don’t think much has changed, despite my glitzy slides and dramatic delivery. So here I want to go over some of the points I made then and use this forum as a call to be more age-inclusive in our research area.

First, let me present some data to back up my claim that older adults are not very present in our research literature. I recently performed the following journal searches. The Journals of Gerontology, Series B (which specializes in cognitive and perceptual studies) has published 2523 articles in its history. Although no search strategy is perfect, I used the term ‘music’ as an intersecting keyword… and came up with exactly one article, from 1995. Similarly, I examined the APA journal Psychology and Aging. Again I looked at the journal’s history, 2514 articles, and again used the intersecting keyword ‘music’. That yielded 12 articles (4 of them mine). How about the reverse search strategy? I examined the publishing history of Music Perception and intersected that with ‘aging’ as a keyword, and came up with a grand total of 5 articles, with only 2 of those 5 about healthy aging (1 of them mine).

By any count, that is under-representation, considering that older adults are not a small proportion of the population, particularly in developed countries, and the trend is continually increasing as people live longer and family sizes shrink. According to the UK Office of National Statistics, the proportion of people 65 and older in the UK in 1996 was .15, rising to .17 in 2010, and projected to be .23 by 2035. People 85 and older are projected to constitute .05 of the UK population in 2035. The US Census Bureau estimates that the number of people 65+ in the US will nearly double, from 52 million in 2018 to 95 million in 2060.

Well, perhaps one could argue that studying the musical lives of seniors is not so important because older adults are not musically engaged. I’m sure anyone reading this blog can come up with many, many examples of older adults involved in music either professionally (Pablo Casals was conducting in the last year of his life, at age 97), avocationally (think of the age distribution of many a community choir or band) or as devotees (think of the age distribution at a concert featuring genres such as classical, jazz, or classic rock). See below for two graphs of participation rates in singing and playing a musical instrument in the US (courtesy of the National Endowment for the Arts); rates in age groups of 55 to 75 are about the same as 25 to 55 (the rates drop in the oldest cohort, although we may assume physical and cognitive limitations would be a factor).

So we have a mismatch between the representation of seniors in our studies and their representation in the population generally, and in music participation. Although we do not know all the reasons this might occur (it is hard to draw conclusions from the absence of evidence), it is of course convenient to study young adults in many situations, such as a lab situated in a typical university. It is also true, albeit somewhat circular reasoning, that we know a lot more about other aspects of young adult functioning, such as cognitive, perceptual, social, emotional, and motor processes, which we use as background and baseline to situate studies using music.

Going beyond statistical considerations, I’d like to argue that extending the age range in our studies to those 60 and older is important on substantive grounds. Older adults provide an interesting, perhaps paradoxical, set of contrasts to younger adults. On the one hand, we know there are some behavioural and biological downturns in older age. One generalisation is that the speed of neural transmission is slower with age (on average), which has consequences for the efficiency of functions such as working memory span, divided attention, and formation of new memories, as well as the obvious effect of slower motor reactions. On the other hand, the older we get, the more experience we accrue. This could take the form of explicit learning and training (for instance, the considerable additional years of formal study and performing an older musician might have over a conservatoire student) but also the knowledge gained by implicit exposure via listening to music on media, concerts, and in participatory settings like religious services. Older nonmusicians have heard much more music in these settings than their younger counterparts, and researchers miss the opportunity to examine correlates of such exposure if we limit our populations to younger listeners. The relative tradeoffs of these different kinds of ‘advantages’ can be very informative in clarifying many areas of music psychology.

As one example, I’ll cite a study I did some years ago (Halpern, Kwak, Bartlett, & Dowling, 1996). We recruited older and younger musicians and nonmusicians, and gave them a standard probe tone task: after hearing a triad, one of the 12 chromatic notes of the scale was played, and listeners had to rate goodness of fit of the probe tone to the triad context. Internalisation of the tonal hierarchy is reflected in a ratings profile wherein the tonic is rated as the best fit, followed by the other triad notes, the other diatonic notes, and lastly the nondiatonic notes. As is typically found, musicians had a more differentiated profile than nonmusicians.

Interestingly, older adults had as differentiated a profile as younger listeners, and in one sub-analysis, a more differentiated profile. But this age robustness was only evident when string tones, and particularly Shepard tones (which minimise cues to pitch height), were used. If the stimuli were sine waves, where pitch height is very salient, older adults were sometimes ‘captured’ by that dimension and rated probe tones as more similar to the context the closer in frequency they were to the tonic. This might reflect a lesser ability to inhibit an irrelevant dimension, an executive function that is known to be sensitive to age. But another interesting observation, which I’ve replicated many times (as have others, in different domains), was that age and musical background did not interact: the advantage of musicians over nonmusicians was the same in older and younger people. In other words, the cognitive and perceptual advantages conferred by younger age seem to stem from different resource pools (and possibly neurobiological substrates) than those associated with training/propensity.

Another example of where the age variable was valuable comes from a study on memory for melodies. Learning a list of new melodies for even immediate recognition is quite difficult, in comparison to learning a list of words, faces, or objects. In one study (Deffler & Halpern, 2011), we gave older and younger listeners (unselected for musical background) a list of novel tunes, and paired each with a baseline category label (Patriotic), a label plus a neutral fact (‘Played at military exercises’) or an emotional fact (‘Played at military funerals’). We thought that context might help memory, but that did not occur with the younger listeners. However, the older adults showed an interesting pattern: their recognition memory for the tunes declined in the neutral fact condition, but recovered to baseline when the fact was emotional. We interpreted this to mean that the different age groups had employed qualitatively different memory strategies: whereas the neutral fact seemed to serve as a distracting condition for the older participants (similar to the probe tone study), the emotional information served as scaffolding at encoding, presumably providing a richer context for retrieval. The young adults were obviously not relying on this kind of encoding strategy.

So this kind of investigation can inform us of how lifespan experience, career choice, and biological aging processes may interact in music processing (and other domains). And such conclusions may usefully inform many other fields. For instance, music and other arts therapists could develop more targeted activities that older adults would profit from. I haven’t yet mentioned adults with cognitive impairments, but it is obvious that a more thorough understanding of normative functioning will be very helpful in understanding impaired functioning when it comes to the arts. Another field that could benefit from this research is music education. Although we are accustomed to thinking of only children and young adults taking music lessons, there is increasing interest in encouraging people of all ages to continue, or to take up, training in music or participate in groups such as community choirs. Educators need to know how to tailor their approaches for older adults. Finally, the field of marketing needs to understand how to reach an increasingly aging population. Music is often used in adverts for both for-profit products and public services, to convey both content and affect. It pays (!) to understand how the audience is hearing the message.


Materials and Methods

Participants

Seventy-three volunteers (49 females and 24 males) participated in the present study (mean age 21.3 years, SD = 5.6). Subjects were recruited through advertisements and by word of mouth from the Udine University population. None of the participants had professional musical expertise; 65 individuals reported having some kind of practical and/or theoretical musical knowledge (52 of them at a basic level and 13 at an advanced level), while seven persons had no musical expertise at all. Written informed consent was obtained from each subject prior to participation in the experiment. The study was approved by the Ethics Committee of the University Hospital of Udine and was in accordance with the 1964 Declaration of Helsinki.

Stimuli and Measures

Three different musical stimuli containing glissandi were presented to the participants. The Shepard-Risset glissando was presented both within the aesthetic context of a musical composition and on its own. The musical excerpt of the illusion was taken from Jean-Claude Risset's Computer Suite from Little Boy, in which the second movement, Fall (duration 2�″), consists entirely of descending endless glissandi and scales (Risset, 1968/1988, track 5). The endless glissandi in this movement are partially superimposed with other shorter glissandi, a discrete Shepard scale, and other brief sound events. The version of the Shepard-Risset glissando illusion not integrated in the aesthetic context of a musical composition consisted of a 90 s “raw” Shepard-Risset glissando generated using Matlab 2010b (min. freq. 27.5 Hz, 9 components, 12″/octave). Finally, the stimulus material also included an ordinary non-looping orchestral glissando. For this purpose, we used the opening 78 s of Iannis Xenakis's Metastasis (Xenakis, 1954/2001, track 3, 0�″), which presents a long ascending orchestral glissando. Both musical excerpts were recorded from commercially available CDs. All stimuli were processed using iZotope RX 2. Stimulus preparation included, where necessary, cutting and adding linear fade-ins and fade-outs (500 ms) at the beginning and end of the excerpt. All auditory stimuli are available in the Supplementary Materials. The auditory stimuli were presented binaurally with AKG headphones (K271 mkII). The experiment was conducted using OpenSesame 2.9 software (Mathôt et al., 2012).
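
For readers who want to hear something like the “raw” stimulus, here is an approximate re-implementation in Python/NumPy under the parameters listed above (27.5 Hz minimum frequency, 9 octave-spaced components, and 12″/octave read as 12 seconds per descending octave); the original was generated in Matlab, so details may differ.

    import numpy as np

    def shepard_risset_glissando(duration=90.0, f_min=27.5, n_components=9,
                                 secs_per_octave=12.0, sr=44100):
        """Descending Shepard-Risset glissando: each component slides down in
        log-frequency, wrapping to the top when it reaches the bottom, under a
        raised-cosine loudness envelope that is zero at both extremes."""
        t = np.arange(int(sr * duration)) / sr
        span = n_components                         # total octave span
        out = np.zeros_like(t)
        for k in range(n_components):
            pos = (k - t / secs_per_octave) % span  # octave position, descending
            freq = f_min * 2.0 ** pos
            phase = 2 * np.pi * np.cumsum(freq) / sr   # integrate instantaneous freq
            amp = 0.5 * (1 - np.cos(2 * np.pi * pos / span))
            out += amp * np.sin(phase)
        return out / np.abs(out).max()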

Subjects were asked to rate the emotions they experienced while listening to the three musical stimuli on a 15-item, seven-point Likert scale. The adjectives used to describe the emotional response were: happy, tense, amazed, impatient, melancholic, meditative, joyful, agitated, nervous, irritated, sad, serene, relaxed, anxious, and disturbed. Fourteen of the fifteen adjectives were chosen from the list of 66 music-relevant emotion terms used in study 3 of Zentner et al. (2008). To these we added "disturbed," previously used in Grewe et al. (2010), as a possible indicator of psychological or physical disruption of equilibrium. All items had a seven-point response scale, with 0 indicating "not at all" and 7 "very much."

For each musical stimulus, subjects were also asked to indicate whether the piece of music had induced a disruption of equilibrium. If so, they were asked whether they also perceived a sensation of falling. Finally, to assess different domains of personality we used the Italian adaptation of the 44-item Big Five Inventory (BFI; Ubbiali et al., 2013). All items had a five-point response scale, ranging from "disagree strongly" to "agree strongly." Together with the BFI, all subjects completed a brief questionnaire in which they were asked to report their gender, age, musical expertise, and listening habits.

Procedure

The listening experiments were conducted individually on a computer (Apple iMac 8.1 with Windows XP) with headphones. Prior to beginning the study, all participants received instructions for the experiment. The instructions emphasized that answers to the questionnaires should only concern subjectively felt emotions, not the emotion expressed by the piece of music.

Participants were asked to listen attentively to the stimuli and keep their eyes closed during the presentation. Immediately after each stimulus ended, the Likert scale for emotion rating was presented on the screen. The scale was followed by the yes/no questions regarding disruption of equilibrium and the sensation of falling. Gravito-inertial disorientation is known to generate strong emotional reactions, such as fear and anxiety, because it presents a potential threat to the organism (Balaban and Thayer, 2001; Mast et al., 2014). As a result, disequilibrium is a highly disturbing experience that is unlikely to be forgotten in the short time it takes to fill out the emotion ratings.

Finally, participants were also asked to indicate whether they had liked the piece they had just heard.

After completing the listening experiment, subjects filled out the BFI. Ten individuals chose not to complete the BFI; therefore, our sample for all analyses concerning the personality test consisted of 63 subjects.

Data Analysis

The data were analyzed with Statistica 8 (StatSoft, Inc., Tulsa, OK). Across the three stimuli, 1.9% of the cells in the emotion-rating matrix were left empty. For data analysis, these empty cells were replaced with the mean value of the relevant item on the scale. The first aim of the present study was to investigate emotional responses elicited by the endless glissando illusion, presented both inside and outside of an aesthetic context (Risset's Computer Suite and the Matlab Shepard-Risset glissando, respectively), and by an ordinary glissando (Xenakis's Metastasis). The musical stimuli used in this experiment, albeit similar, differ from one another in a number of ways: while the Matlab glissando is unadorned and descending, the descending Risset Computer Suite glissando is partially superimposed with other shorter glissandi and sound events. Furthermore, contrary to the endless glissandi, the Xenakis glissando is both non-looping and ascending. In light of these differences we chose to analyze each of the stimuli separately.
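
As a sketch of that imputation step (the data layout and column names here are hypothetical), per-item mean replacement looks like this in pandas:

```python
import numpy as np
import pandas as pd

# Toy ratings matrix with a few empty (NaN) cells, as in the real data.
ratings = pd.DataFrame({"happy": [3, np.nan, 5], "tense": [np.nan, 2, 4]})

# Replace each empty cell with the mean of its own item, as described above.
ratings = ratings.fillna(ratings.mean(numeric_only=True))
print(ratings)
```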

First, repeated-measures ANOVAs separately tested the emotional response to each musical stimulus. The ANOVAs included Emotional Valence at two levels (Positive, Negative) as a within-subject factor. Emotion categories were divided into positively and negatively valenced states following the partitioning generally found in emotion research (Russell, 1980; Trost et al., 2012). Separately for each musical stimulus, the data for positive valence emotions were obtained by averaging the Likert ratings for happy, meditative, joyful, serene, amazed, and relaxed. Data for negative valence emotions were obtained by averaging the Likert ratings for tense, impatient, melancholic, agitated, nervous, irritated, sad, anxious, and disturbed.
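
A sketch of the valence scoring and the two-level comparison, run here on simulated data: with only two within-subject levels, the repeated-measures ANOVA reduces to a paired t-test, with F equal to t squared.

```python
import numpy as np
import pandas as pd
from scipy import stats

pos_items = ["happy", "meditative", "joyful", "serene", "amazed", "relaxed"]
neg_items = ["tense", "impatient", "melancholic", "agitated", "nervous",
             "irritated", "sad", "anxious", "disturbed"]

# Simulated ratings (73 subjects x 15 items) standing in for one stimulus.
rng = np.random.default_rng(1)
ratings = pd.DataFrame(rng.integers(0, 8, (73, 15)),
                       columns=pos_items + neg_items)

pos = ratings[pos_items].mean(axis=1)   # per-subject positive valence
neg = ratings[neg_items].mean(axis=1)   # per-subject negative valence

# Two within-subject levels: the RM-ANOVA F is the paired t statistic squared.
t, p = stats.ttest_rel(pos, neg)
print(f"F = {t**2:.2f}, p = {p:.3f}")
```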

Next, we sought to find out if the stimuli can also evoke a disruption of equilibrium, and specifically if they can induce a feeling of falling. To this end, we first compared frequencies of disruption of equilibrium experiences, and associated feelings of falling, occurring within the three stimuli. A series of chi-square analyses was performed. Moreover, to assess potential differences in the emotional response given to the three musical stimuli between subjects who had and had not experienced a sensation of equilibrium disturbance, a mixed model ANOVA was carried out separately for each musical stimulus with Equilibrium disturbance (Yes, No) as between-subject factor and Emotional Valence (Positive, Negative) as within-subject variable.
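
A minimal sketch of one such frequency comparison, with hypothetical counts (using scipy's chi2_contingency; for a table whose smaller dimension is 2, the Φ/Cramér's V effect size is the square root of χ²/N):

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2 x 3 table: counts of subjects reporting (yes) or not reporting (no)
# a disruption of equilibrium for each stimulus. Counts are hypothetical.
table = np.array([[30, 25, 10],   # yes, per stimulus
                  [43, 48, 63]])  # no, per stimulus

chi2, p, dof, expected = chi2_contingency(table)
phi = np.sqrt(chi2 / table.sum())  # phi (= Cramer's V when one dim is 2)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}, phi = {phi:.2f}")
```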

Finally, since previous studies have shown that various personality dimensions can influence the subjective response to music, we tested for possible associations between personality traits and emotional responses and feelings of equilibrium disturbance within the three types of glissandi. First, a series of parametric correlations were run between BFI dimensions and positive and negative emotional valence associated with each of the three musical stimuli. Second, a series of independent-sample t-tests for the five BFI dimensions were performed in order to investigate whether there were personality differences between the subjects who experienced a sensation of disturbance of equilibrium vs. the individuals who did not have such an experience while listening to the three musical stimuli.
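
A sketch of both steps on simulated data, with hypothetical column names and a single BFI dimension standing in for all five:

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-subject frame standing in for the real data set.
df = pd.DataFrame({
    "neuroticism": rng.normal(3, 1, 63),
    "positive_valence": rng.normal(3, 1, 63),
    "equilibrium_disturbed": rng.random(63) < 0.4,
})

# Personality x emotional valence: one Pearson correlation per pairing.
r, p = stats.pearsonr(df["neuroticism"], df["positive_valence"])
print(f"r = {r:.2f}, p = {p:.3f}")

# Personality differences between subjects who did vs. did not report
# a disruption of equilibrium: independent-samples t-test.
yes = df.loc[df["equilibrium_disturbed"], "neuroticism"]
no = df.loc[~df["equilibrium_disturbed"], "neuroticism"]
t, p = stats.ttest_ind(yes, no)
print(f"t = {t:.2f}, p = {p:.3f}")
```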

A 0.05 significance threshold was used in all statistical tests. In all ANOVAs, significant interactions were followed up with Bonferroni post hoc tests. In the analyses, effect sizes are reported as partial eta squared (ηp²). Effect sizes for the chi-square tests are reported as Φ.


A giant 1906 machine, and the Eurorack synth module it inspired

The original Telharmonium, 200 tons and building-sized, produced some of the first electronic music. But now it's a compact modern synth module, too.

The Make Noise/Tom Erbe Telharmonic is perhaps emblematic of how synthesizer history now folds in on itself. The module combines analog and digital control and synthesis, and pairs a well-known modular creator with one of recent years' best-known engineers and teachers of digital synthesis. Put those elements together, and you recreate… a giant electro-mechanical instrument patented in 1897, but in a form that has never existed before. That old progression from past to present to future seems so boring now. Instead, we have a wormhole of simultaneous possibilities. You know, in a good way.

But if turn-of-the-last-century pioneering instruments are being made into compact modules, we also need a different kind of history.

Kyiv, Ukraine-based composer/artist Oleg Shpudeiko – aka Heinali – recently wove together a history of the original Telharmonium and the new Telharmonic module. It’s such a lovely read that I felt it shouldn’t live only on The FaceBook. So here it is, preserved for posterity (and, if you like, further comments and thoughts).

Thanks to Oleg for this. -Ed.

Make Noise Telharmonic and electronic music history.

I've been considering writing about the Make Noise/Tom Erbe Telharmonic for some time now. There's an abundance of videos covering this module, of course, but regrettably I couldn't find any that go beyond technical demonstration to cover the module's historical and ideological contexts (except, to a certain extent, the original Make Noise demo videos). In my opinion, those contexts are the very things (apart from the hardware's great sound) that make it a truly exceptional work of tech art. My text is by no means comprehensive, but I hope to accentuate some of my points of interest.

Telharmonic is a Eurorack synthesizer module, the product of a collaboration between Make Noise and Tom Erbe. Make Noise is a US modular synth company founded by self-taught electronic musical instrument designer Tony Rolando. Tom Erbe is a computer music professor at the University of California San Diego (UCSD) and the author of the well-known Soundhack sound processing software for Mac and PC.

The module is described as a 'Multi-Voice, Multi-Algorithm synthesizer module named for the music hall considered by some to be the location of the first electronic music concerts.' So let's start with the name, because it's neither accidental nor a simple homage.

Thaddeus Cahill's Telharmonium, also known as the Dynamophone, could be described as the first synthesizer, or at least the first electronic musical instrument of major significance. Patented in 1897, the instrument was installed in Telharmonic Hall in New York in 1906. The hall was a special concert space, with an auditorium on the first floor and a basement fully occupied by the instrument's machinery. (The Mark I weighed 7 tons; the Mark II and III weighed 200 tons.)

Two of the tone rotors of the MkII Telharmonium in the basement of Telharmonic Hall, circa 1906. Image from McClure's Magazine, 1906.

Performances took place in the hall, with a performer sitting at an organ-like keyboard manual. Music emanated from loudspeakers and was simultaneously transmitted via telephone wires to subscribers around the city.

Telharmonic Hall, New York City, circa 1906.

At its core, the Telharmonium employs additive synthesis by means of dynamo-powered tone wheels: rotors with variably shaped alternators spun in a magnetic field, producing a set of sine waves. (The mechanism later became the basis of Hammond electric organs.)

One of the massive rotors that produced tones via electromagnetic field.

The bottom rotor would produce a fundamental frequency and each other rotor above it would produce a partial.

The Make Noise reinterpretation of this design subtly alludes to the tonewheel, as can be seen in Cahill's original patents.

The Telharmonium's original additive synthesis, with a sine wave fundamental and partials, is implemented in the module's H-voice. As with Cahill's tonewheels, it's possible to shape the tone by choosing sine wave partials. However, unlike the Telharmonium's original 8 alternators, the digital H-voice features 24 partials for each of its three voices. Each partial can be brought forward by moving the Centroid knob and then locked in place (so it will continue to sound louder) by pressing the H-lock button.
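
As a rough illustration of the idea (a guess at the concept, not Make Noise's or Erbe's actual algorithm), here is a toy additive voice in Python in which a 'centroid' position picks out a partial and a 'flux'-like width parameter widens the window around it:

```python
import numpy as np

def h_voice(f0, centroid, flux, duration=2.0, fs=44100, n_partials=24):
    """Toy additive voice: 24 harmonic sine partials whose levels follow
    a window centred on a chosen partial. centroid and flux are in [0, 1];
    this mapping is an interpretation, not the module's firmware."""
    t = np.arange(int(duration * fs)) / fs
    centre = centroid * (n_partials - 1)      # which partial is emphasized
    width = 0.25 + flux * n_partials          # flux widens the window
    out = np.zeros_like(t)
    for k in range(n_partials):
        amp = np.exp(-0.5 * ((k - centre) / width) ** 2)
        out += amp * np.sin(2 * np.pi * f0 * (k + 1) * t)
    return out / np.abs(out).max()

tone = h_voice(f0=110.0, centroid=0.3, flux=0.1)  # emphasize lower partials
```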

In the original Telharmonium, partials were controlled by organ-like stops near the performer's keyboard.

The Telharmonium’s organ-style keyboard manual and stops.

The three H-voices can be arranged into major, minor, or diminished chords with inversions; into a fifth, unison, or octave; and into microtonal combinations in between. Another parameter that develops the idea further is the Flux knob. In its fully clockwise position, it focuses on the particular partial chosen by the Centroid knob. Moving counterclockwise brings forward more of the neighboring partials, until all of them are present in the fully counterclockwise position.

Unfortunately, there are no recordings of the original performances, and I wonder how similar to or different from the modern module they may have sounded. The Telharmonium's tones were described as 'clear and pure.' One visitor noted the instrument's ability to synthesize the different timbres of musical instruments:

The first impression the music makes upon the listener is its singular difference from any music ever heard before: in the fullness, roundness, completeness, of its tones. And truly it is different and more perfect: but strangely enough, while it possesses ranges of tones all its own, it can be made to imitate closely other musical instruments: the flute, oboe, bugle, French horn and ‘cello best of all, the piano and violin not as yet so perfectly. Ask the players for fife music and they play Dixie for you with the squealing of the pipes deceptively perfect. Indeed, the performer upon this marvelous machine, as I shall explain later, can “build up” any sort of tone he wishes : he can produce the perfect note of the flute or the imperfect note of the piano — though the present machine is not adapted to the production of all sorts of music, as future and more extensive machines may be.

Let's now move 55 years into the future. It's 1961, and a young composer named James Tenney has produced his first computer music piece, 'Analog #1 (Noise Study),' at Bell Labs, using Max Mathews's Music III sound synthesis software.

The composition was recorded on tape, but its sounds were produced on the computer. Noise Study is considered the first recorded 'serious' computer music written by a classically trained composer. In a way, the composition shows John Cage's influence in its meditation on listening. Here's what Tenney wrote about the experience:

My first composition using computer-generated sounds was the piece called Analog #1: Noise Study, completed in December, 1961. The idea for the Noise Study developed in the following way: For several months I had been driving to New York City in the evening, returning to the Labs the next morning by way of the heavily traveled Route 22 and the Holland Tunnel. This circuit was made as often as three times every week, and the drive was always an exhausting, nerve-wracking experience, fast, furious, and “noisy.” The sounds of the traffic — especially in the tunnel — were usually so loud and continuous that, for example, it was impossible to maintain a conversation with a companion. It is an experience that is familiar to many people, of course. But then something else happened, which is perhaps not so familiar to others.

One day I found myself listening to these sounds, instead of trying to ignore them as usual. The activity of listening, attentively, to “non-musical,” environmental sounds was not new to me — my esthetic attitude for several years had been that these were potential musical material — but in this particular context I had not yet done this. When I did, finally, begin to listen, the sounds of the traffic became so interesting that the trip was no longer a thing to be dreaded and gotten through as quickly as possible. From then on, I actually looked forward to it as a source of new perceptual insights.

Gradually, I learned to hear these sounds more acutely, to follow the evolution of single elements within the total sonorous “mass,” to feel, kinesthetically, the characteristic rhythmic articulations of the various elements in combination, etc. Then I began to try to analyze the sounds, aurally, to estimate what their physical properties might be — drawing upon what I already knew of acoustics and the correlation of the physical and the subjective attributes of sound. From this image, then, of traffic noises — and especially those heard in the tunnel, where the overall sonority is richer, denser, and the changes are mostly very gradual — I began to conceive a musical composition that not only used sound elements similar to these, but manifested similarly gradual changes in sonority. I thought also of the sound of the ocean surf — in many ways like tunnel traffic sounds — and some of the qualities of this did ultimately manifest themselves in the Noise Study. I did not want the quasi-periodic nature of the sea sounds in the piece however, and this was carefully avoided in the composition process. Instead, I wanted the aperiodic, “asymmetrical” kind of rhythmic flow that was characteristic of the traffic sounds.

The instrument he designed for the realisation of his composition could produce noise bands with a certain degree of control over their parameters, such as increasing and decreasing their bandwidth. (If you're interested in the process, you can read about it in detail.)

The Telharmonic N-voice works in a very similar way, employing two band-limited noise sidebands around a central frequency set by the Tonic and Degree knobs, with the Flux knob controlling the width of the sidebands. The result is a fluttering, almost sine-like sound in the fully clockwise position and pure white noise in the fully counterclockwise position.
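
A rough sketch of the concept, collapsing the two sidebands into a single frequency-domain band for brevity (this spectral construction is my interpretation, not Erbe's code):

```python
import numpy as np

def n_voice(center, flux, duration=2.0, fs=44100):
    """Band-limited noise around a centre frequency. As flux approaches 1
    the band narrows until the result is almost a sine; near 0 it stays
    wide, toward white noise."""
    n = int(duration * fs)
    spectrum = np.fft.rfft(np.random.randn(n))      # white noise spectrum
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    width = center * (1.001 - flux)                 # flux narrows the band
    mask = np.exp(-0.5 * ((freqs - center) / width) ** 2)
    out = np.fft.irfft(spectrum * mask, n)
    return out / np.abs(out).max()

noise_tone = n_voice(center=440.0, flux=0.9)        # narrow, fluttering band
```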

Let's now skip 23 years further, to the first commercially available phase modulation digital synthesizers. Basically, the phase distortion technique appeared as Casio's way to circumvent Yamaha's patented FM (frequency modulation) synthesis. Ed.: Think the Casio CZ series. Good stuff. FM, developed by John Chowning, was capable of extraordinary timbres, but phase distortion was controllable in a unique way by contrast, and produced its own signature sounds. For added confusion, you can technically consider FM 'phase modulation.' -PK

To simplify, phase distortion is very similar to FM, though instead of frequency, the phase of the signal is modulated.

The Telharmonic P-voice features three phase-locked sine-wave oscillators: two of them are modulators, one is a carrier. Moving the Centroid knob changes the frequency ratio; the Flux knob controls the depth of the modulation.
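
Here is a tiny phase-modulation sketch in that spirit: two sine modulators phase-modulating one sine carrier. The mapping of the ratio and depth parameters to the Centroid and Flux knobs is hypothetical.

```python
import numpy as np

def p_voice(f0, ratio1, ratio2, depth, duration=2.0, fs=44100):
    """Two sine modulators phase-modulate one sine carrier."""
    t = np.arange(int(duration * fs)) / fs
    mod1 = np.sin(2 * np.pi * f0 * ratio1 * t)   # first modulator
    mod2 = np.sin(2 * np.pi * f0 * ratio2 * t)   # second modulator
    # The modulators are added into the carrier's phase, not its frequency.
    return np.sin(2 * np.pi * f0 * t + depth * (mod1 + mod2))

tone = p_voice(f0=220.0, ratio1=2.0, ratio2=3.0, depth=1.5)
```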

All three Telharmonic voices — H, P and N — can be used simultaneously in any combination, with Centroid and Flux controls affecting the spectral content of the voices, while Degree and Tonic controls affect the voice’s intervals and pitch.

Apart from the main mode of operation described above, Telharmonic has two hidden modes, switched by holding the H-lock button for several seconds.

The first one is the ASR emulation. ASR stands for analogue shift register, which is basically a more complex sample and hold circuit, or, in classical musical terms, a canon generator.

For example, a three-voice ASR would have two inputs and three outputs. The first input takes the signal, which is sampled and 'memorized' every time the second input (the clock) receives a pulse. On the first pulse, the memorized voltage appears at the first output. On the second pulse, that voltage moves to the second output, while a newly sampled voltage appears at the first. On the third pulse, the first voltage moves to the third output, the second to the second, and a new (third) voltage is sampled, stored, and sent to the first output, and so on. In this way, the process generates a simple canon, like, for example, 'Row Your Boat.'

A simple canon, in score form.
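
The shifting logic itself is only a few lines of code. A minimal software model of the three-stage ASR behavior described above:

```python
from collections import deque

class AnalogShiftRegister:
    """Three-stage shift register: each clock pulse samples the input
    and shifts the previously held values down the line."""

    def __init__(self, stages=3):
        self.values = deque([0.0] * stages, maxlen=stages)

    def clock(self, sample):
        """One clock pulse: sample the input, shift everything along.
        Returns the values now present at outputs 1..n."""
        self.values.appendleft(sample)
        return list(self.values)

asr = AnalogShiftRegister()
melody = [60, 62, 64, 60]            # MIDI notes of a simple phrase
for note in melody * 2:
    print(asr.clock(note))           # each output lags by one step: a canon
```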

While the exact origins of the ASR are debatable, the first mass-produced, commercially available ASR module was designed in the 1970s by Serge Tcherepnin, creator of the Serge synthesizers. Here's the description of the ASR module from the Serge catalog:

The ANALOG SHIFT REGISTER is a sequential sample and hold module for producing arabesque-like forms in musical space. Whenever pulsed, the previously held voltage is sent down the line to three consecutive outputs to produce the electrical equivalent of a canonic musical structure.

The Telharmonic's digital ASR features three channels, with the P, H and N voices available simultaneously, as well as six quantization modes selectable by the Interval knob: suspended chord, major triad, minor triad, octaves and fifths, chromatic, and octaves only.

The second Telharmonic hidden mode is the Spiratone, a Shepard tone generator. The Shepard tone, named after cognitive scientist Roger Shepard, is an auditory illusion of a tone that continually ascends or descends in pitch yet never moves away or resolves. The Spiratone mode was inspired by two particular compositions: Jean-Claude Risset's "Computer Suite from Little Boy: Fall" of 1968 and "For Ann (rising)" of 1969 by the aforementioned James Tenney.

Pretty much any experience with the Telharmonic can become an interaction with some of the most interesting moments and ideas of electronic music history. Cahill's Telharmonium and additive synthesis, the half-forgotten phase modulation synthesis of the 80s, Tenney's first computer music, Serge's ASR, Shepard's tones… all of these are interconnected, all housed in a small, 14hp, 30mm module.

If you have any corrections or additions for this piece, please feel free to contact me.
-Oleg

Ed., indeed, we just delved into rich territory both for this module and sound design generally. We’ll of course revise here and do more on any of these topics, if desired. (I counted at least half a dozen new stories we could write just based on some of the subplots here!) -PK




Microtone

Microtonal music can refer to any music containing microtones. The words "microtone" and "microtonal" were coined before 1912 by Maud MacCarthy Mann in order to avoid the misnomer "quarter tone" when speaking of the srutis of Indian music. [3] Prior to this time the term "quarter tone" was used, confusingly, not only for an interval actually half the size of a semitone, but also for all intervals (considerably) smaller than a semitone. [4] [5] It may have been even slightly earlier, perhaps as early as 1895, that the Mexican composer Julián Carrillo, writing in Spanish or French, coined the terms microtono/micro-ton and microtonalismo/micro-tonalité. [6]

In French, the usual term is the somewhat more self-explanatory micro-intervalle, and French sources give the equivalent German and English terms as Mikrointervall (or Kleinintervall) and micro interval (or microtone), respectively. [7] [8] [9] [10] "Microinterval" is a frequent alternative in English, especially in translations of writings by French authors and in discussion of music by French composers. [11] [12] [13] In English, the two terms "microtone" and "microinterval" are synonymous. [14] The English analogue of the related French term micro-intervalité, however, is rare or nonexistent, normally being translated as "microtonality"; in French, the terms micro-ton, microtonal (or micro-tonal), and microtonalité are also sometimes used, occasionally mixed in the same passage with micro-intervalle and micro-intervalité. [6] [15] [16] [17]

Ezra Sims, in the article "Microtone" in the second edition of the Harvard Dictionary of Music defines "microtone" as "an interval smaller than a semitone", [18] which corresponds with Aristoxenus's use of the term diesis. [19] However, the unsigned article "Comma, Schisma" in the same reference source calls comma, schisma and diaschisma "microintervals" but not "microtones", [20] and in the fourth edition of the same reference (which retains Sims's article on "Microtone") a new "Comma, Schisma" article by André Barbera calls them simply "intervals". [21] In the second edition of The New Grove Dictionary of Music and Musicians, Paul Griffiths, Mark Lindley, and Ioannis Zannos define "microtone" as a musical rather than an acoustical entity: "any musical interval or difference of pitch distinctly smaller than a semitone", including "the tiny enharmonic melodic intervals of ancient Greece, the several divisions of the octave into more than 12 parts, and various discrepancies among the intervals of just intonation or between a sharp and its enharmonically paired flat in various forms of mean-tone temperament", as well as the Indian sruti, and small intervals used in Byzantine chant, Arabic music theory from the 10th century onward, and similarly for Persian traditional music and Turkish music and various other Near Eastern musical traditions, [22] but do not actually name the "mathematical" terms schisma, comma, and diaschisma.

"Microtone" is also sometimes used to refer to individual notes, "microtonal pitches" added to and distinct from the familiar twelve notes of the chromatic scale, [23] as "enharmonic microtones", [24] for example.

In English the word "microtonality" is mentioned in 1946 by Rudi Blesh, who related it to microtonal inflexions of the so-called "blues scales". [25] In his 2019 Microtonal Analysis of "Blues Notes" and the Blues Scale, Court B. Cutting states that academic studies of the early blues concur that its pitch scale has within it three microtonal "blue notes" not found in 12-tone equal temperament intonation. [26] The term was used still earlier by W. McNaught, with reference to developments in "modernism", in a 1939 record review of the Columbia History of Music, Vol. 5. [27] In German the term Mikrotonalität came into use at least by 1958, [28] [29] though "Mikrointervall" is still common today in contexts where very small intervals of the early European tradition (diesis, comma, etc.) are described, as, e.g., in the new Geschichte der Musiktheorie, [30] while "Mikroton" seems to prevail in discussions of avant-garde music and music of Eastern traditions. The term "microinterval" is used alongside "microtone" by American musicologist Margo Schulter in her articles on medieval music. [31] [32]

Microtonal

The term "microtonal music" usually refers to music containing very small intervals but can include any tuning that differs from Western twelve-tone equal temperament. Traditional Indian systems of 22 śruti; Indonesian gamelan music; Thai, Burmese, and African music; and music using just intonation, meantone temperament or other alternative tunings may all be considered microtonal. [33] [22] Microtonal variation of intervals is standard practice in the African-American musical forms of spirituals, blues and jazz. [34]

Many microtonal equal divisions of the octave have been proposed, usually (but not always) in order to achieve approximation to the intervals of just intonation. [33] [22]
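
To make that criterion concrete, a few lines of Python can measure how closely an equal division of the octave (EDO) lands on a just interval:

```python
import math

def edo_error_cents(ratio, edo):
    """Error, in cents, of the closest step of an n-EDO tuning to a
    just-intonation frequency ratio."""
    target = 1200 * math.log2(ratio)   # the just interval, in cents
    step = 1200 / edo                  # size of one EDO step
    return round(target / step) * step - target

# How well do common divisions approximate the just fifth (3/2)
# and the just major third (5/4)?
for edo in (12, 19, 24, 31, 53):
    print(edo, round(edo_error_cents(3/2, edo), 2),
          round(edo_error_cents(5/4, edo), 2))
```

The familiar results fall out: 12-EDO has a near-perfect fifth (about 2 cents flat) but a major third nearly 14 cents sharp, while 31-EDO's third and 53-EDO's fifth come far closer to just intonation.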

Terminology other than "microtonal" has been used or proposed by some theorists and composers. In 1914, A. H. Fox Strangways objected that "'heterotone' would be a better name for śruti than the usual translation 'microtone'". [35] Nevertheless, modern Indian researchers write of "microtonal intervals called shrutis". [36] In Germany, Austria, and Czechoslovakia in the 1910s and 1920s the usual term continued to be Viertelton-Musik (quarter tone music [37]), and the type of intervallic structure found in such music was called the Vierteltonsystem, [38] [39] which was (in the mentioned region) regarded as the main term for referring to music with microintervals, though as early as 1908 Georg Capellen had qualified his use of "quarter tone" with the alternative term "Bruchtonstufen (Viertel- und Dritteltöne)" (fractional degrees (quarter and third tones)). [40] Despite the inclusion of other fractions of a whole tone, this music continued to be described under the heading "Vierteltonmusik" until at least the 1990s, for example in the twelfth edition of the Riemann Musiklexikon [41] and in the second edition of the popular Brockhaus Riemann Musiklexikon. [42]

Ivan Wyschnegradsky used the term ultra-chromatic for intervals smaller than the semitone and infra-chromatic for intervals larger than the semitone; [43] the same term has been used since 1934 by ethnomusicologist Victor Belaiev (Belyaev) in his studies of Azerbaijani and Turkish traditional music. [44] [45] [46] A similar term, subchromatic, has been used by theorist Marek Žabka. [47] Ivor Darreg proposed the term xenharmonic (see xenharmonic music). The Austrian composer Franz Richter Herf and the music theorist Rolf Maedel, Herf's colleague at the Salzburg Mozarteum, preferred the Greek word ekmelic when referring to "all the pitches lying outside the traditional twelve-tone system". [48] Some authors in Russia [49] [50] [51] [52] [53] [54] and some musicology dissertations [55] [56] [57] [58] [59] [60] use the term микрохроматика (microchromatics), coined in the 1970s by Yuri Kholopov, [61] to describe a kind of 'intervallic genus' (интервальный род) covering all possible microtonal structures, both ancient (such as the enharmonic genus, γένος ἐναρμόνιον, of the Greeks) and modern (such as the quarter tone scales of Alois Hába). This general term also made it possible to avoid derivatives such as микротональность (microtonality, which could be understood in Russian as a sub-tonality subordinate to the dominating tonality, especially in the context of European music of the 19th century) and микротоника (microtonic, "a barely perceptible tonic"; see a clarification in Kholopov [2000] [62]). Other Russian authors use the more international adjective 'microtonal', rendered in Russian as 'микротоновый' rather than 'микротональность'. [63] [64] [65] [66] However, the terms 'микротональность' [67] and 'микротоника' [68] are also used. Some authors writing in French have adopted the term "micro-intervallique" to describe such music. [69] [70] Italian musicologist Luca Conti dedicated two of his monographs to microtonalismo, [71] [72] which is the usual term in Italian, and also in Spanish (e.g., as found in the title of Rué [2000] [73]). The analogous English form, "microtonalism", is also found occasionally instead of "microtonality", e.g., "At the time when serialism and neoclassicism were still incipient a third movement emerged: microtonalism". [74]

The term "macrotonal" has been used for intervals wider than twelve-tone equal temperament, [75] or where there are "fewer than twelve notes per octave", though "this term is not very satisfactory and is used only because there seems to be no other". [76] The term "macrotonal" has also been used for musical form. [77]

Examples of this can be found in various places, ranging from Claude Debussy's impressionistic harmonies to Aaron Copland's chords of stacked fifths, to John Luther Adams' Clouds of Forgetting, Clouds of Unknowing (1995), which gradually expands stacked-interval chords ranging from minor 2nds to major 7ths. Louis Andriessen's De Staat (1972–1976) contains a number of "augmented" modes that are based on Greek scales but are asymmetrical to the octave. [78]


Where Are The Older Adults in Music Psychology Research?

A few years ago, when I was President of the Society for Music Perception and Cognition, one of my duties was to give a Presidential Address at the biennial meeting. Traditionally, the president gives some overview of the Society's business, advertising upcoming initiatives and the like, and is then given leave to discuss any topic she or he thinks is important to the field. My topic was the relative absence of older adult participants in papers published in our field. That conference was in 2013, and I don't think much has changed, despite my glitzy slides and dramatic delivery. So here I want to go over some of the points I made then and use this forum as a call to be more age-inclusive in our research area.

First, let me present some data to back up my claim that older adults are not very present in our research literature. I recently performed the following journal searches. The Journals of Gerontology, Series B (which specializes in cognitive and perceptual studies) has published 2523 articles in its history. Although no search strategy is perfect, I used the term 'music' as an intersecting keyword… and came up with exactly one article, from 1995. Similarly, I examined the APA journal Psychology and Aging. Again I searched the journal's history of 2514 articles, using the intersecting keyword 'music'. That yielded 12 articles (4 of them mine). How about the reverse search strategy? I examined the publishing history of Music Perception and intersected that with 'aging' as a keyword, and came up with a grand total of 5 articles, only 2 of those 5 about healthy aging (1 of them mine).

By any count, that is under-representation, considering that older adults are not a small proportion of the population, particularly in developed countries, and the trend is continually increasing as people live longer and family sizes shrink. According to the UK Office for National Statistics, the proportion of people 65 and older in the UK was .15 in 1996, rising to .17 in 2010, and is projected to be .23 by 2035. People 85 and older are projected to constitute .05 of the UK population in 2035. The US Census Bureau estimates that the number of people 65+ in the US will nearly double, from 52 million in 2018 to 95 million in 2060.

Well, perhaps one could argue that studying the musical lives of seniors is not so important because older adults are not musically engaged. I'm sure anyone reading this blog can come up with many, many examples of older adults involved in music professionally (Pablo Casals was conducting in the last year of his life, at age 97), avocationally (think of the age distribution of many a community choir or band), or as devotees (think of the age distribution at a concert featuring genres such as classical, jazz, or classic rock). See below for two graphs of participation rates in singing and in playing a musical instrument in the US (courtesy of the National Endowment for the Arts): rates in the age groups from 55 to 75 are about the same as from 25 to 55 (the rates drop in the oldest cohort, where we may assume physical and cognitive limitations are a factor).

So we have a mismatch between the representation of seniors in our studies and their representation in the population generally, and in music participation. Although we do not know all the reasons this might occur (it is hard to draw conclusions from the absence of evidence), it is of course convenient to study young adults in many situations, such as a lab situated in a typical university. It is also true, albeit somewhat circular reasoning, that we know a lot more about other aspects of young adult functioning, such as cognitive, perceptual, social, emotional, and motor processes, which we use as background and baseline to situate studies using music.

Going beyond statistical considerations, I'd like to argue that extending the age range in our studies to those 60 and older is important on substantive grounds. Older adults provide an interesting, perhaps paradoxical, set of contrasts to younger adults. On the one hand, we know there are some behavioural and biological downturns in older age. One generalisation is that the speed of neural transmission slows with age (on average), which has consequences for the efficiency of functions such as working memory span, divided attention, and the formation of new memories, as well as the obvious effect of slower motor reactions. On the other hand, the older we get, the more experience we accrue. This could take the form of explicit learning and training (for instance, the considerable additional years of formal study and performing an older musician might have over a conservatoire student), but also the knowledge gained by implicit exposure via listening to music on media, at concerts, and in participatory settings like religious services. Older nonmusicians have heard much more music in these settings than their younger counterparts, and researchers miss the opportunity to examine correlates of such exposure if we limit our populations to younger listeners. The relative tradeoffs of these different kinds of 'advantages' can be very informative in clarifying many areas of music psychology.

As one example, I’ll cite a study I did some years ago (Halpern, Kwak, Bartlett, & Dowling, 1996). We recruited older and younger musicians and nonmusicians, and gave them a standard probe tone task: after hearing a triad, one of the 12 chromatic notes of the scale was played, and listeners had to rate goodness of fit of the probe tone to the triad context. Internalisation of the tonal hierarchy is reflected in a ratings profile wherein the tonic is rated as the best fit, followed by the other triad notes, the other diatonic notes, and lastly the nondiatonic notes. As is typically found, musicians had a more differentiated profile than nonmusicians.

Interestingly, older adults had as differentiated a profile as younger listeners, and in one sub-analysis, a more differentiated one. But this age robustness was only evident when string tones, and particularly Shepard tones (which minimise cues to pitch height), were used. If the stimuli were sine waves, where pitch height is very salient, older adults were sometimes 'captured' by that dimension and rated probe tones as fitting the context better the closer they were in frequency to the tonic. This might reflect a lesser ability to inhibit an irrelevant dimension, an executive function that is known to be sensitive to age. But another interesting observation, which I've replicated many times (as have others, in different domains), was that age and musical background did not interact: the advantage of musicians over nonmusicians was the same in older and younger people. In other words, the cognitive and perceptual advantages conferred by younger age seem to stem from different resource pools (and possibly neurobiological substrates) than those associated with training/propensity.

Another example of where the age variable was valuable comes from a study on memory for melodies. Learning a list of new melodies, even for immediate recognition, is quite difficult in comparison to learning a list of words, faces, or objects. In one study (Deffler & Halpern, 2011), we gave older and younger listeners (unselected for musical background) a list of novel tunes, and paired each with a baseline category label ('Patriotic'), a label plus a neutral fact ('Played at military exercises'), or an emotional fact ('Played at military funerals'). We thought that context might help memory, but that did not occur with the younger listeners. However, the older adults showed an interesting pattern: their recognition memory for the tunes declined in the neutral fact condition, but recovered to baseline when the fact was emotional. We interpreted this to mean that the two age groups had employed qualitatively different memory strategies: whereas the neutral fact seemed to serve as a distraction for the older participants (similar to the probe tone study), the emotional information served as scaffolding at encoding, presumably providing a richer context for retrieval. The young adults were evidently not relying on this kind of encoding strategy.

So this kind of investigation can inform us about how lifespan experience, career choice, and biological aging processes may interact in music processing (and other domains). And such conclusions may usefully inform many other fields. For instance, music and other arts therapists could develop more targeted activities that older adults would profit from. I haven't yet mentioned adults with cognitive impairments, but it is obvious that a more thorough understanding of normative functioning will be very helpful in understanding impaired functioning when it comes to the arts. Another field that could benefit from this research is music education. Although we are accustomed to thinking of only children and young adults taking music lessons, there is increasing interest in encouraging people of all ages to continue, or to take up, training in music, or to participate in groups such as community choirs. Educators need to know how to tailor their approaches for older adults. Finally, the field of marketing needs to understand how to reach an increasingly aging population. Music is often used in adverts for both for-profit products and public services, to convey both content and affect. It pays (!) to understand how the audience is hearing the message.


Understanding the McGurk Effect: What you see and hear aren't the same thing

Amazingly, the way we see the world has an important impact on what we hear. This is neatly illustrated by one of the most famous auditory illusions: the McGurk Effect. Named after one of its discoverers, the British psychologist Harry McGurk, the effect is particularly common in conversation. It turns out that the way a speaker's mouth forms words is as important to how they are perceived as the sound itself. So if the same spoken sound is paired with two different mouth shapes, viewers may well hear two different words.

Below you can see an example of the McGurk Effect:

It's hard to recreate this in real life, since when you say a word your mouth tends to make the matching shape, but skilled tricksters can use the McGurk Effect to fool people into hearing something that isn't there. The key to what's actually happening is inside the hearer's brain. We don't just absorb sound waves and turn them directly into perception, the way a microphone turns them into a signal. Instead, our brains anticipate and interpret what they see and hear, taking shortcuts to produce the perception of sound. When what we see and what we hear don't match up, illusions can result.


Scales, steps, and intervals

Scales are typically listed from low to high pitch. Most scales are octave-repeating, meaning their pattern of notes is the same in every octave (the Bohlen–Pierce scale is one exception). An octave-repeating scale can be represented as a circular arrangement of pitch classes, ordered by increasing (or decreasing) pitch class. For instance, the increasing C major scale is C–D–E–F–G–A–B–[C], with the bracket indicating that the last note is an octave higher than the first note, and the decreasing C major scale is C–B–A–G–F–E–D–[C], with the bracket indicating an octave lower than the first note in the scale.

The distance between two successive notes in a scale is called a scale step.

The notes of a scale are numbered by their steps from the first degree of the scale. For example, in a C major scale the first note is C, the second D, the third E and so on. Two notes can also be numbered in relation to each other: C and E create an interval of a third (in this case a major third); D and F also create a third (in this case a minor third).

Pitch

A single scale can be manifested at many different pitch levels. For example, a C major scale can be started at C4 (middle C; see scientific pitch notation), ascending an octave to C5; or it could be started at C6, ascending an octave to C7. As long as all the notes can be played, the octave they take on can be altered.

Types of scale

Scales may be described according to the number of different pitch classes they contain:

  • Chromatic, or dodecatonic (12 notes per octave)
  • Nonatonic (9 notes per octave): a chromatic variation of the heptatonic blues scale
  • Octatonic (8 notes per octave): used in jazz and modern classical music
  • Heptatonic (7 notes per octave): the most common modern Western scale
  • Hexatonic (6 notes per octave): common in Western folk music
  • Pentatonic (5 notes per octave): the anhemitonic form (lacking semitones) is common in folk music, especially in Asian music; also known as the "black note" scale
  • Tetratonic (4 notes), tritonic (3 notes), and ditonic (2 notes): generally limited to prehistoric ("primitive") music

Scales may also be described by their constituent intervals, such as being hemitonic, cohemitonic, or having imperfections. [3] Many music theorists concur that the constituent intervals of a scale have a large role in the cognitive perception of its sonority, or tonal character.

"The number of the notes that make up a scale as well as the quality of the intervals between successive notes of the scale help to give the music of a culture area its peculiar sound quality." [4] "The pitch distances or intervals among the notes of a scale tell us more about the sound of the music than does the mere number of tones." [5]

Scales may also be described by their symmetry, such as being palindromic, chiral, or having rotational symmetry as in Messiaen's modes of limited transposition.

Harmonic content

The notes of a scale form intervals with each of the other notes of the scale in combination. A 5-note scale has 10 of these harmonic intervals, a 6-note scale has 15, a 7-note scale has 21, and an 8-note scale has 28. [6] Though the scale is not a chord, and might never be heard more than one note at a time, still the absence, presence, and placement of certain key intervals plays a large part in the sound of the scale, the natural movement of melody within the scale, and the selection of chords taken naturally from the scale. [6]
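
These counts are simply the number of unordered pairs among n notes, n(n − 1)/2: a 5-note scale gives (5 × 4)/2 = 10, a 6-note scale (6 × 5)/2 = 15, a 7-note scale (7 × 6)/2 = 21, and an 8-note scale (8 × 7)/2 = 28.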

A musical scale that contains tritones is called tritonic (though the expression is also used for any scale with just three notes per octave, whether or not it includes a tritone), and one without tritones is atritonic. A scale or chord that contains semitones is called hemitonic, and without semitones is anhemitonic.

Scales in composition

Scales can be abstracted from performance or composition. They are also often used precompositionally to guide or limit a composition. Explicit instruction in scales has been part of compositional training for many centuries. One or more scales may be used in a composition, such as in Claude Debussy's L'Isle Joyeuse. [7] In that piece, the first scale used is a whole-tone scale, while the second and third are diatonic scales. All three are used in the opening pages of Debussy's piece.

Scales in traditional Western music generally consist of seven notes and repeat at the octave. Notes in the commonly used scales (see just below) are separated by whole and half step intervals of tones and semitones. The harmonic minor scale includes a three-semitone step; the anhemitonic pentatonic includes two of those and no semitones.

Western music in the Medieval and Renaissance periods (1100–1600) tends to use the white-note diatonic scale C–D–E–F–G–A–B. Accidentals are rare, and somewhat unsystematically used, often to avoid the tritone.

Music of the common practice periods (1600–1900) uses three types of scale:

  • The diatonic scale (seven notes)—this includes the major scale and the natural minor
  • The melodic and harmonic minor scales (seven notes)

These scales are used in all of their transpositions. The music of this period introduces modulation, which involves systematic changes from one scale to another. Modulation occurs in relatively conventionalized ways. For example, major-mode pieces typically begin in a "tonic" diatonic scale and modulate to the "dominant" scale a fifth above.

In the 19th century (to a certain extent), but more in the 20th century, additional types of scales were explored:

  • The chromatic scale (twelve notes)
  • The whole-tone scale (six notes)
  • The pentatonic scale (five notes)
  • The octatonic or diminished scales (eight notes)

A large variety of other scales exists, some of the more common being:

  • The Phrygian dominant scale (a mode of the harmonic minor scale)
  • The Arabic scales
  • The Hungarian minor scale
  • The Byzantine music scales (called echoi)
  • The Persian scale

Scales such as the pentatonic scale may be considered gapped relative to the diatonic scale. An auxiliary scale is a scale other than the primary or original scale. See: modulation (music) and Auxiliary diminished scale.

In many musical circumstances, a specific note of the scale is chosen as the tonic—the central and most stable note of the scale. In Western tonal music, simple songs or pieces typically start and end on the tonic note. Relative to a choice of a certain tonic, the notes of a scale are often labeled with numbers recording how many scale steps above the tonic they are. For example, the notes of the C major scale (C, D, E, F, G, A, B) can be labeled <1, 2, 3, 4, 5, 6, 7>, reflecting the choice of C as tonic. The expression scale degree refers to these numerical labels. Such labeling requires the choice of a "first" note; hence scale-degree labels are not intrinsic to the scale itself, but rather to its modes. For example, if we choose A as tonic, then we can label the notes of the C major scale using A = 1, B = 2, C = 3, and so on. When we do so, we create a new scale called the A minor scale. See the musical note article for how the notes are customarily named in different countries.

The scale degrees of a heptatonic (7-note) scale can also be named using the terms tonic, supertonic, mediant, subdominant, dominant, submediant, and subtonic. If the subtonic is a semitone away from the tonic, then it is usually called the leading-tone (or leading-note); otherwise the leading-tone refers to the raised subtonic. Also commonly used is the (movable do) solfège naming convention, in which each scale degree is denoted by a syllable. In the major scale, the solfège syllables are: do, re, mi, fa, so (or sol), la, ti (or si), do (or ut).

In naming the notes of a scale, it is customary that each scale degree be assigned its own letter name: for example, the A major scale is written A–B–C♯–D–E–F♯–G♯ rather than A–B–D♭–D–E–E𝄪–G♯. However, it is impossible to do this in scales that contain more than seven notes, at least in the English-language nomenclature system.

Scales may also be identified by using a binary system of twelve zeros or ones to represent each of the twelve notes of a chromatic scale. It is assumed that the scale is tuned using 12-tone equal temperament (so that, for instance, C♯ is the same as D♭), and that the tonic is in the leftmost position. For example, the binary number 101011010101, equivalent to the decimal number 2773, would represent any major scale (such as C–D–E–F–G–A–B). This system includes scales from 100000000000 (2048) to 111111111111 (4095), providing a total of 2048 possible species, but only 351 unique scales containing from 1 to 12 notes. [8]
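
The encoding is mechanical enough to sketch in Python; the helper name scale_to_binary is hypothetical:

    # Bit i (reading left to right) is 1 when pitch class i, counted in
    # semitones up from the tonic, belongs to the scale.
    def scale_to_binary(pitch_classes):
        return "".join("1" if pc in pitch_classes else "0" for pc in range(12))

    major = {0, 2, 4, 5, 7, 9, 11}   # e.g. C-D-E-F-G-A-B
    binary = scale_to_binary(major)
    print(binary)          # 101011010101
    print(int(binary, 2))  # 2773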

Scales may also be shown as semitones from the tonic. For instance, 0 2 4 5 7 9 11 denotes any major scale such as C–D–E–F–G–A–B, in which the first degree is, obviously, 0 semitones from the tonic (and therefore coincides with it), the second is 2 semitones from the tonic, the third is 4 semitones from the tonic, and so on. Again, this implies that the notes are drawn from a chromatic scale tuned with 12-tone equal temperament. For some fretted string instruments, such as the guitar and the bass guitar, scales can be notated in tablature, an approach which indicates the fret number and string upon which each scale degree is played.
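
A similar sketch can turn the semitone offsets into note names, assuming 12-tone equal temperament and sharp-only spellings (the names CHROMATIC and spell are illustrative):

    # Map semitone-from-tonic offsets to pitch names by indexing into a
    # chromatic scale rotated to start on the chosen tonic.
    CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def spell(tonic, offsets):
        root = CHROMATIC.index(tonic)
        return [CHROMATIC[(root + o) % 12] for o in offsets]

    MAJOR = [0, 2, 4, 5, 7, 9, 11]
    print(spell("C", MAJOR))  # ['C', 'D', 'E', 'F', 'G', 'A', 'B']
    print(spell("G", MAJOR))  # ['G', 'A', 'B', 'C', 'D', 'E', 'F#']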

Composers transform musical patterns by moving every note in the pattern by a constant number of scale steps: thus, in the C major scale, the pattern C–D–E might be shifted up, or transposed, a single scale step to become D–E–F; under chromatic transposition it would instead become D–E–F♯. This process is called "scalar transposition" or "shifting to a new key", and can often be found in musical sequences and patterns. Since the steps of a scale can have various sizes, this process introduces subtle melodic and harmonic variation into the music. In Western tonal music, the simplest and most common type of modulation (or changing keys) is to shift from one major key to another key built on the first key's fifth (or dominant) scale degree. In the key of C major, this would involve moving to the key of G major (which uses an F♯). Composers also often modulate to other related keys. In some Romantic-era pieces and contemporary music, composers modulate to "remote keys" that are not related or close to the tonic. An example of a remote modulation would be taking a song that begins in C major and modulating (changing keys) to F♯ major.
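
The contrast between the two kinds of transposition can be made concrete with a short sketch; both helper functions are hypothetical and assume C major with sharp-only spellings:

    # Scalar transposition moves each note by scale steps, so interval sizes
    # may change; chromatic transposition moves by semitones and preserves them.
    C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]
    CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def scalar_transpose(pattern, steps):
        return [C_MAJOR[(C_MAJOR.index(n) + steps) % 7] for n in pattern]

    def chromatic_transpose(pattern, semitones):
        return [CHROMATIC[(CHROMATIC.index(n) + semitones) % 12] for n in pattern]

    print(scalar_transpose(["C", "D", "E"], 1))     # ['D', 'E', 'F']
    print(chromatic_transpose(["C", "D", "E"], 2))  # ['D', 'E', 'F#']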

Through the introduction of blue notes, jazz and blues employ scale intervals smaller than a semitone. The blue note is an interval that is technically neither major nor minor but "in the middle", giving it a characteristic flavour. A regular piano cannot play blue notes, but with electric guitar, saxophone, trombone and trumpet, performers can "bend" notes a fraction of a tone sharp or flat to create blue notes. For instance, in the key of E, the blue note would be either a note between G and G♯ or a note moving between both.

In blues, a pentatonic scale is often used. In jazz, many different modes and scales are used, often within the same piece of music. Chromatic scales are common, especially in modern jazz.

Equal temperament

In Western music, scale notes are often separated by equally tempered tones or semitones, creating 12 intervals per octave. Each interval separates two tones; the higher tone has an oscillation frequency a fixed ratio (by a factor equal to the twelfth root of two, or approximately 1.059463) higher than the frequency of the lower one. A scale typically uses a subset of 7 of these 12 pitches as scale steps.
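
As a worked example of that ratio, assuming the common A4 = 440 Hz reference pitch (the helper name frequency is illustrative):

    # Each equal-tempered semitone multiplies the frequency by 2**(1/12),
    # approximately 1.059463.
    SEMITONE = 2 ** (1 / 12)

    def frequency(reference_hz, semitones_above):
        return reference_hz * SEMITONE ** semitones_above

    print(round(frequency(440.0, 0), 2))   # 440.0  (A4)
    print(round(frequency(440.0, 3), 2))   # 523.25 (C5)
    print(round(frequency(440.0, 12), 2))  # 880.0  (A5, one octave up)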

Other

Many other musical traditions use scales that include other intervals. These scales originate in the derivation of the harmonic series: musical intervals are complementary values of the harmonic overtone series. [9] Many musical scales in the world are based on this system, except most of the musical scales from Indonesia and the Indochinese Peninsula, which are based instead on the inharmonic resonance of the dominant metallophone and xylophone instruments.

Intra-scale intervals

Some scales use a different number of pitches. A common scale in Eastern music is the pentatonic scale, which consists of five notes that span an octave. For example, in Chinese culture the pentatonic scale is usually used for folk music and consists of C, D, E, G and A, commonly known as gong, shang, jue, zhi and yu. [10] [11]
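
Assuming the usual correspondence to semitone offsets 0, 2, 4, 7 and 9 above the tonic, those five notes can be derived in a short sketch (all names here are illustrative):

    # Derive the pentatonic notes named above from semitone offsets on C.
    CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    PENTATONIC_OFFSETS = [0, 2, 4, 7, 9]

    names = ["gong", "shang", "jue", "zhi", "yu"]
    notes = [CHROMATIC[o] for o in PENTATONIC_OFFSETS]
    print(dict(zip(names, notes)))
    # {'gong': 'C', 'shang': 'D', 'jue': 'E', 'zhi': 'G', 'yu': 'A'}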


Staged Crime Scenes–Literature and Types

Separating Staging Behaviors by Intent


The most likely type of staging present in this sample was Burglary/Home Invasion (43.3%). The next most frequent types of scene staged were suicide (12.8%), car accident (12.1%), and accidental death (11.3%). Sexual homicides and self-defense homicides were next most frequently staged, although they were not particularly common (5% and 4.3%, respectively). It should also be noted that an approximately equal number of cases (4.3%) had an unknown goal behind the staging, and the staging behaviors may have been carried out nonspecifically or simply to confuse. Drug-related homicides, executions, and stranger attacks were equally unlikely (1.4% each), and frame-ups, natural deaths, hate crimes, and carjacking/robberies were the least likely, each occurring in less than 1% of cases (0.7% each).

Although every type of staging cannot be discussed in detail, it is possible to broadly discriminate between staged legitimate deaths and staged illegitimate deaths, as per the author’s study. This discrimination was based on a Multi-Dimensional Scaling (MDS) analysis, which indicated that the biggest difference in the actions offenders carried out at a crime scene depended upon whether they sought to make the scene appear to be a legitimate death (i.e., where a crime had not occurred) or an illegitimate death (i.e., where a crime had occurred, but they were not personally involved). It is this discriminating factor that made up the final basic typology of staging:

A. Staged Legitimate Deaths
B. Staged Illegitimate Deaths

Each type will be discussed in turn, with a case example.




Summarize Reed's study on reinterpreting mental images and Chambers and Reisberg's study of ambiguous figures. How does complexity affect the choice of representational code?

Reed: concerned that mental imagery might have some limitations and that language may help us store visual stimuli on some occasions. He tested people's ability to decide whether a specific visual pattern was a portion of a design they had seen earlier: participants viewed a figure and, after a brief delay, were shown a second pattern and asked whether it had been part of the first. If people actually store mental images that correspond to the physical objects they have seen, they should be able to call up that image and quickly determine whether the second pattern matches. However, across all stimuli participants were correct only 55% of the time, hardly better than chance. Reed argued that people could not have stored visual images for figures like the Star of David, given the high error rate on items like this one. Instead, he proposed that people sometimes store pictures as descriptions, that is, as propositional codes. This research supports the verbal, propositional-code approach.

Chambers and Reisberg: asked participants to create a clear mental image of an ambiguous figure that could resemble either a duck facing one way or a rabbit facing the other. After the figure was removed, participants were asked to find a second, different interpretation of it, which none could do; they could not consult a stored mental image. Yet when asked to draw the figure from memory, all were able to supply a second interpretation, suggesting that a strong verbal, propositional code can overshadow a relatively weak analogue code. It is often easy to reverse a visual stimulus when we are looking at a physical picture that is ambiguous; in contrast, it is usually more difficult to reverse a mental image.

It seems likely that people often use an analogue code when they are thinking about fairly simple figures, like the hands of a clock, but may use a propositional code when the figures are more complex. Verbal labels and a propositional code may be especially helpful if the visual stimulus is complex.

