
Up to Part 8 - What's in a MIDI file - more added 27th May '23

If you ever need help with MIDI, this is the place to find it!

Moderators: SysExJohn, parametric, Derek, Saul

SysExJohn (Ukraine)
Global Moderator, MIDI Specialist
Posts: 1108
Joined: Tue Dec 02, 2008 12:00 am
Where Are You Located?: Wellingborough, UK.

Up to Part 8 - What's in a MIDI file - more added 27th May '23

Unread post by SysExJohn »

It may seem strange to start my tutorials section this way, but I didn't learn MIDI intuitively. I learnt it by reading and examining the many professional MIDI files on various disks from Yamaha, and by reading many books and product manuals over the years.
Please note that the words are my own and that I don't quote from books unless I acknowledge the source. I do my best not to plagiarise.

References, acknowledgments and early influences.

Please note:
In writing the following articles numbered from (1) upwards in the MIDI Basics section, I gratefully acknowledge the following sources of information:
N.B. I do not ever quote directly from them, unless acknowledged in my text.
All articles are my own original work although undoubtedly influenced by what I have read and learned from others over the years.

Early references / influences
The manual provided with the Atari ST describing MIDI. (no longer in my possession)
The manual for the Cheetah MS6. (both synth and manual are still in my possession)
Many years of reading Sound-on-Sound magazine. I have their articles on CD from 1998 to 2004 inclusive.
The manual for the Yamaha DB50-XG.
The Beggar's DB50-XG Sys-Ex Guide. (by JRG)
Sequencing XG MIDI files (by the MIDI Muse)
System Exclusive ... A Beginner's Guide (by XSCAPE(UK))

Current references
The Complete MIDI 1.0 Detailed Specification (MMA)
The GM2 MIDI specification by the MMA

The Yamaha XG Specifications up to v2.00
Many Yamaha product manuals, too numerous to mention in full, except:
The manuals for the SW1000-XG
The manuals for the MU128.
Manuals for a range of PLG daughter boards: DX, AN, VL, PF & VH.
The manual for XGworks v3

Numerous documents for Roland Sound Canvas especially:
The manual for the Roland SC-8850.

The help files included with SQ01 and SOL2.
The manuals, both in PDF format and on-line for the Garritan VST instrument libraries; GPO4/5, JaBB3, CoMB2, CPO, CFX, IO, WI & Harps.

Influential books: (N.B. not used as references)
The Musician's Guide to MIDI by Christian Braut (very detailed >1,000 pages).
Handbook of MIDI sequencing by Dave Clackett (comprehensive but not too detailed).
The MIDI Files (second edition) by Rob Young (an excellent all rounder with some great tips).
These three books are now sadly out of print, but copies are still available, both new and second-hand, via Amazon or eBay.
-----------------
My earliest MIDI influences, and my steepest learning curve, came from developing a utility program in BASIC for the Atari ST to dump and restore patches from/to the Cheetah MS6, and from programming and adapting MIDI files to use that synth.
That, and discussing and swapping ideas about how MIDI works with others on the various bulletin boards of the late '80s and early '90s, pre-Internet.

JohnG. (27th May, 2023)

Other references may occur to me from time to time in which case I'll add them as and when.
Akai EWI4000s; Samson Graphite 49; Casio Privia PX-560; Yamaha AN1x; Novation X-Station 25;
Yamaha UW500; 2 x MU1000 + PLGs AN, DX, VL, PF & VL; VL70m; Roland SC8850; Steinberg UR22mkII;
Motu Micro Lite; E-MU 02 cardbus + 1616m. Finale 26; Sonar 7 PE and CbB; Yamaha XGworks v4; SQ01; SOL2;
Garritan GPO5; COMB2; JABB3; IO; Organs; Steinway; World; Harps; CFX Concert Grand.

Part 1 - MIDI Basics.

Unread post by SysExJohn »

An introduction to MIDI.

It struck me recently that, although I had written a basic introduction to MIDI on other forums (fora?), I didn't really have a simple introduction, for beginners, here on my own site. So here goes.

What is MIDI?
Well, the letters stand for Musical Instrument Digital Interface. But you already knew that, didn't you?

Let's just pause a moment and think what that means.
"Musical Instrument." Well, I feel pretty sure that anybody reading this will know what that means.
"Digital Interface." Now we might have hit upon a more problematic area.

The "digital" must mean that it works in a similar way to many of our modern appliances. The CD, the DVD, the smart phone and, most importantly, the home computer, be it a PC or a Mac.

And the "interface" means the mechanism (the plug, socket and cable) by which we can connect one of these devices to another MIDI device. Although, these days, that might be a cable we call a USB cable, USB didn't exist when MIDI was invented, so a special set of connectors had to be designed, as well as the way MIDI data would be sent across that connection.
(More about MIDI data in a little while.)

It's well worthwhile going back and looking at the history of MIDI; after all, it is now more than thirty years since the MIDI standards came into being.

A bit of MIDI history.
It was way back in the early eighties that one or two visionary people, most notably Dave Smith of Sequential Circuits (think Prophet synthesisers), saw that a method of connecting devices together that was common to all manufacturers would be a great idea.
In other words "a standard".
Up until that point there were many proprietary systems for interconnection, each one unique to each manufacturer.
That didn't allow, for instance, a Roland keyboard to be connected to a Yamaha sound module.

What was needed, for use by all, was a simple system that was very cheap to implement, yet robust enough to be used on stage and able to transmit over a reasonable distance.

The five pin, 180 degree DIN plug already existed and was suitable for the job and cheap enough, so that was chosen.
For many applications transmission in one direction was all that was needed, so a simple three wire system was sufficient for that.
Two wires for the signal and one, the screen, to protect those two from electrical interference.
To send MIDI data from one device to another we connect the MIDI Out connector on one device to the MIDI In connector on the other.

The internal electrical circuit for transmission and reception was again chosen for simplicity and low cost, yet sufficiently robust to do the job effectively.

Next, the transmission speed was considered.
These days, we're so used to speeds up in the mega or even giga bits per second, that we don't perhaps consider what could be done back in the early eighties.

We forget that the IBM PC had, only recently, been introduced.
It used an Intel 8088 processor and five and a quarter inch floppy disks (they really were floppy in those days).
Hard disks didn't yet exist that could fit within the cabinet of a PC.
It ran at just a few megahertz, compared to the gigahertz of a modern computer.
That's a thousand times slower.

Inside a computer of those days there was a timer that ran at 1 MHz. (A million ticks per second.)
Timing for all the connected devices was taken from that "clock".
Try dividing a million by two five times, and you'll end up with 31,250.
The electronic chip chosen to do the transmission could work successfully at that speed, and it also meant that, with a good quality cable, the MIDI data could be sent at distances of up to about 50 feet or 15 metres.
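That halving chain can be checked in a couple of lines of Python:

```python
# Halve the 1 MHz master clock five times (i.e. divide by 2**5 = 32)
# and you arrive at the MIDI transmission speed.
rate = 1_000_000
for _ in range(5):
    rate //= 2
print(rate)  # 31250
```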

So far so good(?).

MIDI data.
Now that we've decided how, physically, we're going to connect our devices together, we need to decide just what it is we want to send.
Starting from basics, when we press a key on a keyboard, we want to tell the other device what key we have pressed, C, A, whatever.
When we release the key we need to send that information too.

So we need to have, at the very minimum, two message types, let's call them Note On and Note Off.
We need to have a way of defining what note we pressed or released, that's the Note Number. Middle C is note number 60.
And one more thing, we need to define how hard we played the note, quietly or loudly, that's called Note Velocity.
(There is more, but we'll come to that later.)

The question is, using a digital system (that is, bits and Bytes,) how many Bytes do we need for each message?
We need one Byte to define the message type (Note On or Note Off), one Byte to define the Note Number, and one for the Note Velocity.
So each Note message can be transmitted using just three Bytes.
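As a sketch, here is one such three-Byte message in Python (0x90 is the Note On status value from the MIDI specification; 60 is middle C; 100 is an arbitrary velocity between 1 and 127):

```python
# A minimal three-byte MIDI Note On message.
STATUS_NOTE_ON = 0x90   # message type: Note On
MIDDLE_C = 60           # note number for middle C
VELOCITY = 100          # how hard the key was struck (1-127)

message = bytes([STATUS_NOTE_ON, MIDDLE_C, VELOCITY])
print(len(message), list(message))  # 3 [144, 60, 100]
```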

A MIDI data Byte can hold 128 different values for the note (the top bit of a data Byte is reserved, leaving seven usable bits).
A full-size piano has 88 notes.
So a Byte allows us to define notes more than an octave below and above what a piano is capable of playing.
It should be enough.

A byte is capable of defining 127 different values for Note Velocity. (Zero means no sound.)
Is that enough?
Well, when we look at e.g. an orchestral score, we normally find that just eight dynamic levels are defined.
From fff (forte fortissimo or very, very loud) through ff, f, mf (mezzo forte), mp, p (piano or quiet), pp and ppp.
(Actually, fff and ppp are seldom used.)
So 127 values allows 16 variations of each dynamic level. 16 variations of forte, 16 of piano, and so on.
It should be sufficient.
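That "16 variations per level" idea can be sketched with an illustrative (entirely non-standard) mapping from velocity to the written dynamics:

```python
# An illustrative mapping, not part of any standard: carve the 1-127
# velocity range into eight bands of roughly 16 values each.
DYNAMICS = ["ppp", "pp", "p", "mp", "mf", "f", "ff", "fff"]

def dynamic_for(velocity):
    """Return the dynamic mark for a velocity in the range 1..127."""
    return DYNAMICS[min((velocity - 1) // 16, 7)]

print(dynamic_for(1), dynamic_for(64), dynamic_for(127))  # ppp mp fff
```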

Across the MIDI interface, running at 31,250 bps, each three Byte message can be transmitted in just a thousandth of a second.
Is that quick enough that we won't notice the tiny lag in time?
Yes, it is. Most people will only notice a time lag as it approaches a hundredth of a second.
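We can check that claim: on the serial MIDI wire each Byte is framed by a start bit and a stop bit, so it occupies ten bit-times:

```python
# Time taken to send one three-byte MIDI message at the standard rate.
BAUD = 31_250
BITS_PER_BYTE_ON_WIRE = 10  # 1 start bit + 8 data bits + 1 stop bit

seconds = 3 * BITS_PER_BYTE_ON_WIRE / BAUD
print(f"{seconds * 1000:.2f} ms")  # 0.96 ms -- just under a millisecond
```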

Summary.
So now we know about the interface itself, and the basic structure and speed of MIDI data via that interface.
We have 5 pin DIN connectors labelled MIDI In and Out.
Data travels from Out to In at 31,250 bits per second.
MIDI messages are (typically) just three Bytes in length and take a thousandth of a second to travel.
MIDI messages about notes contain the message type, On or Off, the note number, and the loudness of the playing of the note.

Questions.
Q. What happens when we play a chord of, say, three notes?
A. Three consecutive Note On messages are sent: maybe C on, E on and G on.
And when we release, three consecutive Note Off messages.
Q. Does the fact that they don't get sent simultaneously matter?
A. Usually not, because actually, there is often a slight gap in how we play them, because of the different lengths of our fingers.
The difference of a thousandth of a second or so is usually unnoticeable.
AND, MIDI has a clever trick.
If the message type (Note On) of the second message is the same as the first, then there's no need to send the message type again.
We just send the note data and the note velocity, so we save one byte, and again for the third note.

So we get Note On C, E, G, rather than Note On C, Note On E, Note On G. Clever eh?
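This running status trick can be sketched in a few lines of Python (a toy encoder, not a full MIDI implementation):

```python
# Running status: once a status byte has been sent, later messages of the
# same type (on the same channel) can omit it and send only their data.
def encode(messages):
    """Encode (status, data1, data2) triples, omitting repeated statuses."""
    out, last = [], None
    for status, d1, d2 in messages:
        if status != last:
            out.append(status)
            last = status
        out += [d1, d2]
    return out

chord = [(0x90, 60, 100), (0x90, 64, 100), (0x90, 67, 100)]  # C, E, G on
print(len(encode(chord)))  # 7 bytes instead of 9
```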

Controlling the sound.
Not only do we want to send "note data", we may also want to send control data.
Just what is meant by "Control" data?
Many instruments, including a piano, have pedals to control the way it sounds, often referred to as loud and soft pedals.
We might have similar pedals connected to our electronic keyboard, or one to control volume, sometimes referred to as a "swell" pedal, as on a church organ.
Many modern instruments have two rotary controls, or a joystick built in, to the left of the keys.
These can be used to change the sound in some way, often bringing in vibrato, or altering the pitch of the note being played, Pitch Bend.

These are also sent in just three bytes.
One to define the message type (a control message), the second to define which control it is, and the third to define the "value" of the control.

As we press the pedal a "pedal on" message is sent, and a "pedal off" when we release it.
With a volume pedal, a whole series of messages may be sent, one after another, to mimic the current pedal position.

The Pitch Bend control has a message type all of its own.
Because human hearing is very sensitive to changes in pitch, we need more than just 128 numbers to represent the change in pitch.
So we use one byte for the message type and two bytes for the value. (Still just three bytes in total.)
This gives us NOT 128 plus 128 values, but 128 times 128, or more than 16 thousand values for pitch.
A very sensitive control.
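The two data Bytes combine like this (the least significant Byte is sent first, and 8,192 is the centre, "no bend" position):

```python
# Pitch Bend packs a 14-bit value into two 7-bit data bytes,
# giving 128 * 128 = 16,384 possible positions.
def pitch_bend_value(lsb, msb):
    """Combine the two 7-bit data bytes (LSB sent first) into one value."""
    return (msb << 7) | lsb

print(128 * 128)                     # 16384 possible values
print(pitch_bend_value(0x00, 0x40))  # 8192, the centre (no bend)
```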

Many modern keyboards have dozens of rotary knobs, faders and buttons, and as we move these so some sort of controller message may be transmitted from the keyboard to a MIDI attached sound module.

So, there are many more controller messages, but I'm not going to detail them all here, it's the gist of what is happening, as we play our keyboards, that I want you to grasp.

But a few final controls just to finish off.
Clearly, at the beginning of a song we need to tell the MIDI device what instrument sound is required, a piano or a saxophone or whatever.
In order to do this a "Program Change" message needs to be sent right at the start.
If we send no such message then the default will be a grand piano.

However, gone are the days when a keyboard had just 128 sounds within it.
Nowadays we might have close to a thousand to choose from.

This choice is effected by sending one or more "Bank Select" controller messages before the Program Change message(s) to choose the appropriate variation of sound that we require.
Bank Selects and Program Changes are all three Byte messages too.

A little more data.
Just to confuse matters a little, I haven't told you ALL about what is in a message.
In actual fact we only need half a byte to contain the "Message Type".
The other half is used for something known as the MIDI "Channel".

So what's a channel?
Often, with a keyboard, we can define a keyboard split.
So the lower, left hand side of the keyboard may play the bass and/or the chords with one instrument type, perhaps piano, and the upper, right hand side the melody with a different sound, maybe a flute.

To do this we need to send the note commands to a separate part of the sound module for each instrument type.
So we have, in advance, to send a program change for flute to channel 1, and one for piano to channel 2.

The note on messages for the chords will have channel 2 included in them, the melody notes, channel 1 included.
We can have a maximum of 16 channels per MIDI interface.
The drums (percussion) go on channel 10.
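The earlier point about the status Byte splitting in half can be shown directly (the 0x9_ Note On value comes from the MIDI specification):

```python
# The upper four bits of a status byte give the message type, the lower
# four the channel (0-15 inside the byte, spoken of as channels 1-16).
status = 0x91                      # Note On (0x9_) on channel 2
message_type = status & 0xF0       # 0x90 -> a Note On
channel = (status & 0x0F) + 1      # low nibble 1 -> channel 2
print(hex(message_type), channel)  # 0x90 2
```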

What about MIDI files?
MIDI files clearly record all these messages, PLUS, as it's stored data, we need to add timing information so the device or program playing back the messages knows exactly when to send the messages.

So, in between the messages "Delta Time" is inserted.
Delta time is not like a conventional clock, it is based on the speed and timing of the tune being played, hence its unusual name.
It is the time between the last message and the next one.
It allows the notes to be played back with great accuracy.
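A small sketch of the timing arithmetic; the resolution of 480 ticks per quarter note and the 120 b.p.m. tempo are illustrative values I've chosen, not anything fixed by the format:

```python
# Delta times count ticks of the file's musical clock. Converting ticks
# to real time needs the file's resolution (ticks per quarter note) and
# the current tempo.
PPQ = 480          # ticks per quarter note (illustrative)
TEMPO_BPM = 120    # beats per minute (illustrative)

seconds_per_tick = 60 / (TEMPO_BPM * PPQ)
delta_ticks = 240  # an eighth note at this resolution
print(delta_ticks * seconds_per_tick)  # 0.25 seconds until the next event
```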

Clearly, the device playing the file, often a sequencing program, needs to know the tempo and various other bits of information in order to play back and display notation accurately. So the file will also contain Meta Data: for example the tempo (e.g. 110 b.p.m.), the key the piece is in (e.g. C#) and the time signature (e.g. 3/4 time).
And that, together with the note on and off data plus control data constitutes the basic contents of a MIDI file.

Lastly, the standard.
All these things, and many more, had to be written down and agreed between the many musical instrument and computer manufacturers.
An organisation called the MIDI Manufacturers Association, the MMA, was created to do this.
They have a web presence on which much of the technical information can be found.
Warning! Be prepared to be baffled by technology if you visit it.

Hopefully, we now have an overview of what MIDI is.
There is a lot more, but it starts to get complicated!

If it seems complicated at first reading, try reading it again a bit more slowly.
Try a pause between each section.
If there are any questions don't be afraid to ask.

© John L. Garside, 2015.

Part 2 - Understanding MIDI

Unread post by SysExJohn »

Article 2. Understanding MIDI.

Welcome to the second of a series of articles on techie topics phrased in a way that novices can understand. Those with a deep technical knowledge may criticise them as an oversimplification, but that's the purpose: to let those new to the subject 'get a feeling' of what it's all about, i.e. clarity before precision (to quote a former colleague).

The purpose of this article is to explain what a MIDI file is. Then I'll go on to explain wav and mp3 files later. Suggestions for future articles are welcome, e.g. MIDI data, sysex, GS, XG.

1. MIDI
So what is a MIDI file (filename.mid) you may well ask? Put as simply as I can, it is a series of instructions that an electronic keyboard, sound module or computer can understand.

The file contains a series of messages that conform to the MIDI standard.

When one of these devices follows the sequence of instructions in the file it will perform the music using its inbuilt 'voices'. By a “voice” I mean e.g. piano, guitar, trumpet, flute, drums or sound effect etc. that is (or can be) stored within the instrument. So I'll hear music just like a piano with a piano roll player or, more simply, like a music box, except possibly with lots of 'voices' simultaneously.

Let me paint a little scenario. A musician sits down at a piano keyboard. In front of him/her he places a 'piece of music'. He plays the music on the piano and it sounds good. I ask him how did you know which keys to press and he explains that the dots and their different shapes tell him which keys to press and for how long. There are other things on the sheet that explain how loud, how fast, and whether he should be playing black instead of white notes. It’ll also frequently tell him when to get louder or softer, faster or slower and so on.

Some time later I hear him play the same piece but this time on a church organ. It sounds the same but completely different. (Eh!) When I look he's using the same piece of music. So the notes that are being played are the same it’s just the sounds that are different. A little later he invites me over to listen to his 'keyboard' at home. (He's worked out that I'm very interested but completely stupid when it comes to music.) He plays the piece in several different ways and in between each repetition does some obviously very clever things with buttons and switches that change the sound coming out of the electronic keyboard instrument.

Then he presses a button, stands up and walks away from the instrument. Blow me if it doesn't play all that he did, exactly as he did, with all the different instrument sounds playing together, drums, piano, guitar, bass and so on. It even sounds as if he's playing in a large hall. Inevitably I ask him "how?"

"Well the keyboard has something called a sequencer inside it. This sequencer takes all the information I put into it like the voice, for example piano, effects like reverberation, and each note I play, loud or soft etc., and it stores all this information in its memory".

"If I buy a keyboard can I play that music?" I ask. He puts a floppy disk into a slot in his keyboard, presses a button, types in some characters, there's a whirring sound which stops after a few seconds, he takes the floppy out and gives it to me and tells me that all he did is now stored on it in MIDI format. He tells me too that it won't necessarily sound exactly the same, unless I buy the same keyboard as he has, but there will definitely be drums, piano, guitar etc. all played in exactly the same way, probably sounding as though it was played in a large hall.

"Ah! So you've converted the sheet music exactly the way you played it into computer data?" I ask. "Yep! But with all the voice information and effects too."

"So may I just check my understanding with you?" - "Uh Huh."

Q. A MIDI file records all of the notes you play into your keyboard exactly as you play them, not necessarily how they're written on the music sheet?
A. Yes, all your mistakes are faithfully recorded but, and it's a big BUT, you can go back and change them, if they're wrong, just as you can when you make a spelling mistake in a word processed document.

Q. Can I make big changes too?
A. If you want to change the instrumentation, the key it's played in, add another instrumental part, even the whole interpretation. It can all be done. How much depends, to a large extent, upon the capabilities of the keyboard or the computer software you're using and, of course, your knowledge.

Q. It can record other things too?
A. You can tell the sequencer what key you're playing in, say "C major", what rhythm, for example waltz time (3/4), the song title, what effects to use, for example reverberation, and how much reverb to use for each track. Loads of other things too; even the song lyrics can be typed in.

Q. Sorry, what's a track?
A. Think of a MIDI file as being a bit like a multi-track tape recorder where you can record a different musical instrument or person’s voice on a separate track then mix all the tracks together at different levels to make the final recording. A MIDI file allows you to do the same, so each track has the instrument type, note data, effects etc. for a different instrument. As a minimum, these days there will be 16 tracks, one of which (track 10) is almost always the drum track.

Q. Can I record a real instrument like a trumpet with the MIDI data?
A. No. Not within a standard MIDI file. But there are other file formats that allow a pointer to another kind of file that could have been recorded at the same time via a microphone. The trumpet sound is captured separately and held (digitally) in a file called a wave file (filename.wav). More of this later.

Q. So there is no sound recorded within a MIDI file?
A. No, just a series of instructions (data) telling the instrument what and how to play.

Q. Is there anything else I should know about MIDI.
A. Yea! Lots! But as a beginner probably just the following
1. MIDI stands for Musical Instrument Digital Interface.
2. There is an organisation called the MIDI Manufacturers Association, the MMA, and together with the MIDI Standards Committee they set rules for MIDI.
3. MIDI not only defines the format of a MIDI file but also the messages that go inside it and the electrical plug and socket as well as the electronics that allow us to connect our instruments and computers together.
Perhaps more on this subject later.

The next article will look at wave (.wav) files, what they contain and how they're different from a MIDI file.

© John L. Garside, 2007.

Part 3 - What is a wave file?

Unread post by SysExJohn »

Article 3. What is a wave file?

Next in the series of the first three articles, I'm going to describe what a wave file (filename.wav) is and how it differs from a MIDI file. Again I'm going for clarity before precision.

What is a wave file? Again put as simply as I can, a wave file contains a series of numbers (thousands and thousands of numbers) that represent, pretty accurately, a sound, a series of sounds, someone talking, a song, a piece of music, etc. An audio CD contains a number of wave files.

Now, I imagine that doesn't help many of you very much, so let me try and amplify upon it. However, to do so I'm going to have to go into a little bit of technical detail but, I hope, not too much. What I do need to talk about, as simply as I can, is a process known as "sampling".

So when we talk about sampling in the context of audio, what do we mean? We mean taking a measurement of the amplitude (electrical loudness) of a waveform (maybe the input from a microphone) at a split second in time and then representing that in a computer (or other digital instrument) as a number. Then taking another measurement a fraction of a second later (turning that into another number), and again and again and so on.

If we continue to do this for a minute or so and then get the computer to write the data it has stored to a file, we have just created a wave file. In the same way we can instruct the computer to sample and save the sound coming out of a sound module or a keyboard. So what the computer has stored is a series of numbers that follows the waveform up and down.

If we pass this file to another device, it should be able to play it back (convert it from digital back to an analogue wave) and we will hear the original music. What I've just described, in very simple terms, is how an audio CD is made.
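The process just described can be sketched as a toy Python example (nothing like a real ADC, just the idea): measure a 440 Hz sine wave 8,000 times a second and keep each measurement as a small whole number.

```python
# A toy illustration of sampling: one second of a 440 Hz sine wave,
# measured 8,000 times a second, each measurement stored as a signed
# 8-bit-style integer in the range -127..127.
import math

SAMPLE_RATE = 8_000
FREQ = 440

samples = [round(127 * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
           for n in range(SAMPLE_RATE)]   # one second of "audio"
print(len(samples), min(samples), max(samples))  # 8000 -127 127
```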

We probably don't realize it, but we all use this technology every day when we make a telephone call. Somewhere in the circuit, usually at the telephone exchange (or c/o for the Americans amongst us), there's an analogue to digital (A/D) converter, or ADC. In this case it samples our voice (the electrical signal coming down the line from the microphone in our telephone) 8,000 times a second (i.e. the ADC takes 8,000 measurements every second) and uses 1 Byte (8 bits) to store each sample (or measurement).

It then sends these Bytes across the network to the exchange (or c/o) nearest to the person we're calling, where a digital to analogue converter (DAC) converts the Bytes back to an analogue signal, which it sends down the line to the person we're calling. So our voice is flowing through the telephone network (8 bits x 8,000 samples) at 64,000 bits per second. And it's mono, not stereo. This is the international standard for digital telephony.

So far, so good?

For creating a CD there must clearly be some sort of international standard that defines how frequently we must sample the analogue signal, and how many Bytes to use to represent each sample, so that it accurately represents the original sound. For CD quality music we must measure the signal 44,100 times per second, and each measurement is stored using 2 Bytes (16 bits) of computer memory. That allows us to divide our original signal into 65,536 different levels: actually 32,767 levels positive, 32,768 negative, and zero.

Now for some arithmetic.

Sample at 44,100 times per second; each sample is 2 Bytes and this time it's stereo (left and right channels), so x 2. 44,100 x 2 x 2 = 176,400 Bytes per second or, x 60 seconds, 10,584,000 Bytes per minute. That's right, ten and a half million Bytes per minute! At that rate a CD's 700-plus MegaBytes holds roughly an hour of high quality audio (audio CDs actually manage 74 minutes or more, because audio is written with less error-correction overhead than computer data).
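Both the telephone and CD figures can be checked in a couple of lines:

```python
# The data-rate arithmetic for telephony and CD audio.
phone_bps = 8 * 8_000              # 8 bits x 8,000 samples/sec, mono
cd_bytes_per_sec = 44_100 * 2 * 2  # 2-Byte samples, two channels
cd_bytes_per_min = cd_bytes_per_sec * 60

print(phone_bps)         # 64000 bits per second
print(cd_bytes_per_sec)  # 176400 Bytes per second
print(cd_bytes_per_min)  # 10584000 Bytes per minute
```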

Q. So how does a MIDI file differ from a wave file?
A. A MIDI file holds a series of instructions that tell a computer or sound module or keyboard what note to play. How loud, what length, which voice and so on. A wave file, on the other hand, is the sound itself but stored in a digitized format.

Q. Can I convert from one format to the other?
A. Strictly speaking, no, not convert. But I can tell my MIDI file player to play the MIDI file then record the resulting sound output as a wave file and the result should be fine depending upon the quality of the sound module that I use. (I'd call that a process not a conversion but maybe I'm splitting hairs.)

Q. What about the other way?
A. Well, how do I convert the sound from a CD into instructions for a keyboard, especially when there may be lots of instruments playing at the same time? Pretty nearly impossible, I'd say. However, my software sequencer has a mechanism called "voice to score". It allows one voice (human or instrument) to be converted to MIDI notes using a microphone input. Using this it is possible, gradually, to build up a MIDI file, but with quite a bit of human intervention. (Again, I'd call this a process, not a conversion.)

Q. You've mentioned two standards for sampling, are there more?
A. Yes, in fact lots. It depends what the medium is that you're sampling for, i.e. telephone, AM radio, FM radio, CD, etc. Two decisions need to be made. The first is what is the frequency range of the audio signal and the second is the dynamic range (the difference between the quietest and the loudest sounds). These two dictate first the sampling rate and second the size (the number of bits or Bytes) of the sample. For the telephone I don't need a wide frequency range nor a huge difference between loud and soft, whereas for a CD I do. Clearly if I'm creating a sound for AM or FM radio system it's somewhere in between.

Q. People mention audio cards and 24/96 in the same breath. What is that?
A. If you're recording sounds that you will eventually mix together to form a CD then the quality of the original needs to be better before you start to process it with digital effects etc. So a high quality audio card of today will sample at 96,000 times per second and use 3 Bytes (24 bits) to store each sample. In the final mix it will be converted down to 16/44 for writing to CD.

If you're feeling very brave you might explore this page:
http://en.wikipedia.org/wiki/Sampling_% ... cessing%29

Looking back over this article I wonder whether it's too much!
Feedback and questions of clarification are always welcome.

© John L. Garside, 2007.


Hi Michael,

Thanks for your enquiry. I thought when I tackled this one it might provoke a few more questions. I'll try to come up with some answers that may make sense to you.

Q. Why is the sample rate of, say, 44.1k samples per second described as 44.1 kilohertz? What is the hertz bit?

A. Well, strictly speaking, we shouldn't refer to sampling using the term Hz! The unit of Hertz is really there to define the frequency of wave forms like sound or radio waves etc. Sadly, like many technical terms, it has been picked up and used by people who may not have the scientific or engineering background to understand the precise meaning. Another term, Baud, has suffered the same fate.

But to answer your real question, Hertz is a measurement of frequency i.e. cycles per second. Just like many other measurements it is named after a notable scientist from the past. In this case a German physicist called Heinrich Rudolf Hertz who made important contributions in the field of Electromagnetism. We use amps, for electrical current, named after André-Marie Ampère, this time a French physicist and volts named after an Italian physicist Alessandro Volta. Most of these people are from the 18th or 19th century.
Try this reference: http://en.wikipedia.org/wiki/Sound

Q. Also if using a bit rate of say 16 bits, this seems to be equal to 2 bytes. Thus are there 8 bits in one byte?

A. Yes, quite right, 8 bits to the Byte. I use capital letters to distinguish one from the other when abbreviated. So 8kbps is kilobits per second but 8KB is 8 kilo Bytes (maybe the size of a small MIDI file).
Here's another Reference: http://en.wikipedia.org/wiki/Bytes

Q. Also, I assume the bit rate refers to the accuracy with which each sample is recorded in binary number format (ones and zeroes), thus the higher the bit rate the longer the binary number and the more accurate the sample and the greater the fidelity when converted back to analogue/sound---is this correct?

A. Mmmm. Not quite. First, "bit rate" means transmission speed i.e. bits per second. It refers, in this case, to the rate at which data will have to be written to the hard disk. It comes from multiplying the frequency of sampling (samples/sec.) by the size of each sample (1, 2 or 3 Bytes) expressed in bits (8, 16 or 24). So if I sample at 44.1K samples/second and take 2 Bytes (16 bits) per sample I get a bit rate of 44,100 x 16 = 705,600 bps (or bits per second). If I increase the size of the sample to 3 Bytes then I get 44,100 x 24 = 1,058,400 bps. Similarly, increasing or decreasing the frequency of sampling will alter the bit rate up or down.
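The sums above are simple enough to check in a few lines of Python (a sketch of my own, with a made-up `bit_rate` helper):

```python
# Bit rate for a single (mono) stream: samples per second x bits per sample.
def bit_rate(samples_per_sec, bytes_per_sample):
    return samples_per_sec * bytes_per_sample * 8

print(bit_rate(44_100, 2))  # 705,600 bps with 16-bit samples
print(bit_rate(44_100, 3))  # 1,058,400 bps with 24-bit samples
```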

So really we should stick with samples per second, in this case, because we need to use Hz and KHz to describe the upper and lower limits of the frequency of the sound wave that we are sampling.

Increasing the frequency of sampling allows us to follow the wave more accurately in time; increasing the number of bits per sample allows us to follow the wave more accurately in amplitude. Have a look at the diagrams on this page:

http://en.wikipedia.org/wiki/Discrete-time.
They show how a wave is converted and the resultant bit map.

Imagine that we increased the number of vertical arrows (in the upper diagram) so that there's one more arrow between each pair of existing arrows. We've doubled the sample rate. The wave has now been represented more accurately in time but we're still only (in this example) using one byte per sample. Now imagine we were to increase the horizontal dotted lines (on the lower diagram), fitting an extra one between each pair. We've doubled the number of amplitude levels, adding one more bit to each sample, and each tiny increment (or decrement) in the wave is more accurately recorded.

The corollary of this is that we can also record higher frequencies by increasing the sample rate and a greater dynamic range by increasing the sample size.
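You can demonstrate the amplitude side of this with a small Python experiment (my own sketch; `quantise` and `max_error` are invented names): round a sine wave onto a limited number of levels and see how the worst-case error shrinks as we add bits per sample.

```python
import math

def quantise(x, bits):
    """Snap x (in the range -1.0 to 1.0) onto the nearest of 2**bits levels."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)
    return round((x + 1.0) / step) * step - 1.0

def max_error(bits, n=100):
    """Worst rounding error over one cycle of a sine wave, sampled n times."""
    return max(abs(math.sin(2 * math.pi * i / n)
                   - quantise(math.sin(2 * math.pi * i / n), bits))
               for i in range(n))

print(max_error(4) > max_error(8))  # True: fewer bits -> bigger amplitude errors
```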

Does this help? I do hope I haven't just confused you more. Please do ask.

Q. Finally, I'm not sure I understand your reference above to +ve and -ve?
A. Sorry, I did wonder whether to spell it in full. +ve is electrical positive and -ve is electrical negative. The terminals on a battery are often marked + and -. Sound waves fluctuate up and down above and below zero volts.

Keep the questions coming,
Best regards,
JohnG.
Akai EWI4000s; Samson Graphite 49; Casio Privia PX-560; Yamaha AN1x; Novation X-Station 25;
Yamaha UW500; 2 x MU1000 + PLGs AN, DX, VL, PF & VL; VL70m; Roland SC8850; Steinberg UR22mkII;
Motu Micro Lite; E-MU 02 cardbus + 1616m. Finale 26; Sonar 7 PE and CbB; Yamaha XGworks v4; SQ01; SOL2;
Garritan GPO5; COMB2; JABB3; IO; Organs; Steinway; World; Harps; CFX Concert Grand.

Part 4 - MIDI timing


Article 4 The MIDI clock.

Before I submit my article on mp3 etc. I thought I'd delve into another area more associated with the name of the forum. What I'm going to do is to try to explain what MIDI clock is, why it's important to us and how it is or can be used. Once again I'll try to keep my explanations as simple as I can, even to the point where some precision may be lost.

So what is the MIDI time clock? It's a way of expressing within a MIDI file and in the sequencer we are using, exactly where we are within a piece of music we are playing or preparing.

Firstly we shouldn't confuse the MIDI clock with something called SMPTE (Society of Motion Picture and Television Engineers) timing or the MIDI time code.

What's the difference? Well, MIDI clock is purely focussed upon playing notes at the correct time within any bar (or measure) of music and with their relationship to other notes. SMPTE or MIDI time code, on the other hand, is concerned with starting and stopping (synchronising) the piece of music with relation to a piece of film or television.

How can a layman tell the difference? Quite easily actually: MIDI has 3 sets of numbers separated (usually) by full stops e.g. 01.01.000 or 04.03.240 whereas SMPTE (Time code) has 4 sets e.g. 00.00.00.00 or 01.03.27.16.

What do the numbers mean? In MIDI they mean, starting at the left, first the measure or bar number, second the beat within the bar and third a subdivision of each beat into a smaller unit usually referred to these days as a tick (previously called a pulse).

These 'ticks' are always subdivisions of a quarter note (or crotchet if you prefer) represented on sheet music as a black dot with a vertical line usually rising from the right edge (sometimes descending from the left edge) and with no curlicue (flag) at the top. A piece of music in 'common time' would often have a 'C' or the numbers '4/4' written at the beginning. i.e. each measure contains four notes, each a quarter note long. More on this later.

SMPTE timing, again from the left, means 1st hours, 2nd minutes, 3rd seconds and 4th the frame number. Films and television have different standards in different countries, so you can have 24, 25, 29.97 or 30 frames per second depending upon where it was created. (Yea, I know, 29.97 frames per second? What kind of standard is that?) So, for most of us, we can ignore SMPTE timing and that's what I'm going to do for the rest of this article.
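The two formats are easy to tell apart at a glance, as this little Python sketch shows (the helper names are my own invention, not anything standard):

```python
# Format the two kinds of position described above: three groups for MIDI
# (bar.beat.tick), four for SMPTE (hours.minutes.seconds.frame).
def midi_position(bar, beat, tick):
    return f"{bar:02d}.{beat:02d}.{tick:03d}"

def smpte_position(hours, minutes, seconds, frame):
    return f"{hours:02d}.{minutes:02d}.{seconds:02d}.{frame:02d}"

print(midi_position(4, 3, 240))      # '04.03.240'  -> three groups: MIDI
print(smpte_position(1, 3, 27, 16))  # '01.03.27.16' -> four groups: SMPTE
```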

Back to the MIDI clock and, more specifically, to ticks (or pulses). I'm assuming that we all know that 1/4 notes can be divided in half giving 1/8th notes, 1/8ths can be divided in half giving 1/16ths and so on down to 1/64ths? Yeah? (Actually even 1/128th notes, but I've never come across any piece of music with them.)

So, one would think, if we could divide each quarter note by sixteen (i.e. 16 ticks per quarter note (tpqn, once called ppqn)) then that would be adequate to express our music? Not so, because we've forgotten dotted 1/64th notes. So we need another subdivision, i.e. at least 32 tpqn. Oh-ohh! I've forgotten triplets. So I need to be able to subdivide a 1/64th note by 2 or by 3. (So that makes 96 tpqn (2 x 3 x 16)). In fact 96 tpqn should define the worst resolution we should use. There's an argument that we should use a much finer division than this to give our computerised music a much more human feel.
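Here's the divisibility argument worked through in Python (a sketch of my own, using exact fractions so rounding can't fool us): at 96 tpqn, plain, dotted and triplet 1/64th notes all come out as whole numbers of ticks.

```python
from fractions import Fraction

TPQN = 96  # ticks per quarter note, the minimum argued for above

def ticks(note_as_fraction_of_quarter):
    """Length in ticks; only representable if it's a whole number."""
    t = Fraction(TPQN) * note_as_fraction_of_quarter
    return t if t.denominator == 1 else None

print(ticks(Fraction(1, 16)))                   # 1/64 note        -> 6 ticks
print(ticks(Fraction(3, 32)))                   # dotted 1/64 note -> 9 ticks
print(ticks(Fraction(1, 16) * Fraction(2, 3)))  # 1/64 triplet     -> 4 ticks
```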

A lot of musicians give expression to their music by playing a note or a drum beat just before (driving) or just after (lazy) the beat. How much sooner or later? Well, you choose, but I'd say just a small fraction. So, let's say a quarter of what we had previously. That makes 384 tpqn (96 x 4). Mmm! That sounds fine enough. But there's another consideration too. Oh no, this has been hard enough!

Many of us are using some kind of module to create the sounds we play along to. Often from Roland or Yamaha but from other manufacturers too. Just before we start a song playing it's a good idea to send a MIDI reset to the module to put all the controllers back to their default positions and this is usually embedded in the MIDI file. With older units this reset can take up to 200 milliseconds to perform (1/5th of a second). So it would be a good idea if we could use the ticks in MIDI timing to get the resets and subsequent set up commands sent in the right order and time so that the module obeys them all correctly. If we use 480 tpqn then each tick takes almost exactly 1 millisecond to perform at 4/4 time and at 120 beats per minute (my sequencer sets this as the default).
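That "almost exactly 1 millisecond" claim is easy to verify with a quick Python calculation (my own sketch, with a made-up `tick_ms` helper):

```python
# Duration of one tick at a given tempo: one quarter note is 60,000/bpm
# milliseconds, divided across tpqn ticks.
def tick_ms(bpm, tpqn):
    quarter_ms = 60_000 / bpm
    return quarter_ms / tpqn

print(round(tick_ms(120, 480), 3))  # 1.042 ms per tick: near enough 1 ms
```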

When setting up MIDI files for my own use, all the initial stuff like sequence name, beats per minute (120), beats per measure (4/4) etc. is sent in the first 10 ticks i.e. 01.01.000 to 01.01.009 then a MIDI reset at 01.01.010. The next command (for me an XG reset which takes 50 milliseconds to perform) goes at 01.01.240 then subsequent commands like system reverb settings and so on go from 01.01.300 to 01.01.479. The next three beats i.e. 01.02.000 to 01.04.479 are used to send all the channel messages i.e. instrument settings, channel expression, channel volume, pan etc.

It's a good idea to try to make sure that each MIDI command for voice change or anything similar is separated from the next by at least one, but preferably five, ticks. It gives the sound module time to perform it. In the first tick of the second measure the time settings for the song can be placed (02.01.000), so maybe 106 bpm, 3/4 time, and the first note if appropriate. For me the music notes always start in measure 2, never earlier. Another benefit is that the music always starts after a 2 second pause, plus any pause in the first measure of the song itself.
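My set-up layout from the last two paragraphs can be jotted down as plain data; this Python sketch (names and structure my own, not any sequencer's format) just confirms the events are in strict playback order:

```python
# (bar, beat, tick, event) tuples for the set-up measure described above.
setup = [
    (1, 1, 0,   "sequence name / tempo / time signature"),
    (1, 1, 10,  "MIDI reset"),
    (1, 1, 240, "XG reset"),
    (1, 1, 300, "system reverb and other global settings"),
    (1, 2, 0,   "channel messages: programs, expression, volume, pan ..."),
    (2, 1, 0,   "song tempo / time signature, first notes"),
]

positions = [(bar, beat, tick) for bar, beat, tick, _ in setup]
print(positions == sorted(positions))  # True: the set-up is strictly ordered
```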

If you need another second to pick up your guitar then insert another measure of 2/4 at 120 bpm. If the song is a slow one and the first note is on the last beat of the measure, then don't put the change of tempo (bpm) until the first note. You could even speed up (300 bpm) then slow down again to the required timing e.g. 68 bpm at exactly the right moment.

Well, there you go, my take on MIDI timing. The last part might also be suitable for a suggested section on MIDI hints and tips.

As always, any questions, comments, corrections(?), please fire away.

© John L. Garside, 2007.

Part 5 - Audio Compression e.g. mp3


MP3, MiniDisc, AAC, WMA and all that audio compression stuff.

Having taken a brief look in previous articles at MIDI and wav files, now it's time to take a simple look at some of the mechanisms used to compress audio files that have been digitised and I'll begin with mp3.

As usual I'll start with a simple definition. What is an mp3 file? It's a file containing digitised audio (0s and 1s) that represents something like the original wave file that it was derived from. How closely it represents the original will depend upon three factors: first the bit rate that we decide to encode at, 64kbps (kilobits per second), 128kbps or faster (the higher the bit rate the better the quality); second the quality of the piece of software or hardware that is doing the compression; and third the complexity of the audio file we are trying to encode.

As a general rule, from my experience, the longer the encoder takes to convert from wav to mp3 the better the resulting sound quality. In other words be patient when converting files. It's a very complex process.

What we end up with is a file (filename.mp3) that can be played on a computer (with the appropriate application) or, more likely, end up on some sort of portable device with a big memory chip or even a tiny hard disk within it and a set of earphones or miniature loudspeakers. It could also be burnt to CD or DVD and played on a player with mp3 capabilities e.g. some Sony CD Walkmans etc.

Now for a definition: mp3 stands for MPEG1 (Motion Picture Experts Group) audio layer III. It was originally developed by Fraunhofer IIS. This international standard defines how the sound track of a film is digitally encoded. It has, however, become a commonly used standard for recording compressed audio for home use.

So, we may start with a wav file, sampled at 44,100 samples per second, with 16 bits per sample in stereo, i.e. an audio CD quality track. We then put this file through an encoder (usually software) which splits the incoming audio signal into several frequency bands, analyses each band separately and throws away the information that the human ear is less likely to be able to hear (psychoacoustic encoding). As it processes the audio it puts little markers in the file (headers) followed by chunks (frames) of encoded audio. These headers tell the mp3 player what to do, i.e. how to process, the frame of data that follows. The software assembles all these frames with their headers into the right sequence and we end up with an mp3 file.

Yup, it throws away a large quantity of the original signal. That's how we get from the original 1,411,200 bits/sec. to only 128,000 bits/sec., for example. It is therefore known as a "lossy" encoding technique. This discarded information can never be recovered. So if, at a later stage, we wanted to create a wav file from the mp3, it won't, repeat will not, be the same as the original wav file. If we then reprocess that wav file to create another mp3 at a different bit rate the process of throwing away data will be repeated. So once a file has been compressed using the mp3 process, don't reprocess it.
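Just how much gets thrown away is worth seeing in numbers; this Python sketch (my own, with an invented helper) turns the two bit rates above into file sizes per minute of music:

```python
# Bytes needed per minute of audio at a given bit rate.
def bytes_per_minute(bits_per_second):
    return bits_per_second * 60 // 8

wav_bpm = bytes_per_minute(1_411_200)  # CD-quality stereo wav: 10,584,000 B
mp3_bpm = bytes_per_minute(128_000)    # 128 kbps mp3:             960,000 B

print(wav_bpm // mp3_bpm)  # 11 -> roughly an 11:1 compression ratio
```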

Here are a couple of references for those who want to dig deeper:
http://en.wikipedia.org/wiki/Audio_data_compression and http://en.wikipedia.org/wiki/MP3

Okay. That'll do for mp3. What about the others e.g. MiniDisc? Well, this one is a similar encoding system developed by Sony that uses another psychoacoustic system called ATRAC (Adaptive TRansform Acoustic Coding). It is more modern than mp3 and claims to offer better quality for the same bit rate. The more modern players use a system called ATRAC3. See http://en.wikipedia.org/wiki/Atrac.

We also have AAC (Advanced Audio Coding), a later (better?) encoding method than mp3 also developed by Fraunhofer IIS. Some mp3 players also support this. See http://en.wikipedia.org/wiki/Advanced_Audio_Coding and, last but not least, WMA (Windows Media Audio), a later system developed by Microsoft also claiming improved performance over mp3. See http://en.wikipedia.org/wiki/Windows_Media_Audio.

To my ears, at least, these later systems generally do sound better. For me, music mp3s encoded below 128kbps don't sound too good unless it's just a voice soundtrack. For classical music for my own use I encode at a minimum of 256kbps! MiniDisc, to me, sounds pretty reasonable for most things but "Yer pays yer money and yer takes yer choice!"

What about lossless audio compression? Yes, it can be done. Audio, unlike regular data, cannot be massively compressed as there are very few repeating patterns of data within it. (Zip compression relies upon repeating patterns as one of several lossless techniques.) I know of three systems: Monkey's Audio, FLAC and Shorten. They are typically only able to roughly halve the file size. Many audio engineers use one of these techniques for archiving recordings. I use Monkey's Audio. You can find more about it here: http://en.wikipedia.org/wiki/Audio_data_compression.

Well, I hope that's helped some of you. Any questions, fire away.

© John L. Garside, 2007.

Part 6 - Tracks & Channels


Channels, tracks ... what's the difference?

Yea, too right, confusing isn't it? Well not really but then I would say that wouldn't I? What I'm going to do here is to create some example files and post them to the "Members' MIDI files". Look out for files beginning with JGex.

Can I give you a simple explanation of the difference? We use the word channels when we're talking about sending and receiving data using MIDI. We use the word tracks when we're talking (usually) about a software sequencer and separating the data we're creating or editing into manageable parts. So several tracks could end up on one MIDI channel.

Let me put it this way. When you're sending data to a MIDI player, sound module, synthesizer, whatever via a MIDI cable (or USB etc.) then you must instruct the MIDI instrument that is playing the music which channel is expected to play which voice e.g. channel 10 = drums, channel 3 = bass. By default MIDI gives us 16 channels to play on. Some devices (my MU128) can handle multiple sets of 16.

When I'm creating a MIDI file using a software sequencer (I use XGworks) I have 99 tracks to record onto. So, for instance, I could record (or split) all the different sounds I'm going to use for my drum kit on channel 10 (standard MIDI channel for drums) onto separate tracks (within the sequencer) so I can edit them easily. So I might have snare on track 1, hi-hat open on 2, hi-hat closed on 3, hi-hat pedal on 4, crash cymbal on 5, snare tight snappy 6, snare snappy on 7, kick tight 8, kick room 9 (I think you've got the picture by now?). Now whilst I tweak (edit) the velocities (how hard they're hit) and the timing to make them sound just right, drum by drum, I can 'solo' each in turn so I can hear them with and without the rest of the kit. What I will need to do as I move each drum sound to a separate track is to instruct the sequencer to send the data from each of these different tracks to just one MIDI channel, channel 10.
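The many-tracks-to-one-channel idea can be pictured as a simple table; here's a Python sketch of it (the mapping is my own illustration of the drum-kit example above, not any sequencer's actual data format):

```python
# A sequencer-style mapping: track number -> (sound, transmit channel).
# Many tracks, but every one sends its data to MIDI channel 10 (drums).
track_channel = {
    1: ("snare", 10),
    2: ("hi-hat open", 10),
    3: ("hi-hat closed", 10),
    4: ("hi-hat pedal", 10),
    5: ("crash cymbal", 10),
}

channels_used = {channel for _, channel in track_channel.values()}
print(channels_used)  # {10} -> five editable tracks, one MIDI channel
```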

Next I want to record my piano sound but, being a bit of a one handed keyboard player, I want to record the left hand and right hand parts separately. I've decided to use track 1 for right hand and track 2 for the left in the sequencer but, since they will both use the Rhodes piano sound, when being played, I'm going to set up both tracks to transmit (to the sound module) on MIDI channel 1. This is a very common way of splitting a piano or harpsichord part.

So, I've taken a standard MIDI file from a collection, filename = "am-a.zip" (download it from the website in the A section in the usual way). I've stripped off all the instruments apart from the drum kit and the result is JGex0A01.mid and JGex1A01.mid. The files are in MIDI file format 0 and 1 respectively. You'll see that the data is on track 1 (depends on your sequencer) but it points to MIDI channel 10. I want to edit various drum sounds so I've stripped all the various drums out to different sequencer tracks. But you'll see that every track points to MIDI channel 10. The example is JGex0B01.mid and JGex1B01.mid, again MIDI file formats 0 and 1 respectively. I'll post these example files just as soon as I've located them again!

These example files only show the splitting up of the drum channel not of left and right hand on piano.



© John L. Garside, 2007.

Part 7 - Introduction to MIDI messages


An introduction to MIDI messages.
Before I can begin to answer a couple of the questions that have been asked, it's necessary for me to describe a little more about how MIDI works. I've attempted to do this with my previous article on tracks and channels http://midimart.proboards56.com/index.c ... 1188234603 and now I need to address, briefly, the types of messages that are sent back and forth between MIDI devices.

It really isn't necessary for most gigging musicians to understand the fine detail of all these MIDI messages, but it certainly does help to know a bit more when a file doesn't play properly, when you have problems with overall volume, when you need to adjust the relative volumes of the different channels, or when notes get stuck at the end. Another frequent problem is when pitch bend or modulation wheels are used and the MIDI file fills up with a huge number of messages that the sequencer or sound module has trouble processing. Knowing how to "thin these out" is well worthwhile to stop a kind of stuttering occurring.

For instance, weeo recently asked how he could get the drums to play a bit louder in a MIDI file. By understanding a bit more about how a MIDI file is organised and the messages in it, he could quickly tweak the relative volumes of each channel so that he would get the file to play as he wants. This is the introduction to being able to do that editing. (Yes, it is just an intro, long-winded as it is!)
-----------------------------------------------------------------------------------------
What are MIDI messages? They are short (usually just a few bytes) bursts of data (instructions) that travel up and down the cable between, say, the laptop running a sequencing program and the sound module that is playing the music. It could also be from a keyboard to a sound module or from floppy disk in a MIDI file player to a sound generator. Or any other way of using a Standard MIDI File (SMF). The principle is that someone or something is creating or replaying MIDI messages and some other device is making the sound(s). The generator could be a keyboard, MIDI guitar, wind controller or a sequencer etc. and the device is the piece of hardware or software that converts those instructions into sound. The link could be as short as that between a software sequencer and the sound card inside a PC or as long as a 15 foot MIDI cable. Even longer if you use a wireless MIDI transmission system.

What do I mean by an instruction? Well, I mean a short message (a few bytes of data) that tells the device, which is about to create the sound(s), what to do. These messages are primarily split into two different types, System Messages and Channel Messages.
---------------------------------------------------------------------------------------
System Messages
If you're completely new to MIDI sequencing I shouldn't worry too much about these. Skip straight to the next main section on Channel Messages. (Don't forget to come back though.)
System Messages are split-up into three sub-groups:
System Common, System Real-Time and System Exclusive. What all these messages have in common is that they are focussed upon your complete MIDI system not upon any individual channel. However, I'm only going to deal with System Exclusive messages here (very briefly) as the vast majority of us will not see, nor be aware of the presence of, nor use Common or Real-Time messages. I may come back to these two sometime in the future. Except for one brief mention of a Real-Time Message below.

When I deal with Channel Messages (further below) you will discover that these cover nearly every function that we're routinely likely to need to set up and control the playing of individual channels of music. But some manufacturers wanted to be able to create MIDI devices that went beyond, in some cases way beyond, what was considered everyday usage. And so System Exclusive (SysEx) messages were created.

System Exclusive messages are primarily there to let individual manufacturers (notably Roland, with the GS standard, and Yamaha, with the XG standard) send control information to their specific devices. These types of manufacturer-specific SysEx messages are capable of addressing just one device out of many that may be connected together and can provide a huge range of extra voices and controls that aren't available via normal Channel control. However, (there's always a however, isn't there?) they require a level of sophistication in the end user well above that needed for normal or General MIDI. The messages are represented in something that instils fear in the bravest programmer called Hexadecimal. This is a special way of representing the bits stored in a byte. If there's a call for it I may cover this topic in a future tutorial. Otherwise, enough about SysEx!

If you have a Roland or Yamaha sound module then at some stage you'll have to get to understand some of the GS and XG SysEx messages if you want to exploit their full capabilities.

However, yes, another however, but better this time: there are a few Universal SysEx messages that could prove useful to us: Master Volume, Master Pan and Time-Signature, which should speak for themselves. Master Volume sets the overall output of the sound card, module, software, whatever. Master Pan allows you to shift the stereo image from centre stage to left or right should you need to, and Time-Signature, well, you all know this: it's 4/4 or 3/4 or 6/8, i.e. how many quarter notes or eighth notes etc. in a measure (or bar).
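To show what such a message actually looks like on the wire, here's a Python sketch building the Universal Real-Time Master Volume SysEx as I understand the MIDI specification (the function name is my own; do check your module's manual before relying on the exact byte layout):

```python
def master_volume_sysex(volume_14bit):
    """Universal Real-Time SysEx Master Volume message (value 0..16383)."""
    lsb = volume_14bit & 0x7F          # low 7 bits
    msb = (volume_14bit >> 7) & 0x7F   # high 7 bits
    # F0 = start of SysEx, 7F = universal real-time, 7F = "all devices",
    # 04 01 = Device Control / Master Volume, then LSB, MSB, F7 = end.
    return bytes([0xF0, 0x7F, 0x7F, 0x04, 0x01, lsb, msb, 0xF7])

print(master_volume_sysex(16383).hex())  # full volume: 'f07f7f04017f7ff7'
```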

The last message I'm going to mention here doesn't really fall into any category as it's neither a System nor a Channel message but it is one of the sequencer messages that needs to occur at the start of a MIDI file and sometimes gets altered as the file plays. It's Tempo. It's one that we need to enter into the sequencer but actually doesn't get sent to the sound module. We need to tell the sequencer how many beats per minute to play but the sequencer then sends a series of pulses (a Timing Clock, an example of a System Real-Time Message) to the sound module at 24 pulses per quarter note.
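Those 24 pulses per quarter note translate into a pulse every so-many milliseconds depending on tempo; a quick Python sketch (invented helper name) makes the arithmetic plain:

```python
# Gap between MIDI Timing Clock pulses: a quarter note lasts 60,000/bpm ms
# and carries 24 pulses.
def clock_interval_ms(bpm):
    return 60_000 / bpm / 24

print(round(clock_interval_ms(120), 3))  # 20.833 ms between pulses at 120 bpm
```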
--------------------------------------------------------------------------------
Phew! That's enough about System Messages. For the time being anyway.
--------------------------------------------------------------------------------
Channel Messages
Okay so far? Probably not. Go and get yourself a cup of tea, coffee or something to refresh you, preferably not alcoholic as you'll need a clear head to absorb this. Then again ... !

What are Channel Messages? They are the messages sent down each of the MIDI channels that we're using (usually 16) that tell that channel which instrument sound to make, how loud, whereabouts in the stereo stage to sound and, of course, what notes to play, as well as a whole shed load of other things.

Channel Messages can be subdivided into five different categories. These are Program Change, Control Change, Note-On/Note-Off, Pitch Bend and Aftertouch and it's these messages that constitute the bulk (especially notes) of a MIDI file.

Notes. Well, at their simplest level they are the messages that tell the sound module which note to play, how hard the note was played and at what precise moment to play it (Note On), and when to switch the sound of that note off (Note Off). What is displayed on the sequencer screen, though, is time, note number, force, duration. There are, of course, other messages that get sent during the playing of a file and the next most common are likely to be Pitch Bend and Modulation. Pitch bend is what guitarists do to their strings from time to time and modulation is usually (but not always) used to add vibrato to a sounding note.
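For the curious, here's what a Note-On and Note-Off look like as raw bytes; a Python sketch of my own (channels numbered 1 to 16 as musicians count them, so channel 1 is status `0x90`):

```python
def note_on(channel, note, velocity):
    """Three-byte Note-On: status (0x90 + channel), note number, velocity."""
    return bytes([0x90 | (channel - 1), note, velocity])

def note_off(channel, note):
    """Matching Note-Off: status (0x80 + channel), note number, velocity 0."""
    return bytes([0x80 | (channel - 1), note, 0])

print(note_on(1, 60, 100).hex())  # middle C on channel 1 -> '903c64'
print(note_off(1, 60).hex())      # and switch it off     -> '803c00'
```

(In practice many devices send a Note-On with velocity 0 instead of a true Note-Off, which does the same job.)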

Setup messages. At the beginning of a MIDI file there will be a set of messages on each channel that tell the channel what sound (Program Change), how loud, pan position, maybe reverb level (Control Change), etc. etc. etc. It may also have lots of other messages that allow the initial sound selected to be modified in many different ways. I imagine we'll be taking a look at some of these things in the near future.

Summary
MIDI Messages are instructions that get sent, for example, between a sequencer and a sound module to tell the module what to do. They start with initialisation messages for the whole system like tempo and key, and then continue with messages that tell the sound module what instrument to play e.g. piano or violin, how loud, what stereo position etc. After this setup sequence the rest of the file is filled with note data and sometimes with instructions to apply pitch bend or vibrato. Extra controller messages are sometimes included from time-to-time to change e.g. tempo or the volume of individual parts.
---------------------------------------------------------------------------
What am I going to put in the user files section? Well, I thought I'd set up what, for me, is a fairly typical initialisation file. I'll make two versions of it, the first for people with a simple GM (General MIDI) compatible sound module and then for one for GM2. So look for JGexGM01.mid, JGexG201.mid respectively all zipped up as JGexGM01.zip. Have a look at what I've done, then during a series of later lessons we'll go through the contents.

For future reference I shall be using Anvil Studio sequencer NOT JazzWare. The reason for this is that I've just discovered that JazzWare edits the timing of some messages without being asked to, as well as adding a whole load of Roland GS SysEx messages into the file, again without being asked, some in measure 11,651. At 120 beats per minute that's a MIDI file more than six and a half hours long! Consequently I shan't be using JazzWare again until it's suitably modified. Anvil Studio is much better behaved. I wondered why some control messages weren't being obeyed properly in my sound module.

But the next lesson will be looking at the various ways of adjusting volume in a MIDI file and specifically at one of weeo's files to get the drums louder. At last!

As always, please feedback comments, questions etc.
JohnG.

© John L. Garside, 2007.

Part 8 - What's in a MIDI file


MIDI files; the set-up measure. (section 1)

Definitions.
Before going into detail about what can be contained within a set-up measure, it's probably important to define exactly what is meant by the name. Set-up, in this case, implies that within this part of the MIDI file there will be instructions to the sound module (e.g. MU128, SC-8850) or software (MS GS synth, SY50-XG, Roland VSC) that will play the music, telling it exactly how it should set itself up so that it plays the notes the way that you, the programmer or subsequent re-arranger of the file, want them played. A measure or bar is often, but not always, a set of four quarter-notes that beat Strong, Weak, Less strong, Weak to the count of 1, 2, 3, 4. Those of you who can read from a score won't need this last piece of information.

What's it all about?
We will, within our MIDI file, almost certainly want to define how loud the file is to be played and what the relative volumes are between the different instruments that are playing. We will definitely want to define which instrument will play on each of the 16 MIDI channels (more if you're lucky enough to own a 32 or 64 MIDI channel sound module). If our PA system is stereo then we'll want to pan the instruments around the sound stage. If it's mono we may want to pan the music to the left channel and have a click track, maybe for the drummer, on the right or vice versa. Then feed one to the PA the other to a monitor or to headphones.

We may in addition, if our sound module supports it, want to define the FX used such as reverb type i.e. room, hall, plate etc. If we can do this then we probably will want to define just how much of that reverb will apply to each instrument. There could well be other FX that we want to apply too such as chorus.

But, over and above all these, if a MIDI file has been playing before the current file, then we probably need to make sure that any alteration to the default settings of controllers, made by playing that previous file, is either reset or changed to a more suitable setting.

So where are we going to put these commands that set up not only the overall way the sound module responds but also how each channel works within that overall set-up?

The set-up measure.
There's a good argument that says that there should be one musical measure (or bar) of say, 4/4 time, in which all these initiating commands are programmed into the MIDI file as a set-up sequence, and that should be the very first measure of the file. The musical notes (MIDI note on and note off instructions) will then start in bar two. Why not just spread the commands around in the file where they're needed? Well, yes, that can be done too if you like. But, if you've ever tried to modify a file created in this way, because it doesn't play quite the way you want it to on your system, then you'll realise that finding these set-up instructions within each track of your sequencer can be a nightmare and sometimes, when you make a modification, the most unexpected results can ensue because you've missed something in another place in the file. Hands up those who have experienced this.

A long pause?
Of course having a measure at the beginning with no musical notes in it implies that there will be some seconds of silence before the file begins to play. But with the creative use of the MIDI Tempo that should be limited to about two seconds for even the most complex of set-up measures. And, if we're going to record our output to WAV, MiniDisc or MP3, then it's possible to remove that silence fairly simply using one of a variety of hardware or software tools. If, like many professionals, you prefer using the sound-module live, then there's probably even a way around this problem of initial silence (if it is a problem for you) by creating a very short set-up measure, of less than a second, then perhaps having the hi-hat count in (tick, tick, tick, tick) whilst the rest of the set-up is done in parallel.

If even that is unacceptable, then we may just create the set-up measure at the start of the very first song in the set (or even in a separate tiny file, transmitted before the first song) and then put appropriate messages at the end of every file so that each new file starts from square one. It may not be necessary to have a 4/4 bar as the set-up measure. It could be 3/4, 2/4 or even as small as 1/8. Its length will depend upon what you need to incorporate in the set-up measure for the sound module or software that you use.

There are many variations and combinations of different ways of doing this set-up, which, I'm aware, many of our more experienced people here are doing right now. And I imagine that, over time, those people have got used to each file's little idiosyncrasies. But what if there were a better way? I maintain that having the set-up measure first and editing it for each file is still the best way. I figure that most people can cope with a two second pause before the count in or before the song starts.

If you start by creating a MIDI file just one measure long with the set-up that suits your sound-module and then copy the song file that you use to measure two, then edit the instrument set-ups to suit yourself, you're likely to end up with good sounding MIDIs all the time. Now that's got to be worthwhile hasn't it?
-----------------------------------------------------------------
This is the first section of a multi-part article.
The second section will follow shortly.
-----------------------------------------------------------------

MIDI files; the set-up measure. (section 2)

The need for a set-up measure
But why do we need to send these commands in the first place? Why don't all MIDI files just work properly?

Well, the fact is, that there's more than one way of setting up a MIDI file and it depends on which sound module, or software, you're using. If you've got an elderly sound card or module it may have been designed (in the early or mid-nineties) using the initial GM (General MIDI) standard. If the device was designed in 1998/9 or later then it may implement the GM2 (General MIDI 2) standard, which will give you more instrument sounds to choose from and some FX. If the device comes from Yamaha then it's likely to implement their XG (eXtended General MIDI) format. If it's from Roland or a recent copy of Windows Media Player then it's likely to use the Roland GS standard.

But some sound modules, notably from both Yamaha and Roland, are capable of working in GM, GM2, GS and XG modes, depending on what commands they're sent! (Both XG and GS extend a sound module's capabilities way beyond what was envisaged by the MMA, the MIDI Manufacturers Association, which controls the standards.) So it may be necessary to send a command to make sure the module plays the way the programmer intended, as far as possible, and for you to make the necessary alterations to those set-up instructions so that it plays correctly on your system.

The other issue that we face is that many people who create the MIDI files, that we may subsequently use, are probably first class musicians, but may not know all the technicalities of MIDI. So they may, for example, use a fade out at the end of the file, and leave an instrument's volume at, or close to, zero. They may use pitch bend and leave the control off centre meaning all subsequent notes on that MIDI channel will play out of tune. They may even change the pitch bend range away from its default of two semitones to, say, an octave. Then when you play the next file mayhem ensues!

So, there are numerous controls that can be modified and what we need to do is to make sure that they are all reset to their default settings before the next MIDI file begins to play and, maybe, modify them again.
-----------------------------------------------------------------------------------

Contents of the set-up measure - 1. Meta Data.
So what are we going to put into this set-up measure? Well the very first thing ought to be, what's known technically as, Meta-Data. Meta-data is information that is needed or stored by the sequencer about essentials of how the file is to be played, like beats per bar, tempo etc. It is also information that the programmer may want stored, like copyright data or the naming of tracks and channels or some text data. It is also information that may be displayed in some way like lyrics, sequence name or, if the sequencer can display notation information, then it might include key signature as well as transposition information for various instruments, whether both upper and lower staves are shown or just one or the other etc. This data is read into memory as the file is loaded and usually requires no time to process; it's just stored for the sequencer to use.

Contents of the set-up measure - 2. GM System On.
The next thing that we need in the file is a reset command. Why? Well, if it's not the first file played since the sound-module was switched on, then the previous file will have altered some settings. We could go around issuing "set channel volume to 100" sixteen times, and "set panpot to centre" sixteen times and so on, but it's much simpler to insert one command that resets all the commonly used parameters to their default positions. This reset command is known as GM System On. It's important that there is then a short period when no more commands are sent. Why? It can take a GM sound module up to 200 milliseconds (1/5th of a second) to implement this reset, depending on just how old the module is. (A response to a poll done by the MMA found that some MIDI devices could take up to 1 to 2 seconds to complete the GM On process!) Please note that there are a few things that the "GM On" message doesn't reset, and this is quite deliberate.
-----------------------------------------------------------------------------------------------------

What GM On does
According to the MMA specification this is what a GM On message is supposed to do:
Set channel Volume (controller #7) to 100 (maximum is 127).
Set Expression Pedal (controller #11) to 127 (maximum).
Set channel Pan (controller #10) to 64 (centre).
All actions defined for the channel message Reset All Controllers (see next paragraph) should be implemented.
Those devices that respond to effects controllers #91 to #95, reset them to their defaults.
Any other actions to restore the device to GM power-up state e.g. Bank Select and Program Change to zero.

Many people use another MIDI command called "Reset All Controllers". The trouble is that this command is misnamed and only resets some of the controllers, not ALL. (Maybe it should be re-named "Reset Some Controllers"?) As it's a channel message we'll have to issue one to reset each MIDI channel that could have been altered. (Maybe all 16 channels, to be on the safe side?)

What Reset All Controllers does.
Again these are the MMA recommendations:
Set Modulation (controller #1) to zero.
Set Expression (controller #11) to 127 (maximum).
Set Pedals #64 to #67 to zero (minimum or off). (Sustain, Portamento, Sostenuto, Soft pedals.)
Set Registered (RPN) and Non-Registered (NRPN) Parameters LSB and MSB to null (127).
Set Pitch Bend to centre (MSB 64, i.e. no bend).
Reset Channel pressure to zero.
Reset Poly pressure to zero.
All other controllers should be set to zero.
(One of the RPNs controls Pitch Bend sensitivity i.e. the range of the Pitch Bend wheel.)

Note especially that no Bank Select or Program Change messages are altered.
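The list above can be sketched in a few lines of code. This is my own illustrative Python (not from any MIDI library), building the raw "Reset All Controllers" Control Change message, controller 121 (79 hex), for every channel:

```python
# Sketch: build the 3-byte "Reset All Controllers" message (Bn 79 00)
# for each of the 16 MIDI channels, using raw status bytes.

def reset_all_controllers(channel: int) -> bytes:
    """Return the Control Change message Bn 79 00 for one channel (0-15)."""
    if not 0 <= channel <= 15:
        raise ValueError("MIDI channel must be 0-15")
    return bytes([0xB0 | channel, 0x79, 0x00])

# One message per channel, "to be on the safe side" as suggested above.
messages = [reset_all_controllers(ch) for ch in range(16)]

print(messages[0].hex(" ").upper())   # channel 1  -> B0 79 00
print(messages[9].hex(" ").upper())   # channel 10 -> B9 79 00
```

Note that the channel nibble is 0-based: channel 1 is n=0, channel 16 is n=F.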
-------------------------------------------------------------------------------------------------

Contents of the set-up measure - 3. GM2, GS or XG System On.
Now, if we are using a GM2 sound-module (from about 1999 on) or one from Roland or Yamaha then, if we intend to use the module's full capabilities, i.e. not just use it in GM mode, we almost certainly need to insert another type of reset to get things like FX and so forth back to their defaults. Surprisingly perhaps ;) these are known as GM2 System On, GS System On and, amazingly, XG System On. We need to insert just one of these, depending on the sound module we are using, after the GM On (1/5th of a second after it). As these sound modules are likely to have been made when electronics were becoming much faster, they typically only need around 50 milliseconds (1/20th of a second) to perform the reset.

It is these one (GM) or two (GM2 or GS or XG) System On messages that really should go at the beginning of the MIDI file, if at all possible, definitely before any note data and preferably before any Program Changes (instrument allocations). However, if we add up the two times that they take to implement we're only talking about 250 to 300 milliseconds (a quarter of a second or so) to implement them both. Surely most people can wait that long before the first tick of the count in? Along with these set-up messages, if we choose to use it, should go the command that sets the Master Volume, i.e. the overall volume for the entire device. All these messages are what are known as System Exclusive (SysEx) messages and I'll detail their form for you later on in the article.

Other system set-up messages.
The next things that must go into this measure, if we are going to use them, are the system set-ups for any sound effects that are to be used. An example, and the most commonly used, is Reverb Type, i.e. what acoustic we want to surround the MIDI instruments playing in. GM2 settings are Small Room, Medium Room, Large Room, Medium Hall, Large Hall and Plate. It's also possible with GM2, with Roland GS and with Yamaha XG compatible devices to define Reverb Time as well as Chorus Type and several other effect types too. These parameters take very little time to implement (a few milliseconds) and will, unless you are setting up many different devices simultaneously, take just a very short time to send. These are, pretty well, the only things that it's really essential to put in a set-up measure at the beginning of a MIDI file for the average live gig user.
-----------------------------------------------------------------------------------

Defining the instruments etc.
Once we've set up the sound-module's global parameters then we can start to define, channel by channel, what instrument sound will be used, how loud that instrument will be, whereabouts (left, centre, right) it appears and how it will use any Reverb, Chorus or other sound effect that has been pre-defined. These again could all go in the set-up measure to make them easier to edit later, if necessary. But then again they could all go at the beginning of the 2nd measure in channel order, or in the order in which the instruments sound; all, that is, except for one.

Defining the Rhythm Channel drum set.
If we are to start the song with a count-in measure (1, 2, 3, 4), say using the hi-hat, and we intend to use a drum kit other than the standard one defined in the GM MIDI spec, then we need to issue the commands to select that Rhythm kit before we use it. So in this case we should insert those Rhythm Channel set-up commands within the first (set-up) measure too. In order to implement this non-standard Rhythm Channel assignment we have to insert a couple of commands for each channel we are using for drums. N.B. the GM2 spec allows us to use channel 11 as well as the default channel 10 for Rhythm Channels. If we're going to use a 2nd Rhythm Channel then we should, ideally, set it up at the beginning. Again I'll detail these commands later on in the article.

Getting the timing right.
Now, what is important, as we create the first and second measures, is that we insert those instructions (a) in the correct sequence and (b) that there is a sufficient timing gap between each instruction, especially the various System On messages. How do we do that? Well, very often this means that we have to switch into a more detailed view, often called the Event List, insert the commands and/or edit the timing of those commands within the MIDI file. A bit daunting perhaps when we first do it but actually very easy and even a bit boring once we've got used to it. I'll give some examples later.

Problems with timing?
Usually we define the instruments etc. in the main window of our software sequencer but, with many sequencers it will put all these Program Change messages at the same identical moment in the file, in terms of timing, and actually most sound-modules will cope with them without problem. However, when we start to use the more advanced Bank Select messages that allow us to allocate alternate voices for a given sound e.g. "wide grand piano" or "dark grand piano" as defined in GM2, GS and XG sound sets, then it is necessary to insert Bank Change messages which must be implemented before the Program Change. Quite often simple sequencers lump all the Bank Selects and Program Changes together at the same MIDI timing point within the file (usually at MIDI clock 000.00.000). The free JazzWare sequencer is particularly bad at this. This can cause chaos and make the sound-module falter or even stop. So separating these messages by one MIDI pulse or tick and implementing them in the right sequence and channel-by-channel will prevent this problem. Examples coming right up.
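As a first taste of the examples promised above, here's a small sketch of my own (the channel, bank and program numbers are just illustrative values) showing an event list in which Bank Select MSB, Bank Select LSB and Program Change are spaced one tick apart and in the correct order, rather than piled onto the same MIDI clock:

```python
# Sketch: generate (tick, message-bytes) pairs for one channel's
# instrument set-up, spaced one tick apart in the required order:
# Bank Select MSB, then Bank Select LSB, then Program Change.

def instrument_setup(start_tick: int, channel: int,
                     msb: int, lsb: int, program: int):
    """Return a small event list; channel is 0-based (0 = MIDI channel 1)."""
    status_cc = 0xB0 | channel   # Control Change status for this channel
    status_pc = 0xC0 | channel   # Program Change status for this channel
    return [
        (start_tick,     bytes([status_cc, 0x00, msb])),   # Bank Select MSB (CC#0)
        (start_tick + 1, bytes([status_cc, 0x20, lsb])),   # Bank Select LSB (CC#32)
        (start_tick + 2, bytes([status_pc, program])),     # Program Change
    ]

# Channel 1 (n=0), GM2 melody bank (MSB 0x79), sub-bank 0, voice 0.
for tick, msg in instrument_setup(0, 0, 0x79, 0x00, 0x00):
    print(tick, msg.hex(" ").upper())
```

The one-tick spacing is the point: each message lands on its own MIDI pulse, so the sound module never sees the Program Change before the Bank Selects.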
-----------------------------------------------------------------------

Notice for beginners
Now, there's a great deal of information in those last paragraphs, and for some of you setting out on MIDI file modification for the first time, it's going to be a struggle understanding some of it. That's quite normal. Don't be put off. Go back and read the earlier articles, maybe have a cup of tea or coffee. Leave it for a day or two then come back and read it again. Let me re-assure you it will, eventually, start to make sense.

Or, of course, you can always ask me a question of clarification. That's what the forum is for.
--------------------------------------------------------------------------------
This is the 2nd section of a multi-part article. The nitty-gritty will follow in the next part.
--------------------------------------------------------------------------------

Since all the tables I created in the subsequent parts were written many years ago, and phpBB has been updated, I'll have to reformat them all.
If you're impatient for more, they can be seen in their original form over on my MIDI-tutor forum.

--------------------------------------------------------------------------------

Re: Up to Part 8 - What's in a MIDI file - added 24th May '23


MIDI files; the set-up measure. (part 3)

The nitty gritty detail
So let's take a look at how we might set up that first measure. First of all the Meta Data that defines some basic information about the file and also Time Signature and Tempo. Please notice that I've only put in the Hexadecimal equivalent of the message name where I perceive that it's absolutely necessary. If the sequencer you use doesn't allow you to enter the other Meta Data then, with all due respect, I think it's time you thought about getting one that does!

Added later
The data represented here is in its raw form. There is every likelihood that the sequencer you use will have a much more user friendly way of allowing you to enter that data. If so please use it. My dilemma has been how to represent it. However after entering the necessary commands in the easiest way your sequencer allows, you may still (should) go into the "list view" and check that the timing, i.e. the number of ticks apart that each event occurs, is correct.

Also please note that you don't have to slavishly follow my timing. These are just the meanderings and idiosyncrasies of a feller who has spent all of his life (40 years or so) dealing with deeply technical stuff. To me MIDI is exceptionally simple compared to some of the signalling or data interchange protocols I have had to work with. Use your own imagination and feel free to arrange things to suit your own use.

The only things that are really important are that you space certain messages apart, as I've indicated, by a certain amount of MIDI pulses or ticks.

N.B. More about bits, Bytes, binary and hexadecimal in a future article!

All the following examples assume that your sequencer is set to 480 ppqn (pulses per quarter note).
If your sequencer uses something different you'll have to change the number of pulses (pls) or ticks to suit.
Meas=measure.

Master track (track zero) or its equivalent.
Meas./Beat/Tick   Message Type       Value              Hexadecimal
0001 01 000       Tempo              120                FF 51 03 07 A1 20
0001 01 000       Time Signature     4/4                FF 58 04 04 02 18 08
0001 01 000       Copyright Notice   © A.N.Other        - - -
0001 01 000       Sequence Name      My very own song   - - -
Having entered the Meta Data we now need to reset the sound module hence a General MIDI System On message.

System set-up track sometimes sequencer track 1, sometimes track 17 but usually issued on MIDI channel 1.
Meas./Beat/Tick   Message Type         Value          Hexadecimal
0001 01 000       Univ non real time   GM System On   F0 7E 7F 09 01 F7
After this command it's necessary to wait typically 1/5th of a second (200 milliseconds (msec)).
------------------------------------------------------------------------------------

Next we need to enter just one of the following three commands (if we have a GM2, GS or XG sound module).
N.B. At 120 beats per minute each pulse is just fractionally longer than 1 thousandth of a second (1 msec).
Meas./Beat/Tick   Message Type         Value           Hexadecimal
0001 01 200       Univ non real time   GM2 System On   F0 7E 7F 09 03 F7
OR
0001 01 200       XG Program SysEx     XG System On    F0 43 10 4C 00 00 7E 00 F7
OR
0001 01 200       GS Program SysEx     GS System On    F0 41 10 42 12 40 00 7F 00 41 F7
N.B. Only one of the above three messages is necessary.
If you're using the JazzWare sequencer, which defaults to 192 ppqn, then the message should be at 0001.01.080.
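To check the timing arithmetic behind these tick positions, here's a little sketch of my own (plain arithmetic, nothing module-specific) that works out how long one tick lasts and how many ticks cover the 200 ms a GM reset may need. The article's 200 ticks at 480 ppqn (and 80 at 192 ppqn) leave a small safety margin over the minimum:

```python
# Sketch: tick-timing arithmetic for the gap after GM System On.

def ms_per_tick(bpm: float, ppqn: int) -> float:
    # One quarter note lasts 60000/bpm milliseconds, split into ppqn ticks.
    return 60000.0 / (bpm * ppqn)

def ticks_for_ms(ms: int, bpm: int, ppqn: int) -> int:
    # Integer ceiling division avoids float rounding at the boundary.
    return -(-(ms * bpm * ppqn) // 60000)

print(round(ms_per_tick(120, 480), 4))   # 1.0417 ms per tick at 120 bpm, 480 ppqn
print(ticks_for_ms(200, 120, 480))       # 192 ticks cover the 200 ms reset gap
print(ticks_for_ms(200, 120, 192))       # 77 ticks at JazzWare's 192 ppqn
```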
---------------------------------------------------------------------------------------

The next thing I like to put in is a MIDI Master Volume message; it allows control of the overall output level of the sound module before any physical volume control. This allows us to turn down the volume of some files which may be too loud. I like to turn the physical volume control on my sound module full up and then get MIDI files sounding roughly the same loudness using this overall control at the start of every file. The setting of 110 is my setting; yours may be different. (Normally this defaults to 127, i.e. maximum output. "Hey, too LOUD man!")
Meas./Beat/Tick   Message Type         Value           Hexadecimal
0001 01 200       Univ non real time   GM2 System On   F0 7E 7F 09 03 F7
0001 01 250       MIDI Master Volume   110             F0 7F 7F 04 01 00 6E F7
Here are some useful values for you for the second to last byte (i.e. the 6E).
For volume = 90 to 95 change 6E to 5A thru 5F.
For volume = 96 to 105 change 6E to 60 thru 69.
For volume = 106 to 111 change 6E to 6A thru 6F.
For volume = 112 to 121 change 6E to 70 thru 79.
For volume = 122 to 127 change 6E to 7A thru 7F.
This should give you masses of volume range to play with.
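Rather than working from the lookup table above, you can compute the second-to-last byte directly: it's simply the volume value itself in hex. Here's a small helper of my own that builds the whole Master Volume SysEx for any setting:

```python
# Sketch: build the Universal Real Time Master Volume SysEx.
# Layout: F0 7F 7F 04 01 <fine> <coarse> F7 -- the coarse (second-to-last)
# byte carries the 0-127 volume; the fine byte is left at zero.

def master_volume(volume: int) -> bytes:
    if not 0 <= volume <= 127:
        raise ValueError("volume must be 0-127")
    return bytes([0xF0, 0x7F, 0x7F, 0x04, 0x01, 0x00, volume, 0xF7])

print(master_volume(110).hex(" ").upper())  # F0 7F 7F 04 01 00 6E F7
```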
----------------------------------------------------------------------

Global effects settings
Before we start to set up the individual instruments we may want to define the acoustic in which our MIDI instruments are going to play i.e. the kind of room or the FX. My personal preference is to put these messages within the second beat in the first measure. i.e. 0001.02.000. You don't have to be a control freak like me!
The following are GM2 reverb set-up messages.
Meas./Beat/Tick   Message Type        Value         Hexadecimal
0001 02 000       Global Param Ctrl   Reverb type   F0 7F 7F 04 05 01 01 01 01 01 00 04 F7
Wow! That's some SysEx message. Don't worry about all the preamble. The important stuff is in the two bytes before the F7. The 00 says "set reverb type" the 04 is the room type, in this case "large hall", the default setting. Here are the permitted GM2 values for the room type:
00 = Small Room
01 = Medium Room
02 = Large Room
03 = Medium Hall
04 = Large Hall
10 = Plate Reverb Simulation
N.B. If all you want is the default Large Hall then there's no need to input this at all. Phew!
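Since only the second-to-last data byte changes, the whole family of reverb-type messages can be generated from the list above. This is my own sketch; the type values are copied straight from the table, and the preamble bytes are the ones shown in the message above:

```python
# Sketch: build the Global Parameter Control "set reverb type" SysEx
# for any of the room types listed above. Only the byte before F7 varies.

REVERB_TYPES = {
    "small room": 0x00, "medium room": 0x01, "large room": 0x02,
    "medium hall": 0x03, "large hall": 0x04, "plate": 0x10,
}

def reverb_type(name: str) -> bytes:
    value = REVERB_TYPES[name.lower()]
    # F0 7F 7F 04 05 01 01 01 01 01 00 <value> F7
    return bytes([0xF0, 0x7F, 0x7F, 0x04, 0x05,
                  0x01, 0x01, 0x01, 0x01, 0x01, 0x00, value, 0xF7])

print(reverb_type("large hall").hex(" ").upper())
# F0 7F 7F 04 05 01 01 01 01 01 00 04 F7
```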

The MIDI specification then goes on to say that an undefined period of time should be left in order to allow the sound module to implement the command.

Further messages of this kind can then be input to set up things like Reverb Time, Chorus Type and several other parameters that affect the overall sound. It's possible that I may cover these in a later article.
--------------------------------------------------------------------------------

First instrument track
Having reset the device and selected the global settings we want to use, we can now go on to set up the instruments we want on each channel and all the other settings we want, i.e. channel volume, expression, pan, reverb amount for each instrument etc. If, however, we're using anything more modern than a basic GM sound module then we ought to define exactly which, of the many, variations of sounds we want to use. We do this by issuing two Bank Select messages before the instrument select (Program Change) message.
The first bank select message, known as Most Significant Byte or MSB, selects whether it's a Melody (79Hex) or Rhythm (78Hex) Channel. The second, known as the Least Significant Byte or LSB, selects which of the several available banks is to be used.

These selections should be, at the very minimum, one pulse (or tick) apart as follows:
Meas./Beat/Tick   Message Type      Value          Hexadecimal
0001 03 000       Channel control   Bank sel MSB   Bn 00 79

N.B. n = the MIDI channel number. 0 for Channel 1, 1 for channel 2 and so on. Values 0 to 9 and A to F.
Meas./Beat/Tick   Message Type      Value          Hexadecimal
0001 03 001       Channel control   Bank sel LSB   Bn 20 xx
N.B.
n = the MIDI channel number. 0 for Channel 1, 1 for channel 2 and so on. Values 0 to 9 and A to F.
xx = the bank type. 00 would be basic GM sounds. For a few instruments there are up to 4 variations (01 to 04), in the SFX section up to 9 variations for a few.

If there's a general call for the info, I could publish a list of the GM2 sound banks.

Many (most) sequencers have the habit of piling all these messages into the very first pulse of the first beat of the first measure. If there are too many messages received at the same time it can cause a sound module at best to select the wrong voice or no voice at all, at worst to lock up completely! It's a very good idea, although extremely tedious, to spread these messages a pulse (or tick) apart.
Meas./Beat/Tick   Message Type     Value          Hexadecimal
0001 03 002       Program change   Voice select   Cn yy
N.B.
n = the MIDI channel number. 0 for Channel 1, 1 for channel 2 and so on. Values 0 to 9 and A to F.
yy = the instrument or sound effect you want in hexadecimal. It's probably easier to let your sequencer do this for you and once it's done then input the bank selects and tweak the timing if necessary.
All settings shown are for GM2 sounds. GS and XG have a lot more settings.

Setting up a Rhythm Channel.
As said before, by default in GM, GM2, GS and XG, the rhythm channel is automatically set to channel 10. GM2 and GS devices should let you add an additional rhythm channel on channel 11. So you can have two kits. The Yamaha XG specification (v2) says that you can assign any normal melody channel to be a rhythm channel. I imagine that it will depend upon how old the Yamaha sound module is as to whether this is true.

Anyway, to change the kit away from the standard kit on a GM2 device on channel 10 you'll need the following messages:
Meas./Beat/Tick   Message Type      Value          Hexadecimal
0001 02 000       Channel control   Bank sel MSB   B9 00 78
0001 02 001       Channel control   Bank sel LSB   B9 20 00
0001 02 002       Program change    Voice select   C9 yy
Where yy is 00H for the standard kit then:
yy = 09H for the "Room set"
yy = 11H for the "Power set"
yy = 19H for the "Electronic set"
yy = 1AH for the "Analog set"
yy = 21H for the "Jazz set"
yy = 29H for the "Brush set"
yy = 31H for the "Orchestra set"
and lastly
yy = 39H for the "SFX set"

N.B. These "additional" GM2 drum sets typically only add a few sounds by replacing sounds in the Standard kit, e.g. Note 36, Bass Drum 1 in the Standard kit is replaced with Jazz Kick 1 in the Jazz kit. The replaced instruments can be as few as two voices, as in the Jazz kit, or as many as twenty-four, as in the Orchestra kit.
All, that is, except the SFX kit which isn't really a drum kit at all.
The complete GM2 Standard kit has sixty-one instruments. I might post a list if anyone is interested.
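The three rhythm-channel messages can be generated from the kit list above. This is my own sketch; the kit numbers are the ones given in the list, and the channel nibble follows the 0-based numbering rule stated earlier (0 for channel 1, so channel 10 is nibble 9):

```python
# Sketch: the Bank Select MSB/LSB and Program Change that select a
# GM2 drum kit on a rhythm channel. Kit numbers as listed above.

GM2_KITS = {
    "standard": 0x00, "room": 0x09, "power": 0x11, "electronic": 0x19,
    "analog": 0x1A, "jazz": 0x21, "brush": 0x29, "orchestra": 0x31,
    "sfx": 0x39,
}

def rhythm_setup(channel: int, kit: str):
    """Bank MSB 0x78 (rhythm), bank LSB 0, then Program Change to the kit."""
    return [
        bytes([0xB0 | channel, 0x00, 0x78]),   # Bank Select MSB: rhythm bank
        bytes([0xB0 | channel, 0x20, 0x00]),   # Bank Select LSB: sub-bank 0
        bytes([0xC0 | channel, GM2_KITS[kit]]),  # Program Change: the kit
    ]

for msg in rhythm_setup(9, "jazz"):            # channel 10 -> nibble 9
    print(msg.hex(" ").upper())
```

Remember to place the three messages a tick apart, as in the tables above, rather than all on the same pulse.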

An additional rhythm channel
If you want to add an additional rhythm channel then for GM2 it should be channel 11. It's done by sending a sequence of messages very similar to the set of three directly above but with the channel changed as follows:
Meas./Beat/Tick   Message Type      Value          Hexadecimal
0001 02 000       Channel control   Bank sel MSB   BA 00 78
0001 02 001       Channel control   Bank sel LSB   BA 20 00
0001 02 002       Program change    Voice select   CA yy
Where, again, yy represents the number of the drum kit as above.

Okay, I think that's fairly exhaustive not to say exhausting.

What goes in the second measure?
Well, we can insert the following meta data into each channel if we so wish, but the normal place for this information is within the set-up measure. You can, if you like, leave it out if it's the same as the set-up measure. Then we may (usually do) need to change the Tempo to something other than the relatively quick 120 bpm that we used for the set-up measure. Again, your call, if it's the same. If a Key signature is needed then it can be put here at the beginning of the real music data. As follows:
Meas./Beat/Tick   Message Type     Value   Hexadecimal
0002 01 000       Time signature   4/4     FF 58 04 04 02 18 08
0002 01 000       Tempo            100     FF 51 03 09 27 C0
0002 01 000       Key Signature    C maj   FF 59 02 00 00
Your MIDI file follows on from here
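To tie the pieces together, here's a minimal sketch of my own (not from the article, and the output filename is just an example) that writes the core of a set-up measure to a Standard MIDI File using only the Python standard library. The event bytes are the ones tabulated above: GM System On at tick 0, GM2 System On 200 ticks later, Master Volume 110 a further 50 ticks on:

```python
import struct

def vlq(n: int) -> bytes:
    """Encode a delta time as a MIDI variable-length quantity."""
    out = [n & 0x7F]
    n >>= 7
    while n:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    return bytes(reversed(out))

def sysex(data: bytes) -> bytes:
    """SMF SysEx event: F0 <length> <payload ending in F7> (leading F0 stripped)."""
    body = data[1:]
    return b"\xF0" + vlq(len(body)) + body

# Delta times are relative to the previous event, so 200 then 50
# reproduces the 0001.01.000 / .200 / .250 positions in the tables.
events = [
    (0,   b"\xFF\x58\x04\x04\x02\x18\x08"),             # 4/4 time signature
    (0,   b"\xFF\x51\x03\x07\xA1\x20"),                 # tempo 120 bpm
    (0,   sysex(b"\xF0\x7E\x7F\x09\x01\xF7")),          # GM System On
    (200, sysex(b"\xF0\x7E\x7F\x09\x03\xF7")),          # GM2 System On
    (50,  sysex(b"\xF0\x7F\x7F\x04\x01\x00\x6E\xF7")),  # Master Volume 110
]

track = b""
for delta, ev in events:
    track += vlq(delta) + ev
track += vlq(0) + b"\xFF\x2F\x00"                       # End of Track

# Format 0 header: one track, 480 pulses per quarter note.
midi = (b"MThd" + struct.pack(">IHHH", 6, 0, 1, 480)
        + b"MTrk" + struct.pack(">I", len(track)) + track)

with open("setup_measure.mid", "wb") as fh:
    fh.write(midi)
```

Your own instrument and rhythm-channel messages would be appended to the event list before the End of Track, a tick apart as described above.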

Next part to follow shortly.