The strategy discussed above is coded as follows (the code file is available as Music_generation.ipynb on GitHub, along with the recommended audio file):
- Import the relevant packages and dataset:
!pip install mido music21
import mido, glob, os
from mido import MidiFile, MidiTrack, Message
import numpy as np
from music21 import converter, instrument, note, chord
from keras.utils import np_utils
from keras.layers import Input, LSTM, Dropout, Dense, Activation
from keras.models import Model
fname = '/content/nintendo.mid'
- Read the content of the file:
midi = converter.parse(fname)
The preceding code parses the MIDI file into a music21 stream of scores.
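As a quick sanity check (a hypothetical snippet, not part of the original notebook), you can confirm that the file was parsed into a music21 stream and count the note and chord events it contains:
print(type(midi))            # the parsed music21 stream/Score object
print(len(midi.flat.notes))  # number of note and chord events in the file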
- Define a function that reads the stream of scores and extracts the notes from it (along with silence, if present), as sketched below:
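The following is a minimal sketch of such a function, assuming a music21-style traversal; the function name get_notes, the 'rest' token used for silence, and the normalOrder encoding for chords are illustrative choices and may differ from the notebook's exact implementation:
def get_notes(score):
    # Extract a sequence of note/chord/rest tokens from a parsed score
    notes = []
    parts = instrument.partitionByInstrument(score)
    if parts:  # the file has instrument parts: use the first part
        elements = parts.parts[0].recurse()
    else:      # flat structure: take all notes and rests directly
        elements = score.flat.notesAndRests
    for element in elements:
        if isinstance(element, note.Note):
            notes.append(str(element.pitch))   # e.g. 'C4'
        elif isinstance(element, chord.Chord):
            # encode a chord as its dot-joined pitch classes, e.g. '4.7.11'
            notes.append('.'.join(str(n) for n in element.normalOrder))
        elif isinstance(element, note.Rest):
            notes.append('rest')               # represent silence
    return notes

notes = get_notes(midi)
The returned list of string tokens can then be mapped to integers before being fed to the model.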