Commit 8d32c71: update README
icoxfog417 committed Apr 16, 2017 (1 parent: 200d8c0)
Showing 4 changed files with 120 additions and 111 deletions.
115 changes: 4 additions & 111 deletions README.md
@@ -30,6 +30,10 @@ The model is ported from [ai-duet](https://github.com/googlecreativelab/aiexperi

![gui.PNG](./docs/gui.PNG)

### Additional Usage

* [Train Your own Model](https://github.com/icoxfog417/magenta_session/tree/master/scripts)
* [Session with your own model](https://github.com/icoxfog417/magenta_session/tree/master/server)

## Install

@@ -77,119 +81,8 @@ $ docker run -it --rm -p 80:8080 magenta-session

You can now play with `magenta_session` at `http://<docker-server-ipaddress>/`.


Session Now and Enjoy Music!


## Dependencies

Python
Binary file added docs/process_overview.png
100 changes: 100 additions & 0 deletions scripts/README.md
@@ -0,0 +1,100 @@
# Train your own model

You can create your own model from your MIDI files!
The procedure is almost the same as for [MelodyRNN](https://github.com/tensorflow/magenta/tree/master/magenta/models/melody_rnn), so refer to its documentation as needed.

![process_overview.png](../docs/process_overview.png)

1. Prepare the MIDI data
2. Create the NoteSequence from MIDI files
3. Convert the NoteSequence to Dataset for Model
4. Training the Model
5. Create the Model
6. Generate the MIDI files

## 1. Prepare the MIDI data

Gather your favorite MIDI files and store them in `data/raw`. You can find MIDI files at the following sites.

* [midiworld.com](http://www.midiworld.com/files/142/)
* [FreeMIDI.org](https://freemidi.org/)
* [The Lakh MIDI Dataset v0.1](http://colinraffel.com/projects/lmd/)

Game Music

* [Video Game Music Archive](http://www.vgmusic.com/)
* [THE MIDI SHRINE](http://www.midishrine.com/)
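Once downloaded, you can sanity-check what landed in `data/raw` with a small script. This is a stdlib-only sketch; the `data/raw` layout is the one this repository assumes:

```python
from pathlib import Path

def list_midi_files(raw_dir):
    """Collect MIDI files (.mid / .midi, any case) under the raw data directory."""
    raw = Path(raw_dir)
    return sorted(p for p in raw.glob("**/*") if p.suffix.lower() in (".mid", ".midi"))

print("found %d MIDI files" % len(list_midi_files("data/raw")))
```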

## 2. Create the NoteSequence from MIDI files

Run the following command to convert the MIDI files to `NoteSequence`.

```
python scripts/data/create_note_sequences.py
```

Then, you can find `notesequences.tfrecord` in the `data/interim` folder.
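`NoteSequence` is Magenta's protobuf representation of the note events a MIDI file encodes. For intuition about what the converter reads, here is a stdlib-only sketch that parses just the fixed 14-byte `MThd` header chunk of a Standard MIDI File (illustrative only; the script above does the real conversion):

```python
import struct

def parse_midi_header(data):
    """Parse the 14-byte MThd header chunk of a Standard MIDI File.

    Assumes a ticks-per-beat division (high bit clear); SMPTE divisions
    are not handled in this sketch.
    """
    chunk_id, length, fmt, ntracks, division = struct.unpack(">4sIHHH", data[:14])
    if chunk_id != b"MThd" or length != 6:
        raise ValueError("not a Standard MIDI File")
    return {"format": fmt, "tracks": ntracks, "ticks_per_beat": division}
```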

## 3. Convert the NoteSequence to Dataset for Model

We mainly use [`MelodyRNN`](https://github.com/tensorflow/magenta/tree/master/magenta/models/melody_rnn#melody-rnn), so convert the `NoteSequence` file with its dataset script.

Specify which kind of model you will use with the `--config` argument.

* basic_rnn
* lookback_rnn
* attention_rnn

```
python scripts/data/convert_to_melody_dataset.py --config attention_rnn
```

At the same time, the dataset is split into training and evaluation sets. You can set the split ratio with `--eval_ratio` (default: `0.1`).
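The split itself amounts to routing a fraction of the examples into an evaluation set. A minimal sketch of the idea (the `--eval_ratio` semantics are taken from the description above; the real script operates on TFRecord files, not Python lists):

```python
import random

def split_dataset(examples, eval_ratio=0.1, seed=0):
    """Shuffle examples and split them into (training, evaluation) lists."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = list(examples)
    rng.shuffle(shuffled)
    n_eval = int(len(shuffled) * eval_ratio)
    return shuffled[n_eval:], shuffled[:n_eval]
```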

## 4. Training the Model

Once the dataset is prepared, you can begin training!
Run `train_model.py` and specify the model parameters like the following.

```
python scripts/models/train_model.py --config attention_rnn --hparams="{'batch_size':64,'rnn_layer_sizes':[64,64]}" --num_training_steps=20000
```

(The parameters are almost the same as in the [original MelodyRNN instructions](https://github.com/tensorflow/magenta/tree/master/magenta/models/melody_rnn#train-and-evaluate-the-model).)
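The `--hparams` value is a Python-literal dictionary. A sketch of how such a string can be read safely with the stdlib (`ast.literal_eval` here is an assumption for illustration; Magenta has its own hyperparameter parser):

```python
import ast

def parse_hparams(hparams_str):
    """Safely evaluate an --hparams style string into a dict."""
    params = ast.literal_eval(hparams_str)  # literals only, no arbitrary code
    if not isinstance(params, dict):
        raise ValueError("hparams must be a dict literal")
    return params
```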

You can watch the training progress with TensorBoard.
First, run the evaluation job:

```
python scripts/models/train_model.py --config attention_rnn --hparams="{'batch_size':64,'rnn_layer_sizes':[64,64]}" --num_training_steps=20000 --eval
```

Then launch TensorBoard and open [`http://localhost:6006`](http://localhost:6006).

```
tensorboard --logdir=models/logdir
```

![training.PNG](../docs/training.PNG)

## 5. Create the Model

After training, create your own model file.

```
python scripts/models/create_bundle.py --bundle_file my_model
```

Then your model is stored in the `models/` directory!
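Conceptually, "bundling" packs the trained checkpoint and its metadata into one portable file. A hypothetical sketch using a tar archive (Magenta's real bundle is a single protobuf file, not a tar; the file names here are illustrative only):

```python
import tarfile
from pathlib import Path

def bundle_model(checkpoint_paths, bundle_file):
    """Pack checkpoint files into one archive (illustrative stand-in for a real bundle)."""
    with tarfile.open(bundle_file, "w:gz") as tar:
        for path in checkpoint_paths:
            tar.add(path, arcname=Path(path).name)  # store by base name only
    return bundle_file
```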

## 6. Generate the MIDI files

Now, let's try generating MIDI files! You can do it with the script below.

```
python scripts/models/generate_midi.py --bundle_file=my_model --num_outputs=10 --num_steps=128 --primer_melody="[60]"
```

(The parameters are almost the same as in the [original MelodyRNN instructions](https://github.com/tensorflow/magenta/tree/master/magenta/models/melody_rnn#generate-a-melody).)

If it succeeds, the MIDI files will be stored in the `data/generated` directory. Sound good? Enjoy!
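The `--primer_melody` value is a list of MIDI pitch numbers; `60` is middle C. A small helper to see what a pitch number means (standard MIDI pitch arithmetic, stdlib only):

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pitch_to_name(pitch):
    """Convert a MIDI pitch number (0-127) to scientific pitch notation."""
    octave = pitch // 12 - 1  # MIDI convention: pitch 60 is C4 (middle C)
    return "%s%d" % (NOTE_NAMES[pitch % 12], octave)
```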
16 changes: 16 additions & 0 deletions server/README.md
@@ -0,0 +1,16 @@
# Session with Your own Model

Want to session with your own musical model?
If so, set the `MAGENTA_MODEL` environment variable.

```
export MAGENTA_MODEL=my_model
```
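On the server side, the bundle name can then be read from the environment. A sketch of the lookup (the variable name comes from above; the fallback value here is an assumption for illustration):

```python
import os

def get_model_name(default="attention_rnn"):
    """Read the bundle name from MAGENTA_MODEL, falling back to a default."""
    return os.environ.get("MAGENTA_MODEL", default)
```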

Then start up the **magenta session server**!

```
python server/server.py
```

You can now play call & response with your own model!
