diff --git a/data/about-Weights-and-Biases.md b/data/about-Weights-and-Biases.md index 0549be39..dd387f68 100644 --- a/data/about-Weights-and-Biases.md +++ b/data/about-Weights-and-Biases.md @@ -1,6 +1,6 @@ ## Weights & Biases -- https://www.wandb.com/ +- [https://www.wandb.com/](https://www.wandb.com/) - W&B is a key piece of our fast-paced, cutting-edge, large-scale research workflow: great flexibility, performance, and user experience. - Experiment tracking for deep learning - Instrument training scripts @@ -23,8 +23,17 @@ - https://docs.wandb.com/docs/started.html - Examples - https://docs.wandb.com/docs/examples.html +- Code & concepts + - [Code snippets](./wandb/code-snippets.py) + - [Quick and Dirty CNN](./wandb/Quick-and-Dirty-CNN.py) + - [Activation Function](./wandb/Activation-Function.png) - Videos - Tutorial: https://www.wandb.com/classes/intro/overview +- Additional resources + - [Error caused by missing input_shape in your first layer](https://stackoverflow.com/questions/52690293/tensorflow-attributeerror-nonetype-object-has-no-attribute-original-name-sc) + - [Bloomberg summary colab notebook](https://colab.research.google.com/drive/1lfLR9WRzmjOMmnNmePys4-8WNfZ5xC90#scrollTo=wbjXyjFRaT1d) + - https://talktotransformer.com/ - Adam Daniel King's implementation of GPT-2 on the back of the PyTorch version + - ...for more [see this](./wandb/More-resources.md) --- @@ -38,8 +47,10 @@ - [ ] [Feature extraction: manual / no tools available] - [x] **[Model creation: available]** - [x] **[Execute experiments: available]** +- [x] **[Track experiments: available]** - [x] **[Hyper parameter tuning: available]** - [x] **[Model saving: available]** +- [x] **[Visualisations: available]** Back to [Programs and Tools](./programs-and-tools.md#programs-and-tools).
Back to [Data page](./README.md#data). \ No newline at end of file diff --git a/data/wandb/Activation-Function.png b/data/wandb/Activation-Function.png new file mode 100644 index 00000000..b47f38ec Binary files /dev/null and b/data/wandb/Activation-Function.png differ diff --git a/data/wandb/More-resources.md b/data/wandb/More-resources.md new file mode 100644 index 00000000..c89a8710 --- /dev/null +++ b/data/wandb/More-resources.md @@ -0,0 +1,74 @@ +## More Resources + +### Lecture, slide and code links + +- Lectures: https://www.wandb.com/classes/intro/overview +- Code + - https://github.com/lukas/ml-class + - https://github.com/lukas/ml-class/scikit/test-algorithm-cross-validation-dummy.py + - https://github.com/lukas/ml-class/blob/master/examples/notebooks/Lesson-4-Evaluating-Classifiers.ipynb + - https://github.com/lukas/vision-project + - https://github.com/mjhamiltonus/ml-class (all modules with notes) +- Hub sign-in page: https://hub.wandb.us/login +- Slack channel: https://bit.ly/wandb-forum +- Setup instructions: https://bit.ly/wbemotion or http://bit.ly/hub-setup +- Slides: https://storage.googleapis.com/wandb/Bloomberg%20Class%201.pdf +- Cheatsheets + - [ML Class Oct 2018 - CHEATSHEET.md](https://gist.github.com/vanpelt/b52f6f5360be626d2c23189d513f94de) + - https://gist.github.com/vanpelt/b52f6f5360be626d2c23189d513f94de#file-cheatsheet-md + - https://gist.github.com/vanpelt/b52f6f5360be626d2c23189d513f94de#saving-your-progress-optional +- W&B projects + - https://app.wandb.ai/bloomberg-class/imdb-classifier + - https://app.wandb.ai/dronedeploy/dronedeploy-aerial-segmentation/benchmark + - https://app.wandb.ai/mlclass/timeseries-nov1/runs/7bu2q1uv + - https://app.wandb.ai/qualcomm/timeseries-dec3/runs/kyphj85u + - https://app.wandb.ai/qualcomm/timeseries-sep13/runs/a3sfobyy + +### Books to train your LSTM on + +- [Top 100 - Project Gutenberg, 33000+ free ebooks online](http://www.gutenberg.org/browse/scores/top) +- Code: https://github.com/lukas/ml-class/tree/master/examples/lstm/text-gen +- [Complete works of Shakespeare](http://shakespeare.mit.edu/) +- [An interesting dataset](http://www.trumptwitterarchive.com/archive/none/tfff/1-1-2015_11-1-2018) +- [Tab-delimited Bilingual Sentence Pairs](http://www.manythings.org/anki/) + +### Bloomberg and LSTM classes (slides) + +- [Bloomberg Class 1](https://wb-ml.slack.com/files/UN2SL6G7Q/FNR5RJ2MS/bloomberg_class_1.pdf) +- [Bloomberg Class 2](https://wb-ml.slack.com/files/UN2SL6G7Q/FNE9193U0/bloomberg_class_2.pdf) +- [Bloomberg Class 3](https://wb-ml.slack.com/files/UN2SL6G7Q/FNE3Q7NN7/bloomberg_class_3.pdf) +- [Bloomberg Class 4 & 5](https://wb-ml.slack.com/files/UN2SL6G7Q/FNZQU6FE1/bloomberg_class_4.pdf) +- [Bloomberg Class 6](https://wb-ml.slack.com/files/UCBGFQ0RJ/FPG96CLTX/bloomberg_class_6.pdf) +- [Bloomberg Class 7](https://wb-ml.slack.com/files/UN2SL6G7Q/FPQQXNX5E/bloomberg_class_7.pdf) +- [Bloomberg Class 8](https://wb-ml.slack.com/files/UCAGCLW48/FPZ8MGYP6/bloomberg_class_8.pdf) +- [Bloomberg Class 8 - audio processing](https://wb-ml.slack.com/files/UCAGCLW48/FQARW1A30/class_8_audio_processing.pdf) +- [Bloomberg Class 9](https://wb-ml.slack.com/files/UCAGCLW48/FQHND8VJR/class_9_concept_review.pdf) +- [ML Class LSTM: Apr 2019](https://storage.googleapis.com/wandb/ML%20Class%20LSTM%20-%20Apr%2030%20-%202019.pdf) +- [ML Class LSTM: Nov](https://storage.googleapis.com/wandb-production.appspot.com/mlclass/ML%20Class%20LSTM%20-%20Nov1%20.pdf) +- [ML Class LSTM - Dec 
3](https://drive.google.com/open?id=1gJvL9Nl67qQMS0pv9IscwwPrrofsmtY7)
+
+### Questions and answers
+
+Q: How do you know what a good learning rate is?
+
+A: To find the best learning rate, start with a very low value (10^-5) and slowly multiply the rate by a constant (e.g. 10) until you hit a very high value (e.g. 1). So you'll try 0.00001, …, 0.01, 0.1, 1. The best learning rate is usually about half of the learning rate that causes the model to diverge. I'd also recommend the Learning Rate finder proposed by Leslie Smith: it's an excellent way to find a good learning rate for most gradient optimizers (most variants of SGD) and works with most network architectures. https://arxiv.org/abs/1506.01186 (a minimal sketch of such a sweep is appended to [code-snippets.py](./code-snippets.py))
+
+Q: What is the meaning of bottleneck features?
+
+A: It means storing the output of the second-to-last layer of the network you're transferring from and training a new network that uses it as input.
+
+So usually, when we do transfer learning we re-train the last few layers, but when we save bottleneck features we only re-train the last layer? Why is it called "bottleneck", though? The word is typically used for the slow-moving part of a process, right?
+
+Training the last few layers is called fine-tuning. Bottleneck features are called that because they are generally much smaller than the input features, so using the network to generate them is like pushing the data through a bottleneck.
+
+### Misc resources
+
+- [Error caused by missing input_shape in your first layer](https://stackoverflow.com/questions/52690293/tensorflow-attributeerror-nonetype-object-has-no-attribute-original-name-sc)
+- [Bloomberg summary colab notebook](https://colab.research.google.com/drive/1lfLR9WRzmjOMmnNmePys4-8WNfZ5xC90#scrollTo=wbjXyjFRaT1d)
+- https://talktotransformer.com/ - Adam Daniel King's implementation of GPT-2 on the back of the PyTorch version
+- [Automated Bug Triaging](http://bugtriage.mybluemix.net/#code)
+- https://tensorspace.org/html/playground/lenet.html
+- https://towardsdatascience.com/neural-network-architectures-156e5bad51ba
+- https://jyothi-gupta.blogspot.com
+- https://hackernoon.com/imagine-a-drunk-island-advice-for-finding-ai-use-cases-8d47495d4c3f
+- https://github.com/jupyterlab/jupyterlab/issues/1146
diff --git a/data/wandb/Quick-and-Dirty-CNN.py b/data/wandb/Quick-and-Dirty-CNN.py
new file mode 100644
index 00000000..557ff377
--- /dev/null
+++ b/data/wandb/Quick-and-Dirty-CNN.py
@@ -0,0 +1,35 @@
+# normalize data
+X_train = X_train.astype('float32') / 255.
+X_test = X_test.astype('float32') / 255.
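+# The reshape below adds an explicit channel axis: Keras Conv2D layers expect
+# input of shape (height, width, channels), so each 28x28 grayscale image
+# becomes (28, 28, 1).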
+N_train = X_train.shape[0] +N_test = X_test.shape[0] +X_train = X_train.reshape(N_train, 28,28,1) +X_test = X_test.reshape(N_test, 28,28,1) + +# create model +print ('test dimension:....', X_train.shape) +model=Sequential() +#model.add(Flatten(input_shape=(img_width, img_height))) +#model.add(Dense(128, activation="relu")) +#model.add(Dense(num_classes, activation="softmax")) + +#~~~~~~~~~~~~ + +con_width = 16 +conv_height = 16 +model.add(Conv2D(32,(con_width, conv_height), input_shape=(28, 28,1), activation='relu')) +model.add(MaxPooling2D(pool_size=(2, 2))) +model.add(Flatten()) +dense_layer_size = 128 +model.add(Dense(dense_layer_size, activation='relu')) +model.add(Dense(num_classes, activation='softmax')) +#~~~~~~~~~~~~~~~~ +# create model +#model=Sequential() +#model.add(Flatten(input_shape=(img_width, img_height))) +#model.add(Dense(num_classes)) +#model.compile(loss=config.loss, optimizer=config.optimizer, +# metrics=['accuracy']) + +model.compile(loss="categorical_crossentropy", optimizer="adam", + metrics=['accuracy']) \ No newline at end of file diff --git a/data/wandb/code-snippets.py b/data/wandb/code-snippets.py new file mode 100644 index 00000000..fc693315 --- /dev/null +++ b/data/wandb/code-snippets.py @@ -0,0 +1,795 @@ +### Code snippets + +model.summary() + +for layer in base_model.layers[:200]: + layer.trainable = False + +# normalize data +X_train = X_train.astype('float32') / 255. +X_test = X_test.astype('float32') / 255. +# create model +model=Sequential() +model.add(Flatten(input_shape=(img_width, img_height))) +model.add(Dense(128, activation="relu")) +model.add(Dense(num_classes, activation="softmax")) +model.compile(loss="categorical_crossentropy", optimizer="adam", + metrics=['accuracy']) + +### --- + +# normalize data +X_test = X_test.astype("float32") / 255. +X_train = X_train.astype("float32") / 255. + +# create model +model=Sequential() +model.add(Reshape((28,28,1), input_shape=(28,28))) +model.add(Conv2D(32, (3,3), padding='same', activation='relu')) +model.add(MaxPooling2D()) +model.add(Conv2D(64, (3,3), padding='same', activation='relu')) +model.add(MaxPooling2D()) +model.add(Conv2D(128, (3,3), padding='same', activation='relu')) +model.add(MaxPooling2D()) +model.add(Dropout(0.4)) +model.add(Flatten(input_shape=(img_width, img_height))) +model.add(Dropout(0.4)) +model.add(Dense(20, activation='relu')) +model.add(Dropout(0.4)) +model.add(Dense(num_classes, activation="softmax")) +model.compile(loss=config.loss, optimizer=config.optimizer, + metrics=['accuracy']) + +### --- +# A very simple perceptron for classifying american sign language letters +import signdata +import numpy as np +from keras.models import Sequential +from keras.layers import Dense, Flatten, Dropout, BatchNormalization, Conv2D, MaxPooling2D +from keras.callbacks import ReduceLROnPlateau, EarlyStopping +from keras.utils import np_utils +import wandb +from wandb.keras import WandbCallback +# logging code +run = wandb.init() +config = run.config +config.loss = "categorical_crossentropy" +config.optimizer = "adam" +config.first_layer_conv_width = 3 +config.first_layer_conv_height = 3 +config.epochs = 50 +# load data +(X_test, y_test) = signdata.load_test_data() +(X_train, y_train) = signdata.load_train_data() +img_width = X_test.shape[1] +img_height = X_test.shape[2] +# one hot encode outputs +y_train = np_utils.to_categorical(y_train) +y_test = np_utils.to_categorical(y_test) +num_classes = y_train.shape[1] +# you may want to normalize the data here.. 
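+# Pixel values arrive as integers in the range 0-255; casting to float32 and
+# dividing by 255 rescales them to [0, 1], which keeps activations and
+# gradients in a well-behaved range for the first layers.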
+X_test = X_test.astype('float32') / 255. +X_train = X_train.astype('float32') / 255. +X_test = X_test.reshape((-1,img_width,img_height,1)) +X_train = X_train.reshape((-1,img_width,img_height,1)) +# create model +model = Sequential() +model.add(Conv2D(64, + (config.first_layer_conv_width, config.first_layer_conv_height), + padding='same', + input_shape=(img_width, img_height,1), + activation='relu')) +#model.add(BatchNormalization()) +model.add(MaxPooling2D(pool_size=(2, 2))) +model.add(Dropout(0.2)) +model.add(Conv2D(64, + (config.first_layer_conv_width, config.first_layer_conv_height), + padding='same', + input_shape=(img_width, img_height,1), + activation='relu')) +#model.add(BatchNormalization()) +model.add(MaxPooling2D(pool_size=(2, 2))) +model.add(Dropout(0.2)) +model.add(Conv2D(64, + (config.first_layer_conv_width, config.first_layer_conv_height), + padding='same', + input_shape=(img_width, img_height,1), + activation='relu')) +model.add(MaxPooling2D(pool_size=(2, 2))) +model.add(Dropout(0.2)) +model.add(Flatten(input_shape=(img_width, img_height))) +model.add(Dense(128, activation='relu')) +model.add(Dropout(0.2)) +model.add(Dense(num_classes, activation='softmax')) +model.compile(loss=config.loss, optimizer=config.optimizer, + metrics=['accuracy']) +earlystopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=6, verbose=1, mode='auto', baseline=None, restore_best_weights=True) +reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=2, verbose=1, mode='auto', min_delta=0.0001, cooldown=0, min_lr=0.0001) +# Fit the model +model.fit(X_train, y_train, epochs=config.epochs, validation_data=(X_test, y_test), callbacks=[WandbCallback(data_type="image", labels=signdata.letters),reduce_lr,earlystopping]) + +### --- + +from keras.optimizers import RMSprop +trainX = trainX[:, :, np.newaxis] +testX = testX[:, :, np.newaxis] +config.look_back=4 +# create and fit the RNN +model = Sequential() +model.add(SimpleRNN(5, input_shape=(config.look_back,1 ))) +model.add(Dense(1)) +model.compile(loss='mae', optimizer=RMSprop(lr=0.1)) +model.fit(trainX, trainY, epochs=1000, batch_size=20, validation_data=(testX, testY), callbacks=[WandbCallback(), PlotCallback(trainX, trainY, testX, testY, config.look_back)]) + +### --- + +cd ~/ml-class +cat **/*.py > ml-class/lstm/text-gen/code.txt +cd ml-class/lstm/text-gen +python char-gen.py code.txt + +### --- + +model = Sequential() +model.add(GRU(config.hidden_nodes, input_shape=(config.maxlen, len(chars)))) +model.add(Dense(len(chars), activation='softmax')) +model.compile(loss='categorical_crossentropy', optimizer="rmsprop") + +### --- + +model.add(Bidirectional(LSTM(config.hidden_dims, activation="sigmoid"))) + +### --- + +from keras.layers import Conv1D, Flatten, MaxPool1D +​ +​ +model = Sequential() +model.add(Embedding(config.vocab_size, + config.embedding_dims, + input_length=config.maxlen)) +model.add(Dropout(0.5)) +model.add(Conv1D(config.filters, + config.kernel_size, + padding='valid', + activation='relu')) +#~~~~~~~custom maxpool and Con1D +model.add((MaxPool1D(4))) +model.add(Dropout(0.5)) +model.add(Conv1D(config.filters, + config.kernel_size, + padding='valid', + activation='relu')) +model.add((MaxPool1D(4))) +model.add(Dropout(0.5)) +#~~~~~~~~~~custom LSTM layer~~~~~~ +​ +model.add(LSTM(config.hidden_dims, activation="relu", , return_sequences=True)) + +-- + +from keras.preprocessing import sequence +from keras.models import Sequential +from keras.layers import Dense, Dropout, Activation, MaxPooling1D +from keras.layers 
import Embedding, LSTM +from keras.layers import Conv1D, Flatten +from keras.datasets import imdb +import wandb +from wandb.keras import WandbCallback +import imdb +import numpy as np +from keras.preprocessing import text + +wandb.init() +config = wandb.config + +# set parameters: +config.vocab_size = 1000 +config.maxlen = 1000 +config.batch_size = 32 +config.embedding_dims = 50 +config.filters = 250 +config.kernel_size = 3 +config.hidden_dims = 250 +config.epochs = 10 + +(X_train, y_train), (X_test, y_test) = imdb.load_imdb() +print("Review", X_train[0]) +print("Label", y_train[0]) + +tokenizer = text.Tokenizer(num_words=config.vocab_size) +tokenizer.fit_on_texts(X_train) +X_train = tokenizer.texts_to_sequences(X_train) +X_test = tokenizer.texts_to_sequences(X_test) + +X_train = sequence.pad_sequences(X_train, maxlen=config.maxlen) +X_test = sequence.pad_sequences(X_test, maxlen=config.maxlen) +print(X_train.shape) +print("After pre-processing", X_train[0]) + +model = Sequential() +model.add(Embedding(config.vocab_size, + config.embedding_dims, + input_length=config.maxlen)) +model.add(Dropout(0.5)) + +model.add(Conv1D(config.filters, + config.kernel_size, + padding='valid', + activation='relu')) + +model.add(MaxPooling1D((2))) # size is 499,250 + +model.add(Conv1D(config.filters, + config.kernel_size, + padding='valid', + activation='relu')) + +model.add(Flatten()) + +model.add(Dropout(0.5)) # + + +model.add(Dense(config.hidden_dims, activation='relu')) +model.add(Dropout(0.5)) + + +model.add(Dense(1, activation='sigmoid')) + +model.compile(loss='binary_crossentropy', + optimizer='adam', + metrics=['accuracy']) + +model.fit(X_train, y_train, + batch_size=config.batch_size, + epochs=config.epochs, + validation_data=(X_test, y_test), callbacks=[WandbCallback()]) + +### --- + +# create model +model=Sequential() +#model.add(Flatten(input_shape=(img_width, img_height))) +model.add(Reshape((28,28,1), input_shape=(img_width, img_height))) +model.add(Conv2D(8, (3,3) )) +model.add(Dropout(0.3)) +model.add(Dense(100, activation="relu")) +model.add(Dropout(0.3)) +model.add(Dense(num_classes, activation="softmax")) +model.compile(loss=config.loss, optimizer=config.optimizer, + metrics=['accuracy']) + + +# A very simple perceptron for classifying american sign language letters +import signdata +import numpy as np +from keras.models import Sequential +from keras.layers import Dense, Flatten, Dropout, Conv2D, Reshape, MaxPooling2D +from keras.utils import np_utils +import wandb +from wandb.keras import WandbCallback +# logging code +run = wandb.init() +config = run.config +config.loss = "categorical_crossentropy" +config.optimizer = "adam" +config.epochs = 10 +# load data +(X_test, y_test) = signdata.load_test_data() +(X_train, y_train) = signdata.load_train_data() +img_width = X_test.shape[1] +img_height = X_test.shape[2] +# one hot encode outputs +y_train = np_utils.to_categorical(y_train) +y_test = np_utils.to_categorical(y_test) +num_classes = y_train.shape[1] +X_train = X_train.astype('float32') / 255. +X_test = X_test.astype('float32') / 255. 
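+# The model that follows adds a channel axis with Reshape((28, 28, 1)), stacks
+# two small Conv2D(8, (3, 3)) + MaxPooling2D blocks with dropout, then flattens
+# into a Dense(50) hidden layer before the softmax output.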
+# create model +model=Sequential() +#model.add(Flatten(input_shape=(img_width, img_height))) +model.add(Reshape((28,28,1), input_shape=(img_width, img_height))) +model.add(Dropout(0.3)) +model.add(Conv2D(8, (3,3) )) +model.add(MaxPooling2D(2,2)) +model.add(Dropout(0.3)) +model.add(Conv2D(8, (3,3) )) +model.add(MaxPooling2D(2,2)) +model.add(Flatten()) +model.add(Dropout(0.3)) +model.add(Dense(50, activation="relu")) +model.add(Dropout(0.3)) +model.add(Dense(num_classes, activation="softmax")) +model.compile(loss=config.loss, optimizer=config.optimizer, + metrics=['accuracy']) +# Fit the model +model.fit(X_train, y_train, epochs=config.epochs, validation_data=(X_test, y_test), callbacks=[WandbCallback(data_type="image", labels=signdata.letters)]) +#print(model.predict(X_train[:2])) + +@vanpelt we add the empty 3rd dimension because Conv2D always expects 3 dimensions. This is because your doing convolutions on multiple channels. For instance color images have Red Green and Blue channels as the 3rd dimension. +### --- +cd ~/ml-class/lstm/imdb-classifier + +bash download-imdb.sh +### --- + +from keras.preprocessing import sequence +from keras.models import Sequential +from keras.layers import Dense, Dropout, Activation +from keras.layers import Embedding, LSTM +from keras.layers import Conv1D, Flatten, MaxPooling1D, TimeDistributed +from keras.datasets import imdb +import wandb +from wandb.keras import WandbCallback +import imdb +import numpy as np +from keras.preprocessing import text +wandb.init() +config = wandb.config +# set parameters: +config.vocab_size = 1000 +config.maxlen = 1000 +config.batch_size = 50 +config.embedding_dims = 50 +config.filters = 250 +config.kernel_size = 3 +config.hidden_dims = 250 +config.epochs = 10 +(X_train, y_train), (X_test, y_test) = imdb.load_imdb() +print("Review", X_train[0]) +print("Label", y_train[0]) +tokenizer = text.Tokenizer(num_words=config.vocab_size) +tokenizer.fit_on_texts(X_train) +X_train = tokenizer.texts_to_sequences(X_train) +X_test = tokenizer.texts_to_sequences(X_test) +X_train = sequence.pad_sequences(X_train, maxlen=config.maxlen) +X_test = sequence.pad_sequences(X_test, maxlen=config.maxlen) +print(X_train.shape) +print("After pre-processing", X_train[0]) +cnn = Sequential() +cnn.add(Embedding(config.vocab_size, + config.embedding_dims, + input_length=config.maxlen)) +cnn.add(Dropout(0.5)) +cnn.add(Conv1D(config.filters, + config.kernel_size, + padding='valid', + activation='relu')) +cnn.add(Dropout(0.5)) +cnn.add(MaxPooling1D((2))) +cnn.add(Flatten()) + +model = Sequential() +model.add(TimeDistributed(cnn)) +model.add(LSTM(config.hidden_dims, activation="sigmoid")) +model.add(Dense(1, activation='sigmoid')) +model.compile(loss='binary_crossentropy', + optimizer='adam', + metrics=['accuracy']) +model.fit(X_train, y_train, + batch_size=config.batch_size, + epochs=config.epochs, + validation_data=(X_test, y_test), callbacks=[WandbCallback()]) + +### --- + +# A very simple perceptron for classifying american sign language letters +import signdata +import numpy as np +from keras.models import Sequential +from keras.layers import Dense, Flatten, Dropout, Conv2D, MaxPooling2D, Reshape +from keras.utils import np_utils +import wandb +from wandb.keras import WandbCallback + +# logging code +run = wandb.init() +config = run.config +config.loss = "categorical_crossentropy" +config.optimizer = "adam" +config.epochs = 10 + +# load data +(X_test, y_test) = signdata.load_test_data() +(X_train, y_train) = signdata.load_train_data() + +img_width = 
X_test.shape[1] +img_height = X_test.shape[2] + +# one hot encode outputs +y_train = np_utils.to_categorical(y_train) +y_test = np_utils.to_categorical(y_test) + +num_classes = y_train.shape[1] + +# you may want to normalize the data here.. + +# normalize data +X_train = X_train.astype('float32') / 255. +X_test = X_test.astype('float32') / 255. + +# create model +model=Sequential() +model.add(Reshape((28,28,1),input_shape=(28, 28))) +model.add(Conv2D(32, + (3,3), + activation='relu')) +model.add(MaxPooling2D(pool_size=(2, 2))) +model.add(Flatten(input_shape=(img_width, img_height))) +model.add(Dense(num_classes, activation="softmax")) +model.compile(loss=config.loss, optimizer=config.optimizer, + metrics=['accuracy']) + +# Fit the model +model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test), + callbacks=[WandbCallback(data_type="image", labels=signdata.letters)]) +#print("Target", y_train[:2]) +#print("Predictions", model.predict(X_train[:2])) + +### --- + +# create and fit the RNN +model = Sequential() +model.add(SimpleRNN(100, input_shape=(config.look_back,1))) +model.compile(loss='mae', optimizer='rmsprop') +model.fit(trainX, trainY, epochs=1000, batch_size=20, validation_data=(testX, testY), callbacks=[WandbCallback(), PlotCallback(trainX, trainY, testX, testY, config.look_back)]) + +@Lukas suggests: +# create and fit the RNN +model = Sequential() +model.add(SimpleRNN(5, input_shape=(config.look_back,1 ))) +model.add(Dense(1)) + +### --- + +wget http://www.gutenberg.org/cache/epub/50742/pg50742.txt + +### --- + +model.add(LSTM(25, activation="sigmoid", return_sequences=True)) +model.add(LSTM(25, activation="sigmoid", go_backwards=True)) + +### +### Shows only one layer +### The interface doesn’t show non sequential parts of the graph +### It's just a simplification in the implementation +### + +### --- + +lstm with drop out code: +## create model +model = Sequential() +model.add(Embedding(config.vocab_size, 100, input_length=config.maxlen, weights=[embedding_matrix], trainable=False)) +# +model.add(Bidirectional(LSTM(50, activation="sigmoid", dropout=0.50, recurrent_dropout=0.50))) +model.add(Dense(1, activation='sigmoid')) +model.compile(loss='binary_crossentropy', + optimizer='rmsprop', + metrics=['accuracy']) + +### --- + +X_test = tokenizer.texts_to_sequences(["great movie"]) +X_test = sequence.pad_sequences(X_test, maxlen=config.maxlen) +model.predict(X_test) + +### --- + +phraselist = ['great movie', 'terrible movie'] +phrasetokens = tokenizer.texts_to_sequences(phraselist) +phraseseq = sequence.pad_sequences(phrasetokens, maxlen=config.maxlen) +model.predict(phraseseq) +result = model.predict(X_train) +res_error = [val1 - val2 for val1, val2, in zip (y_train, result)] +res_idx_max_error = res_error.index(max(res_error)) +res_idx_min_error = res_error.index(min(res_error)) +print(str(res_idx_max_error), str(res_idx_min_error)) +(X_train2, y_train2), (X_test2, y_test2) = imdb.load_imdb() +print(X_train2[res_idx_max_error]) +print(X_train2[res_idx_min_error]) +print(X_train2[5244]) ; This is simply the funniest movie I've seen in a long time. The bad acting, bad script, bad scenery, bad costumes, bad camera work and bad special effects are so stupid that you find yourself reeling with laughter.

So it's not gonna win an Oscar but if you've got beer and friends round then you can't go wrong. +:joy: +1 + +print(X_train2[20143]) ; I very much looked forward to this movie. Its a good family movie; however, if Michael Landon Jr.'s editing team did a better job of editing, the movie would be much better. Too many scenes out of context. I do hope there is another movie from the series, they're all very good. But, if another one is made, I beg them to take better care at editing. This story was all over the place and didn't seem to have a center. Which is unfortunate because the other movies of the series were great. I enjoy the story of Willie and Missy; they're both great role models. Plus, the romantic side of the viewers always enjoy a good love story. + +### --- + +x_train = (counts[0:6000]) +y_train = pd.get_dummies(fixed_target[0:6000]) +y_train.values + +x_test = (counts[6000:]) +y_test = pd.get_dummies(fixed_target[6000:]) + + +from keras.datasets import mnist +from keras.models import Sequential +from keras.layers import Dense, Flatten +import wandb +from wandb.keras import WandbCallback + +num_classes = 4 +length=counts.shape[1] +# create model +model = Sequential() +model.add(Dense(num_classes, input_shape=(length,),activation='softmax')) +model.compile(loss='categorical_crossentropy', optimizer='adam', + metrics=['accuracy']) + +# Fit the model +model.fit(x_train,y_train, epochs=10, validation_data=(x_test, y_test)) + +### --- + +git clone https://github.com/lukas/keras-audio + +### --- + +config.epochs = 400 +config.batch_size = 100 +config.first_layer_conv_width = 5 +num_conv = 64 +model.add(Conv2D(num_conv, + (config.first_layer_conv_width, config.first_layer_conv_height), + input_shape=(config.buckets, config.max_len, channels), + activation='relu', padding='same')) +model.add(Dropout(0.2)) +model.add(Conv2D(num_conv, + (config.first_layer_conv_width, config.first_layer_conv_height), + #input_shape=(config.buckets, config.max_len, channels), + activation='relu', padding='same')) +model.add(MaxPooling2D(pool_size=(2, 2))) + +model.add(Dropout(0.2)) + +model.add(Conv2D(num_conv, + (config.first_layer_conv_width, config.first_layer_conv_height), + #input_shape=(config.buckets, config.max_len, channels), + activation='relu', padding='same')) + +model.add(Dropout(0.2)) +model.add(Conv2D(num_conv, + (config.first_layer_conv_width, config.first_layer_conv_height), + #input_shape=(config.buckets, config.max_len, channels), + activation='relu', padding='same')) +model.add(MaxPooling2D(pool_size=(2, 2))) + +model.add(Dropout(0.2)) + +model.add(Conv2D(num_conv, + (config.first_layer_conv_width, config.first_layer_conv_height), + #input_shape=(config.buckets, config.max_len, channels), + activation='relu', padding='same')) + +model.add(Dropout(0.2)) +model.add(Conv2D(num_conv, + (config.first_layer_conv_width, config.first_layer_conv_height), + #input_shape=(config.buckets, config.max_len, channels), + activation='relu', padding='same')) +model.add(MaxPooling2D(pool_size=(2, 2))) + +model.add(Dropout(0.2)) +model.add(Flatten()) +model.add(Dense(num_dense, activation='relu')) +model.add(Dropout(0.2)) +model.add(Dense(num_dense, activation='relu')) +model.add(Dropout(0.2)) +model.add(Dense(num_classes, activation='softmax')) +model.compile(loss="categorical_crossentropy", + optimizer="adam", + metrics=['accuracy']) + + + +### --- + +from preprocess import * +import keras +from keras.models import Sequential +from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D, 
Reshape +from keras.layers import Activation, BatchNormalization +from keras import regularizers +from keras.callbacks import LearningRateScheduler, ReduceLROnPlateau +from keras.utils import to_categorical +import wandb +from wandb.keras import WandbCallback +def lr_schedule(epoch): + lrate = 0.001 + if epoch > 75: + lrate = 0.0005 + elif epoch > 100: + lrate = 0.0003 + return lrate +reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.001) +wandb.init() +config = wandb.config +config.max_len = 11 +config.buckets = 20 +config.filters = 250 +config.kernel_size = 3 +# Save data to array file first +#save_data_to_array(max_len=config.max_len, n_mfcc=config.buckets) +labels=["bed", "happy", "cat"] +# # Loading train set and test set +X_train, X_test, y_train, y_test = get_train_test() +# # Feature dimension +channels = 1 +config.epochs = 50 +config.batch_size = 100 +num_classes = 3 +X_train = X_train.reshape(X_train.shape[0], config.buckets, config.max_len, channels) +X_test = X_test.reshape(X_test.shape[0], config.buckets, config.max_len, channels) +y_train_hot = to_categorical(y_train) +y_test_hot = to_categorical(y_test) +weight_decay = 1e-4 +model = Sequential() +model.add(Conv2D(config.filters, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay), input_shape=X_train.shape[1:], activation='relu')) +model.add(BatchNormalization()) +model.add(Conv2D(config.filters, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay), activation='relu')) +model.add(BatchNormalization()) +model.add(MaxPooling2D(pool_size=(2,2))) +model.add(Dropout(0.3)) +model.add(Conv2D(config.filters, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay), input_shape=X_train.shape[1:], activation='relu')) +model.add(BatchNormalization()) +model.add(Conv2D(config.filters, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay), activation='relu')) +model.add(BatchNormalization()) +model.add(MaxPooling2D(pool_size=(2,2))) +model.add(Dropout(0.3)) +model.add(Flatten(input_shape=(config.buckets, config.max_len, channels))) +model.add(Dense(num_classes, activation='softmax')) +model.compile(loss="categorical_crossentropy", + optimizer="rmsprop", + metrics=['accuracy']) +model.fit(X_train, y_train_hot, batch_size=config.batch_size, epochs=config.epochs, validation_data=(X_test, y_test_hot), callbacks=[WandbCallback(data_type="image", labels=labels), reduce_lr]) + +### --- + +def create_categorical_decoder(): + ''' + Create the decoder with an optional class appended to the input. + ''' + decoder_input = layers.Input(shape=(wandb.config.latent_dim,)) + label_input = layers.Input(shape=(len(wandb.config.labels),)) + if wandb.config.conditional: + x = layers.concatenate([decoder_input, label_input], axis=-1) + else: + x = decoder_input + x = layers.Dense(128, activation='relu')(x) + x = layers.Dense(img_size * img_size, activation='relu')(x) + x = layers.Reshape((img_size, img_size, 1))(x) + x = layers.Conv2D(64, 3, activation="relu", padding="same")(x) + x = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x) +​ + return Model([decoder_input, label_input], x, name='decoder') + +def create_encoder(input_shape): + ''' + Create an encoder with an optional class append to the channel. 
+ ''' + encoder_input = layers.Input(shape=input_shape) + label_input = layers.Input(shape=(len(wandb.config.labels),)) + #x = layers.Flatten()(encoder_input) + if wandb.config.conditional: + x = layers.Lambda(concat_label, name="c")([encoder_input, label_input]) + #x = layers.concatenate([x, label_input], axis=-1) +​ + x = layers.Conv2D(64, 3, activation="relu")(x) + x = layers.MaxPooling2D()(x) + x = layers.Conv2D(32, 3, activation="relu")(x) + x = layers.Flatten()(x) + x = layers.Dense(128, activation="relu")(x) + output = layers.Dense(wandb.config.latent_dim, activation="relu")(x) +​ + return Model([encoder_input, label_input], output, name='encoder') + +### --- + +# A very simple perceptron for classifying american sign language letters +import signdata +import numpy as np +from keras.models import Sequential +from keras.layers import Dense, Flatten, Dropout, Conv2D, MaxPooling2D, Reshape +from keras.utils import np_utils +import wandb +from wandb.keras import WandbCallback + +# logging code +run = wandb.init() +config = run.config +config.loss = "categorical_crossentropy" +config.optimizer = "adam" +config.epochs = 10 + +# load data +(X_test, y_test) = signdata.load_test_data() +(X_train, y_train) = signdata.load_train_data() + +img_width = X_test.shape[1] +img_height = X_test.shape[2] + +# one hot encode outputs +y_train = np_utils.to_categorical(y_train) +y_test = np_utils.to_categorical(y_test) + +num_classes = y_train.shape[1] + +# you may want to normalize the data here.. + +# normalize data +X_train = X_train.astype('float32') / 255. +X_test = X_test.astype('float32') / 255. + +X_train = X_train.reshape( + X_train.shape[0], img_width, img_height, 1) +X_test = X_test.reshape( + X_test.shape[0], img_width, img_height, 1) + +# create model +model = Sequential() +model.add(Conv2D(32, (3,3), input_shape=(img_width, img_height, 1))) +model.add(MaxPooling2D(pool_size=(2, 2))) +model.add(Conv2D(32, (3,3))) +model.add(MaxPooling2D(pool_size=(2, 2))) +model.add(Flatten()) +model.add(Dense(100, activation="relu")) +model.add(Dropout(0.2)) +model.add(Dense(num_classes, activation="softmax")) +model.compile(loss=config.loss, optimizer=config.optimizer, + metrics=['accuracy']) + +# Fit the model +model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test), + callbacks=[WandbCallback(data_type="image", labels=signdata.letters)]) + + +### --- + +# create and fit the RNN +model = Sequential() +model.add(SimpleRNN(10, input_shape=(config.look_back, 1))) +model.add(Dense(1)) +model.compile(loss='mae', optimizer='rmsprop', metrics=['accuracy']) +model.fit(trainX, + trainY, + epochs=100, + batch_size=20, + validation_data=(testX, testY), + callbacks=[ + + PlotCallback(trainX, trainY, testX, testY, config.look_back), + WandbCallback()] + ) + +### --- + +import keras +​ +model = keras.models.Sequential() +#TO stack multiple LSTM's add return_sequences=True +model.add(keras.layers.LSTM(128, return_sequences=True, input_shape=(10,1)) +model.add(keras.layers.LSTM(64)) +model.add(Dense(1, activation='sigmoid')) + + +### --- + +self.model.add(LSTM(self.nb_units, + input_shape=(X_scl_re.shape[1], X_scl_re.shape[2]))) +self.model.add(Dense(1)) +self.model.compile(loss='mae', optimizer='adam') +self.model.fit(X_scl_re, y_scl, + epochs =self.epochs, + batch_size=self.batch_size, + verbose =self.verbose, + shuffle =False) + + +Q: I'm using the network below for time series. The numpy array X_scl_re has shape (n_samples, timesteps, n_features). In my case timesteps=1. 
My question is: when is timesteps greater than 1, and what does that mean?
+
+A: In your example, timesteps is the length of each input sequence, i.e. the number of past steps the network is given for every sample (here 1). Increasing it lets the network pick up longer-range patterns; you can think of it as how far back in time the network gets to look (see the windowing sketch below).
+Or, in the case of the IMDB dataset, it is the number of words the network gets to see before making a decision.
+
+### ---
+
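+# Illustrative sketch (not from the class code): how the look_back / timesteps
+# window is built for the time-series snippets above. A univariate series is
+# cut into overlapping windows of look_back past values, producing the
+# (n_samples, timesteps, n_features) shape that SimpleRNN/LSTM layers expect.
+# All names below are placeholders.
+import numpy as np
+
+def make_windows(series, look_back):
+    # series: 1-D array -> X of shape (n, look_back, 1), y of shape (n,)
+    X, y = [], []
+    for i in range(len(series) - look_back):
+        X.append(series[i:i + look_back])
+        y.append(series[i + look_back])
+    X = np.array(X)[:, :, np.newaxis]  # add the single-feature axis
+    return X, np.array(y)
+
+series = np.sin(np.linspace(0, 20, 200))  # toy data
+trainX, trainY = make_windows(series, look_back=4)
+print(trainX.shape)  # (196, 4, 1): 4 timesteps, 1 feature per step
+
+### ---
+
+# Illustrative sketch of the learning-rate sweep described in More-resources.md:
+# train one epoch per candidate rate (1e-5 up to 1) and watch where the loss
+# stops improving or starts to diverge. This is only a toy version of the idea,
+# not Leslie Smith's exact LR finder; build_model, X_train, y_train, X_test and
+# y_test are placeholders for whatever model and data you are tuning.
+from keras.callbacks import LearningRateScheduler
+
+rates = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1.0]
+
+def sweep_schedule(epoch):
+    # one epoch per candidate learning rate
+    return rates[min(epoch, len(rates) - 1)]
+
+model = build_model()
+history = model.fit(X_train, y_train,
+                    epochs=len(rates),
+                    validation_data=(X_test, y_test),
+                    callbacks=[LearningRateScheduler(sweep_schedule)])
+for rate, loss in zip(rates, history.history['loss']):
+    print(rate, loss)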