1. Import libraries: let's start by importing the required libraries (NumPy is added here because it is used in the data-preparation step):
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
import keras
from keras.utils import np_utils
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, Activation
2. Load the data:
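The loading step itself is not shown in this excerpt. As a minimal sketch, assuming the data ships as a CSV file whose last column holds the class label (the file name train.csv below is a placeholder), it can be read into a pandas DataFrame:
# Hypothetical path; replace with the location of your dataset
df = pd.read_csv('train.csv')
print(df.shape)
df.head()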
3. Data preparation: we use one-hot encoding to convert the target into a vector that is all zeros except for a 1 at the index corresponding to the class of the sample (see the short illustration after the code below):
# Convert the DataFrame to a NumPy array; features are all columns except the first and last, the last column is the target
df_array = df.values
X = df_array[:, 1:-1].astype(np.float32)
labels = df_array[:, -1]
print(np.unique(labels))
# Encode the string labels as integers, then one-hot encode them
encoder = LabelEncoder()
encoder.fit(labels)
y = encoder.transform(labels).astype(np.int32)
Y = np_utils.to_categorical(y)
# Hold out 33% of the data for testing
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=2018)
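To see what the one-hot encoding looks like, here is a small illustration with three made-up integer labels (unrelated to the dataset): to_categorical turns each integer label into a row with a single 1:
# Illustration only: integer labels 0, 1, 2 become one-hot rows
print(np_utils.to_categorical([0, 1, 2]))
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]]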
4. Define model: first we determine the input dimension (the number of attributes) and the number of classes we are predicting (the output dimension):
dims = X_train.shape[1]
print(dims, 'dims')
nb_classes = y_train.shape[1]
print(nb_classes, 'classes')
Similar to the previous example, we define the model as a sequence of layers, using Keras's Sequential model as a container for the layers:
kmodel = Sequential()
# A single fully connected layer followed by a softmax acts as a multiclass logistic regression
kmodel.add(Dense(nb_classes, input_shape=(dims,)))
kmodel.add(Activation('softmax'))
kmodel.summary()
5. Compile model:
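The compilation settings are not shown in this excerpt. As a minimal sketch, assuming categorical cross-entropy loss and a plain SGD optimizer (both assumptions, chosen as common defaults for a softmax classifier), compilation looks like this:
# Optimizer and loss are assumptions; adjust to match your experiment
kmodel.compile(optimizer='sgd',
               loss='categorical_crossentropy',
               metrics=['accuracy'])
After compiling, training proceeds with kmodel.fit(X_train, y_train, ...) and evaluation with kmodel.evaluate(X_test, y_test).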