Day33: Garson's algorithm

Posted by csiu on March 29, 2017 | with: 100daysofcode, Machine Learning

Yesterday I explored various metrics to obtain variable importance. Today I focus on implementing Garson’s algorithm in Python.

Network weights

Before I can use Garson’s algorithm, I need to get the parameter weights from the neural network (i.e. an MLP with a single hidden layer, trained by backpropagation).

import numpy as np
import lasagne

# Save weights
np.savez(param_file, *lasagne.layers.get_all_param_values(network))

# Load weights
with np.load(param_file) as f:
    param_values = [f['arr_%d' % i] for i in range(len(f.files))]

Surprisingly, when I “pickle” a model trained on a GPU machine, I cannot un-pickle the resulting model file on a CPU machine, which is why I save the raw weight arrays with np.savez instead.
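To restore the saved weights on another machine, I can rebuild the same network there and assign the loaded arrays back. A minimal sketch, where build_mlp stands in for whatever code constructed the original architecture:

import lasagne

# Rebuild the same single-hidden-layer MLP on the target machine
# (build_mlp is a hypothetical helper that returns the output layer)
network = build_mlp()

# Assign the arrays loaded from param_file back into the network
lasagne.layers.set_all_param_values(network, param_values)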

My implementation of Garson’s algorithm
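For reference, the quantity the function below computes can be written as (with $N$ inputs, $H$ hidden nodes, input-hidden weights $w_{ij}$, and hidden-output weights $v_j$):

$$RI_i = \frac{\sum_{j=1}^{H} \frac{|w_{ij} v_j|}{\sum_{k=1}^{N} |w_{kj} v_j|}}{\sum_{l=1}^{N} \sum_{j=1}^{H} \frac{|w_{lj} v_j|}{\sum_{k=1}^{N} |w_{kj} v_j|}}$$

That is, each input’s share of every hidden node’s total absolute connection weight, summed over hidden nodes and normalized so the importances sum to 1.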

def garson(A, B):
    """
    Computes variable importance using Garson's algorithm
    A = matrix of input-hidden weights (rows=input features, cols=hidden nodes)
    B = vector of hidden-output weights
    """
    B = np.diag(B)

    # connection weights through the different hidden nodes
    cw = np.dot(A, B)

    # total absolute connection weight through each hidden node
    # (axis=0 sums down the columns, i.e. over input features)
    cw_h = abs(cw).sum(axis=0)

    # relative contribution of each input to the outgoing signal of
    # each hidden node, then summed over hidden nodes per input
    rc = abs(cw) / cw_h
    rc = rc.sum(axis=1)

    # normalize to 100% for relative importance
    ri = rc / rc.sum()
    return ri
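Putting the pieces together, the loaded Lasagne parameters can be fed into garson. A minimal sketch, assuming a single-hidden-layer, single-output MLP whose parameters come back in Lasagne’s usual order of [W_input_hidden, b_hidden, W_hidden_output, b_output]:

A = param_values[0]        # (n_inputs, n_hidden) input-hidden weight matrix
B = param_values[2][:, 0]  # hidden-output weights for the single output unit

ri = garson(A, B)          # one relative importance score per input feature
print(ri, ri.sum())        # scores are non-negative and sum to 1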