This is the secret web page about particle networks!
A particle network being fit with RPROP to the MNIST digits data set. The architecture is 784 input units (white), 32 hidden units (blue), and 10 output units (red), with hyperbolic tangent activation functions and a softmax output layer trained under a categorical cross-entropy loss. The final fit is about 93% accurate. This particle network model has 3388 adjustable parameters, whereas the corresponding Multi-layer perceptron model has 25450 adjustable parameters, a savings of about 87%!
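As a quick sanity check on the caption's numbers, the parameter count for a fully connected 784-32-10 MLP (weight matrices plus bias vectors) can be tallied in a couple of lines; the 3388 figure for the particle network is taken from the caption above, not derived here.

```python
# Parameter count for a fully connected 784-32-10 MLP:
# each layer contributes (n_in * n_out) weights plus n_out biases.
layers = [784, 32, 10]
mlp_params = sum(n_in * n_out + n_out
                 for n_in, n_out in zip(layers, layers[1:]))
print(mlp_params)  # 25450

# Savings relative to the 3388-parameter particle network
# reported in the caption.
pn_params = 3388
savings = 1 - pn_params / mlp_params
print(f"{savings:.0%}")  # 87%
```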
A particle network (PN) is a variant of the classic Multi-layer Perceptron (MLP). The main distinguishing feature is that PNs conceptualize the units (or nodes) of a neural network as particles, each with a charge and a phase, residing in some space. Particles interact in a pairwise fashion to implicitly constitute the matrix elements of the inter-unit connections, in place of the explicit weights found in MLPs. Otherwise, information traverses the PN layer by layer and is transformed via activation functions, just as in an MLP.
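To make the idea concrete, here is a minimal sketch of one PN layer's forward pass. The Gaussian distance kernel and the cosine phase term below are illustrative assumptions of mine, not the formulation from the draft manuscript; the point is only that the weight matrix is computed on the fly from per-particle quantities rather than stored explicitly.

```python
import numpy as np

def pn_layer_forward(x, q_in, phi_in, r_in, q_out, phi_out, r_out, b,
                     activation=np.tanh):
    """Forward pass through one particle-network layer (sketch).

    Each unit is a particle with a charge q, a phase phi, and a
    position r in some space. The effective weight between input
    particle i and output particle j is computed from a pairwise
    interaction instead of being stored as a learned matrix entry.
    """
    # Pairwise squared distances between input and output particles.
    d2 = ((r_in[:, None, :] - r_out[None, :, :]) ** 2).sum(axis=-1)
    # Implicit weight matrix (assumed kernel, for illustration only):
    # product of charges, a phase-interference term, and a Gaussian
    # falloff with distance.
    W = (q_in[:, None] * q_out[None, :]
         * np.cos(phi_in[:, None] - phi_out[None, :])
         * np.exp(-d2))
    return activation(x @ W + b)
```

With this kind of parameterization, a layer with n_in inputs and n_out outputs stores on the order of n_in + n_out particle attributes (charges, phases, coordinates) rather than n_in * n_out weights, which is where the parameter savings in the figure above comes from.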
The code for PNs is part of my neural network sandbox, Calrissian, which is the main engine of a parent project, BrainSparks, that I am using to explore parallelization of neural networks. All the code for BrainSparks and Calrissian is on my GitHub account here.
I am working on writing a draft manuscript to document PNs. If things go well, I might publish it. It is incomplete at the moment, but you can read it here. Therein, you'll find the current formulation of the PN model and its analytic backpropagation gradient.