Suppose neural nets were constructed as Lisp list-based data structures rather than matrices.<p>This allows quite a few interesting ideas:<p>1) functions and macros can be written that self-modify the network<p>2) subnets with rich internal structure but sparse I/O can be flagged as candidate "clustered concepts"<p>3) sections of the network can be replaced by functions that take the incoming connections as arguments and return multiple-value outputs<p>4) explicit programs, such as expert systems, can be embedded in the network<p>5) "long-term memory" can be kept explicitly in the structure and served up by functions<p>6) "backward reasoners" can take an incorrect output, walk back through the data structure, and replace the sections that contributed to it (aka debugging)<p>7) explicit function calls can be inserted anywhere to display information or call interface functions, e.g. to extract sensor data and insert the result into the process; this would be useful for robot joint control, robot hearing, etc.<p>We have artificially limited ourselves to these "black box" matrix-based solutions. A list-based representation can perform the same matrix operations, but with a much richer data structure.
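To make the idea concrete, here is a minimal sketch (in Python rather than Lisp, for brevity) of a network stored as nested lists, where a node is either a named input, a weighted neuron, or an arbitrary embedded function. The node tags, the `evaluate` walker, and the example topology are all my own assumptions, not anything standard:

```python
import math

# A node is one of:
#   "name"                               -> a named input, looked up in env
#   a number                             -> a literal constant
#   ["neuron", weights, bias, children]  -> sigmoid(w . children + bias)
#   ["fn", callable, children]           -> any embedded function (points 3 and 7)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def evaluate(node, env):
    """Recursively evaluate a list-based network against input bindings."""
    if isinstance(node, str):
        return env[node]
    if isinstance(node, (int, float)):
        return float(node)
    tag = node[0]
    if tag == "neuron":
        _, weights, bias, children = node
        total = sum(w * evaluate(c, env) for w, c in zip(weights, children))
        return sigmoid(total + bias)
    if tag == "fn":
        _, func, children = node
        return func(*(evaluate(c, env) for c in children))
    raise ValueError(f"unknown node tag: {tag}")

# A tiny two-input network whose hidden subnet is a single neuron.
net = ["neuron", [1.0, 1.0], 0.0,
       [["neuron", [2.0, -1.0], 0.5, ["x", "y"]],
        "y"]]

print(evaluate(net, {"x": 1.0, "y": 0.5}))

# Point 3 in action: splice a plain function in place of the hidden neuron.
# The structure is ordinary list data, so this is just an assignment.
net[3][0] = ["fn", lambda x, y: max(x, y), ["x", "y"]]
print(evaluate(net, {"x": 1.0, "y": 0.5}))
```

Because the network is plain list structure, the same kind of surgery covers the other points: a macro could rewrite subtrees (point 1), a backward reasoner could walk the nesting to find and replace blamed sections (point 6), and an `["fn", ...]` node could just as easily log its inputs or read a sensor (point 7).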