I am Tom Donaldson. I am a retired software developer who once studied Behavior Analysis at West Virginia University with Don Hake, Andy Lattal, and Kent Parker.
ABA annual convention, Opryland Hotel (photo by Genae Hall, 1984). Photo 1 front: Tom Donaldson, Julie Smith; back: Rosalind Burns. Photo 2: Dan Silberman, Cloyd Hyten, Marla Hall, Tom Donaldson.
At the moment, this BASimulation.org site represents my retirement hobby, perhaps a bit aspirational: behavior-analytically based software simulations.
My goal is to develop, and publish as open source, simulations based on one particular neural network model: a biobehavioral selectionist neural network.
The earliest description I have found:
Donahoe, J. W., & Palmer, D. C. (1989). The interpretation of complex human behavior: Some reactions to Parallel Distributed Processing, edited by J. L. McClelland, D. E. Rumelhart, and the PDP Research Group. Journal of the Experimental Analysis of Behavior, 51, 399–416.
The one that sold me on the model:
Donahoe, J. W., Burgos, J. E., & Palmer, D. C. (1993). A selectionist approach to reinforcement. Journal of the Experimental Analysis of Behavior, 60, 17–40.
Behavior Analysis was a burning passion at one time.
I was seduced first by the electromechanical control equipment: you know, those six-foot-tall RETMA racks of components programmed with snap-lead connections? Then I was thoroughly hooked by computer equipment, such as Art Snapper’s SKED system, and by the fun I was having in classes for my computer science minor.
I loved Behavior Analysis. But I was uncontrollably addicted to writing software. I left grad school and developed software for a couple of decades.
Though I am retired, I am still utterly addicted to developing software. I still think behavior analysis could solve many problems if understood beyond the “I tried reinforcement but it did not work” level. Maybe games and fun simulations that taught/shaped behavior analytic skills would help move that understanding forward.
My wife, Kay Jones, was a special education teacher for twenty-some years. One of her passions is teaching teachers, especially how to deal with classroom behavior problems. Kay has a behavioral background from a now-defunct Johns Hopkins master's-level special education program, and she is very good at analyzing situations. How to impart such skills to the “general population” of overworked teachers without sending them back to school? Games? Simulations? Simulated classroom situations in which students responded in realistic but limited ways to realistic but limited stimuli?
That is part of what inspired me to look for a way to model learning from a behavior analytic perspective.
Why the Biobehavioral Selectionist Neural Network
I came to choose this particular neural network after years of assiduously avoiding neural networks and trying to come up with an alternative. All of my efforts evolved, or perhaps devolved, into something that looked suspiciously like a neural network. The core problem is the need to associate large numbers of inputs with large numbers of outputs, and to manage changes in the probability that sets of outputs will be activated by sets of inputs. I think nature chose one of the only methods of handling this task: the organic neural net. In fact, I know of no other structure that handles such adaptation so readily.
What little I understood of most computer science neural nets of a decade ago looked like awkward contrivances without any sound rationale based on biology. They achieved many of the same effects as does our organic system, but I was looking for something that simulated behavior change as described in the behavior analytic literature. I wanted something with enough fidelity to behavioral principles that it could be used as the basis of training in behavioral principles with as few contortions as possible.
Oh, and I wanted something unencumbered by restrictive patents. Something in the public domain.
I think my efforts perfectly set me up to appreciate Donahoe, Burgos, & Palmer, 1993. I was sold just a few paragraphs into it. The feedback system using simulated diffuse signals from a simulated hippocampus and simulated ventral tegmental area (VTA) made perfect sense, both from the perspective of simulating biology and from the perspective of computational simplicity.
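To make that computational simplicity concrete, here is a toy sketch of the flavor of mechanism involved: a Hebbian-style weight update gated by a single diffuse scalar signal broadcast to every connection. These are my own illustrative names and a deliberately simplified rule, not the actual equations from Donahoe, Burgos, & Palmer (1993).

```swift
// Toy sketch of a selectionist-style weight update.
// A diffuse scalar signal (think simulated VTA/hippocampal feedback)
// is broadcast to every connection; a weight changes only in
// proportion to how active both ends of that connection just were.
struct Connection {
    var weight: Double      // current connection strength, 0...1
}

/// One learning step for a single connection (simplified, illustrative).
/// - Parameters:
///   - pre: presynaptic activation, 0...1
///   - post: postsynaptic activation, 0...1
///   - diffuse: broadcast reinforcement signal, 0...1
///   - rate: learning-rate constant
func updated(_ c: Connection, pre: Double, post: Double,
             diffuse: Double, rate: Double = 0.5) -> Connection {
    // Coactivity gates the change; (1 - weight) keeps weights bounded by 1.
    let delta = rate * diffuse * pre * post * (1.0 - c.weight)
    return Connection(weight: c.weight + delta)
}

var c = Connection(weight: 0.2)
c = updated(c, pre: 1.0, post: 0.8, diffuse: 1.0)
print(c.weight)  // increases only when pre, post, and diffuse are all nonzero
```

The computational appeal is that the reinforcing signal is one broadcast number per time step, not a per-connection error term propagated backward through the network.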
I played with various models on and off for several years, and finally settled on the selectionist neural network somewhere around 2010. I started and stopped development in Objective-C multiple times as my efforts were “overcome by events”: other projects, travel, an auto accident, moving, and so on.
Finally, on 7 January 2015, I decided that the only way I was going to accomplish anything was to start saying “No.” to other projects and activities. There was the added incentive of a really cool-looking new language: Apple’s Swift. My favorite languages over the years have included Xerox Interlisp-D, Smalltalk, and Ruby, and Swift reminded me of these fun languages. My primary work languages were C and C++; Swift did NOT remind me of those rather “heavy metal” rigid languages.
So I started to play, and have not worked more than a couple of days here and there on any other project since 7 January 2015.
I have written and discarded vast quantities of code, including a version of SCXML in Swift (remember Snapper’s SKED system I mentioned above?). It turns out state machines written in a textual language are not nearly as much fun as I remembered them being. Graphical editors using something like Unified Modeling Language (UML) diagrams seem to be required to make Harel charts usable and palatable, and I was not willing to write one. I may revive it one day: state machines are generally useful in simulations.
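For readers who have not met the SKED-style view of a procedure, here is a minimal sketch of the kind of state machine involved, written as a plain Swift enum rather than SCXML. The example (a fixed-ratio schedule with made-up state and event names) is my own illustration, not code from the discarded project.

```swift
// Minimal enum-based state machine: a fixed-ratio schedule as
// states and events, the kind of thing a SKED-style procedure
// boils down to. All names here are illustrative.
enum State { case ready, reinforcing }
enum Event { case response, reinforcerDelivered }

struct FixedRatio {
    let ratio: Int              // responses required per reinforcer
    var state: State = .ready
    var count = 0               // responses since last reinforcer
    var reinforcers = 0         // reinforcers delivered so far

    mutating func handle(_ event: Event) {
        switch (state, event) {
        case (.ready, .response):
            count += 1
            if count >= ratio {          // ratio met: time to reinforce
                state = .reinforcing
                count = 0
            }
        case (.reinforcing, .reinforcerDelivered):
            reinforcers += 1
            state = .ready
        default:
            break                        // ignore events that don't apply
        }
    }
}

var fr = FixedRatio(ratio: 3)
for _ in 0..<3 { fr.handle(.response) }  // third response completes the ratio
fr.handle(.reinforcerDelivered)
print(fr.reinforcers)                    // one reinforcer delivered
```

Even this trivial schedule shows why a textual encoding gets tedious: every added state multiplies the (state, event) pairs to enumerate, which is exactly what a graphical Harel-chart editor would manage for you.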
I have released a prototype/demo: BGL2015 Visualizer. It is a demo of an interface for visualizing the “learning” taking place in the selectionist neural network in this article:
Burgos, J. E., & García-Leal, Ó. (2015). Autoshaped choice in artificial neural networks: Implications for behavioral economics and neuroeconomics. Behavioural Processes, 114, 63–71.
Paywalled, but see the abstract for free: https://www.ncbi.nlm.nih.gov/pubmed/25662745
The application allows generation of new result sets, graphing of independent and dependent variables at each time step, and animation of a version of Figure 1 in the article.
The Near Future
This demo app lays the groundwork for a much more ambitious app that will include editors for designing neural nets, procedures to put them through their paces and to collect data, and editors to design graphical output.
I keep looking at a book lying here on a shelf: Schedules of Reinforcement. I wonder how hard it would be to implement neural net experiments that would produce pigeon-like behavior.
Hand-coding all of those networks and experimental procedures, data capture for them, graphs, etc., would be very tedious. I need a system of editors for defining neural networks, experimental procedures, data capture, plotting, etc.
A big plus: such a system would make working with selectionist neural networks much more approachable for non-programmers.