Conceptors: an easy introduction
Abstract
Conceptors provide an elementary neurocomputational mechanism which sheds a fresh and unifying light on a diversity of cognitive phenomena. A number of demanding learning and processing tasks can be solved with unprecedented ease, robustness and accuracy. Some of these tasks were impossible to solve before. This entirely informal paper introduces the basic principles of conceptors and highlights some of their usages.
1 The big picture
The subjective experience of a functioning brain is wholeness: I!
Scientific analysis explodes this unity into a myriad of phenomena, functions, mechanisms and objects: abstraction, action, action potential, actuator, adaptation, adult, affect, aging, algorithm, amygdala, …: just a quick pick from the subject indices of psychology, neuroscience, machine learning, AI, cognitive science, robotics, linguistics, psychiatry.
How to reintegrate these scattered items into the functioning whole from which they sprang?
Again and again, integrative views of brains and cognition were advanced: behaviorism; the cybernetic brain; hyperstability; general problem solver; physical symbol systems; society of mind; synergetics; autopoietic systems; behavior-based agents; ideomotor theory; the Bayesian brain. Yet, the very multiplicity of such paradigms attests to the perpetuity of the integration challenge.
Conceptors offer novel options to take us a few concrete steps further down the long and winding road to cognitive system integration.
Conceptors are a neurocomputational mechanism which – basic and generic like a stem cell – can differentiate into a diversity of neurocomputational functionalities: incremental learning of dynamical patterns; perceptual focussing; neural noise suppression; morphable motor pattern generation; generalizing from a few learnt prototype patterns; top-down attention control in hierarchical online dynamical pattern recognition; Boolean combination of evidence in pattern classification; content-addressable dynamical pattern memory; pointer-addressable dynamical pattern memory (all demonstrated by simulations in [1]). In this way they suggest a common computational principle underneath a number of seemingly diverse neurocognitive phenomena.
Conceptors can be formally or computationally instantiated in several ways and on several levels of abstraction: as neural circuits, as adaptive signal filters, as linear operators in dynamical systems, as operands in an extended Boolean calculus, and as categorical objects in a logical framework (all detailed in [1]). In this way they establish new translation links between different scientific views, in particular between numeric-dynamical and symbolic-logical accounts of neural and cognitive processing.
2 The basic mechanism
Conceptors can be intuitively explained in three steps.
Step 1: From dynamical patterns to conceptors. Consider a recurrent neural network (RNN) with N neurons which is driven by several dynamical input patterns p_1, ..., p_K in turn. The concrete type of RNN model (spiking or not, continuous or discrete time, deterministic or stochastic) is of no concern, and the patterns may be stationary or nonstationary, scalar or multidimensional signals. When the network is driven with pattern p_j, the N-dimensional excited neural states come to lie in a state cloud S_j whose geometry is characteristic of the driving pattern. The simplest formal characterization of the geometry of S_j is given by an ellipsoid whose main axes are the principal components of the state set (Figure 1). This ellipsoid represents the conceptor C_j associated with pattern p_j in the network. C_j can be concretely instantiated in various ways, for instance as an N × N matrix, as a separate subnetwork, or as a single neuron projecting to a large random "reservoir" network. In any case, C_j can be learnt from S_j by a variety of simple and robust learning rules which all boil down to the objective "learn a regularized identity map".
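For concreteness, here is a minimal numerical sketch of the matrix instantiation (my own NumPy illustration, not code from [1]; the function name and the toy state cloud are assumptions). A conceptor is computed from a collected state cloud X as C = R(R + α⁻²I)⁻¹, where R is the state correlation matrix and α is a scaling parameter (the "aperture", discussed further below); this is one closed-form solution of the regularized-identity-map objective.

```python
import numpy as np

def conceptor(X, aperture=10.0):
    """Compute a conceptor matrix from a state cloud.

    X: array of shape (T, n) -- T network states of dimension n,
       collected while the network is driven by one pattern.
    aperture: the scaling parameter alpha controlling ellipsoid size.

    Returns the n x n matrix C = R (R + alpha^-2 I)^-1, where
    R = X^T X / T is the state correlation matrix.
    """
    T, n = X.shape
    R = X.T @ X / T
    return R @ np.linalg.inv(R + aperture ** -2 * np.eye(n))

# Tiny illustration: states spread mostly along one axis give a
# conceptor whose ellipsoid is "thick" in that axis only.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2)) * np.array([1.0, 0.05])
C = conceptor(X)
sv = np.linalg.svd(C, compute_uv=False)  # singular values lie in [0, 1)
```

The singular values of C directly encode the ellipsoid shape: close to 1 along well-excited state directions, close to 0 along unused ones.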
Step 2: Storing prototype patterns. In order to realize some of the potential conceptor functionalities, the patterns p_1, ..., p_K must be stored in the RNN. The objective defining this storing task is that the network learns to replicate the pattern-driven state sequences in the absence of the driver. This could be called a "self-simulation" objective. It can be effected by an elementary RNN adaptation scheme which in the last few years has been independently introduced under the names of "self-prediction" (Mayer & Browne), "equilibration" (Jaeger), "reservoir regularization" (Reinhart & Steil), "self-sensing networks" (Sussillo & Abbott), and "innate training" (Laje & Buonomano). Call the network obtained after the K patterns have been stored the loaded network. In intuitive terms one could say that the storing procedure entrenches the various pattern-driven dynamics (j = 1, ..., K) into the network (visualized in Figure 2). However, these entrenched dynamics are inherently unstable due to crosstalk.
Step 3: From conceptors to dynamical patterns. If the loaded network were just let run freely (in the absence of input), unpredictable behavior would result due to the instability of the entrenched dynamics. Now the conceptors C_1, ..., C_K associated with the stored patterns are called on stage. Assume we want the network to regenerate the dynamics associated with pattern p_j stably and accurately. This is achieved by inserting the corresponding conceptor C_j into the recurrent update loop of the network. In mathematical abstraction, C_j is a linear map, and inserting it means to apply the operation x ↦ C_j x within the recurrent state update loop. In intuitive geometric terms, the network states are filtered by the ellipsoid shape of C_j: state components aligned with the "thick" dimensions of this ellipsoid pass essentially unaltered whereas components in the "flat" directions are suppressed (Figure 3). As a result, the neural dynamics corresponding to pattern p_j is selected and stabilized. Changing from one conceptor to another swiftly switches network dynamics from one mode to the next. The "insertion" can be mathematically, biologically or technically implemented in various ways, depending on how C_j is concretely instantiated. Implementation options range from matrix-based conceptor filters (convenient in machine learning applications) to activating neurons which represent conceptors (in biologically more plausible "reservoir network" realizations of conceptors) [1].
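To make the filtering idea concrete, here is a toy sketch of the conceptor-constrained update loop (my own illustration; the random weights and the rank-3 conceptor are arbitrary stand-ins, not a trained loaded network):

```python
import numpy as np

# Run a small random reservoir freely, with a conceptor C inserted
# into the state update:  x(t+1) = C tanh(W x(t) + b).
rng = np.random.default_rng(1)
n = 20
W = rng.standard_normal((n, n)) * 1.5 / np.sqrt(n)  # recurrent weights
b = rng.standard_normal(n) * 0.2                    # bias
U = np.linalg.qr(rng.standard_normal((n, n)))[0]    # random orthonormal axes
s = np.array([1.0] * 3 + [0.0] * (n - 3))           # 3 "thick" axes, rest "flat"
C = U @ np.diag(s) @ U.T                            # hard rank-3 conceptor

x = rng.standard_normal(n)
for _ in range(100):
    x = C @ np.tanh(W @ x + b)  # conceptor-filtered update

# The state stays confined to the 3-dimensional subspace that C passes.
residual = np.linalg.norm(x - C @ x)
```

With a hard (0/1-eigenvalue) conceptor the filtering is an exact projection; soft conceptors, as learnt from state clouds, suppress the flat directions gradually instead.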
Summary – the essence of conceptor mechanisms:

Different driving patterns lead to differently shaped state clouds in a driven RNN. The ellipsoid envelopes of these clouds form conceptors.

After driving patterns have been stored in the network, they can be selected and stably regenerated by inserting the corresponding conceptor filters in the update loop.
3 A little more detail and some highlight examples
The most basic use case for conceptors is to store a number of patterns in an RNN and later replay them: a neural long-term memory for dynamical patterns with addressing by conceptors. Demo: 15 human motion patterns were stored in a 1000-neuron RNN. These patterns were 61-dimensional signals distilled from human motion capture data retrieved from the CMU mocap repository (mocap.cs.cmu.edu). Some of these patterns were periodic, others were transient. A single short training sequence per pattern was used for training. In order to regenerate a composite motion sequence from the network, the associated conceptors were activated in turn and the obtained network dynamics was visualized (using the mocap visualization toolbox from the University of Jyväskylä, www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mocaptoolbox). Figure 4 shows some thumbnails. Smooth transitions between successive motion patterns were obtained by linearly blending the conceptor matrix C_j into C_{j+1} over one simulated second.
When some ‘‘prototype’’ patterns have been stored, they can be morphed in recall by using linear mixes of the prototype conceptors. Demo: four patterns were stored, two of which were 5-periodic random patterns and the other two were sampled irrational-period sines. Figure 5 shows the result of using conceptor mixtures C = μ_1 C_1 + ... + μ_4 C_4, where the μ_j sum to 1. When all mixing coefficients are nonnegative one obtains an interpolation between the stored prototypes; when some are negative one gets an extrapolation beyond them. The four panels with bold outlines show the recalled prototypes (one of the μ_j is equal to 1, the others are 0). Note that interpolations are created even between integer-periodic and irrational-period signals, which in the terminology of dynamical systems correspond to attractors of incommensurable topology (point attractors vs. attractors with the topology of the unit circle).
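The mixing itself is just a linear combination of conceptor matrices. A small sketch (my own toy example; the `mix` helper and the two synthetic conceptors are illustrative assumptions, not the stored patterns of the demo):

```python
import numpy as np

def mix(conceptors, mu):
    """Linear conceptor mixture sum_j mu_j C_j (the mu_j summing to 1)."""
    return sum(m * C for m, C in zip(mu, conceptors))

def conceptor(X, aperture=8.0):
    """Conceptor C = R (R + alpha^-2 I)^-1 from a state cloud X."""
    R = X.T @ X / len(X)
    return R @ np.linalg.inv(R + aperture ** -2 * np.eye(X.shape[1]))

rng = np.random.default_rng(2)
n = 4
# Two toy conceptors derived from state clouds occupying different axes.
C1 = conceptor(rng.standard_normal((200, n)) * np.array([1, 1, 0.02, 0.02]))
C2 = conceptor(rng.standard_normal((200, n)) * np.array([0.02, 0.02, 1, 1]))

C_interp = mix([C1, C2], [0.5, 0.5])    # interpolation between prototypes
C_extrap = mix([C1, C2], [1.3, -0.3])   # extrapolation beyond C1
```

For nonnegative coefficients the mixture stays a valid conceptor (all singular values remain below 1); with negative coefficients this is no longer guaranteed, which is part of what makes extrapolation interesting.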
Conceptors can be combined by operations OR (written ∨), AND (∧), and NOT (¬) which obey almost all laws of Boolean logic (for certain classes of conceptors, full Boolean logic applies) and which admit a rigorous semantical interpretation. For instance, the OR C ∨ B of two conceptors C, B which are individually derived from neural state sets S_C, S_B is (up to a normalization scaling) the conceptor that would be derived from the union S_C ∪ S_B of these two state sets. Figure 6 illustrates the geometry of these operations.
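In the matrix representation these operations have closed forms, as detailed in [1]: ¬C = I − C, C ∧ B = (C⁻¹ + B⁻¹ − I)⁻¹, and OR via de Morgan. A sketch (the inverses require singular values strictly between 0 and 1, so hard 0/1 conceptors must be treated as limiting cases):

```python
import numpy as np

def NOT(C):
    return np.eye(len(C)) - C

def AND(C, B):
    return np.linalg.inv(np.linalg.inv(C) + np.linalg.inv(B) - np.eye(len(C)))

def OR(C, B):
    return NOT(AND(NOT(C), NOT(B)))   # de Morgan's law

# Check on diagonal (axis-aligned ellipsoid) conceptors:
# C is "thick" along axis 1, B along axis 2.
C = np.diag([0.9, 0.3])
B = np.diag([0.3, 0.9])
```

On these examples OR(C, B) comes out thick along both axes, matching the union-of-state-sets semantics described above.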
These Boolean operations furthermore induce an abstraction ordering ≤ on conceptors by defining C ≤ B if there exists some conceptor A such that C ∨ A = B. When conceptors are represented as matrices, this logical definition of ≤ coincides with the well-known Löwner ordering of matrices. When conceptors are represented as certain adaptive neural circuits, deciding whether C ≤ B computationally amounts to checking whether the activation of certain neurons increases. The extreme cheapness of this local check may give a hint why humans can often make classification judgements with so little apparent effort.
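In the matrix representation, checking abstraction thus reduces to a positive-semidefiniteness test (my own minimal sketch):

```python
import numpy as np

def leq(C, B, tol=1e-9):
    """Abstraction test C <= B via the Loewner ordering:
    C <= B holds iff B - C is positive semidefinite."""
    return np.linalg.eigvalsh(B - C).min() >= -tol

C = np.diag([0.5, 0.2])   # a conceptor
B = np.diag([0.8, 0.4])   # a more abstract (inflated) conceptor
```

The ordering is only partial: for two conceptors thick along different axes, neither C ≤ B nor B ≤ C needs to hold.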
A special case of abstraction is defocussing. In geometric terms, a conceptor becomes defocussed if its ellipsoid shape is inflated by a certain scaling operation which is governed by a parameter α called aperture. At zero aperture a conceptor contracts to the zero mapping, while when the aperture grows to infinity the conceptor approaches the identity mapping. The larger the aperture, the more signal components may pass through the state filtering x ↦ C x. Demo: four different chaotic attractor patterns were stored in an RNN, one of them being the well-known Lorenz attractor. When the conceptor corresponding to the Lorenz attractor is applied at increasing aperture levels, the regenerated pattern first goes through stages of increasing differentiation, then in a certain aperture range becomes a faithful replica of that attractor, after which it gradually becomes overexcited and at very large aperture dissolves into the entirely unconstrained behavior of the native network (Figure 7). An optimal aperture can be autonomously adjusted by the system, exploiting a cheaply measurable ‘‘autofocussing’’ criterion based on the signal damping ratio imposed by the conceptor.
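Aperture rescaling also has a closed matrix form, φ(C, γ) = C(C + γ⁻²(I − C))⁻¹, following [1]; a quick sketch of its limiting behavior (the example conceptor is my own):

```python
import numpy as np

def phi(C, gamma):
    """Rescale the aperture of conceptor C by the factor gamma."""
    n = len(C)
    return C @ np.linalg.inv(C + gamma ** -2 * (np.eye(n) - C))

C = np.diag([0.8, 0.2])
inflated = phi(C, 10.0)   # large gamma: toward the identity map
deflated = phi(C, 0.1)    # small gamma: toward the zero map
```

γ = 1 leaves the conceptor unchanged; a singular value s is mapped to s / (s + γ⁻²(1 − s)), which is how the "thick" and "flat" axes inflate or contract.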
With the help of Boolean operations and the abstraction ordering, a network’s conceptor repertoire can be viewed as being organized in an abstraction hierarchy which shares many formal properties with semantic networks and ontologies known from AI, linguistics and cognitive science. This line of analysis can be extended to a full account of conceptor systems in the modern categorytheoretic setting of logical frameworks, establishing a rigorous link between neural dynamics and symbolic logic [1].
Besides such uses for a scientific analysis, Boolean operations offer concrete computational exploits. One of them is incremental (lifelong) learning of dynamical patterns. The objective here is to store more and more patterns in a network such that patterns stored later do not catastrophically interfere with previously acquired ones. Let p_1, p_2, ... be a potentially open-ended series of patterns with associated conceptors C_1, C_2, .... In informal terms, incremental storing can be achieved as follows. Assume the first j patterns have been stored. Characterize the neural memory space claimed by these patterns by A_j = C_1 ∨ ... ∨ C_j and the still free memory space by F_j = ¬A_j. The next pattern p_{j+1} with its conceptor C_{j+1} typically has some dynamical components that are shared with some of the already stored patterns, and it will have some new dynamical components. The latter can be characterized by the conceptor C_{j+1} ∧ ¬A_j (a logical difference operation). The storing procedure can be straightforwardly modified such that only these new dynamical components are stored into the still free memory space F_j.
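The bookkeeping behind this can be sketched with the matrix Boolean operations (a toy illustration of the claimed/free-space logic; the diagonal example conceptors and the trace-based size measure are my own simplifications):

```python
import numpy as np

def NOT(C):
    return np.eye(len(C)) - C

def AND(C, B):
    return np.linalg.inv(np.linalg.inv(C) + np.linalg.inv(B) - np.eye(len(C)))

def OR(C, B):
    return NOT(AND(NOT(C), NOT(B)))

def size(C):
    """Normalized 'memory area' of a conceptor (identity has size 1)."""
    return np.trace(C) / len(C)

eps = 1e-3  # keep singular values strictly inside (0, 1) so inverses exist
C1 = np.diag([1 - eps, eps, eps])   # pattern 1 occupies axis 1
C2 = np.diag([eps, 1 - eps, eps])   # pattern 2 occupies axis 2
A = OR(C1, C2)                      # memory space claimed after storing both
F = NOT(A)                          # still-free memory space
C3 = np.diag([1 - eps, eps, eps])   # pattern 3 repeats pattern 1 ...
new = AND(C3, NOT(A))               # ... so its "new components" are ~zero
```

The last line mirrors the demo below: a repeated pattern claims essentially no additional memory space.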
Demo: 16 patterns (some random integer-periodic patterns, some sampled sines) were incrementally stored in a 100-neuron RNN. After the last one was stored, the regeneration quality of all of them was tested by conceptor-controlled recall. Figure 8 illustrates the outcome. The memory space claimed at each stage is indicated by the red panel filling; it is measured by a normalized size of A_j (the largest possible such conceptor is the identity, with size 1). It can be seen that the network successively ‘‘fills up’’, and is essentially exhausted after pattern 15: the sixteenth pattern cannot be stored and its regeneration fails. Note that patterns 6–8 are identical to patterns 1–3. The incremental storing procedure automatically detects that nothing new has to be stored for patterns 6–8 and claims no additional memory space.
Another practical use of Boolean operations is in dynamical pattern classification. Again, with the aid of ‘‘Boolean learning management’’ a pattern classification system can be trained incrementally such that after it has learnt to classify patterns p_1, ..., p_k, it can be furthermore trained to recognize a new pattern p_{k+1} without revisiting the earlier used training data. Furthermore the system can combine positive and negative evidence, motto: ‘‘this test pattern seems to be in class j AND it seems NOT to be in class 1 OR class 2 OR ... (any of the other classes)’’. In the widely used Japanese vowels benchmark (admittedly not super-difficult by today’s standards), a conceptor-enabled neural classifier based on an RNN with only 10 neurons easily reached the performance level of involved state-of-the-art classifiers at a very low computational cost (learning time a fraction of a second on a standard notebook computer). I note in passing that patterns need not be stored in this application; the native network’s response to test patterns yields the basis for classification.
It is not always necessary to precompute conceptors and somehow store or memorize them for later use. Instead, a network can regenerate the stored patterns without precomputed conceptors by running a content-addressing routine. To this end, at recall time the conceptor ultimately needed for regenerating a pattern p_j is initialized to the zero conceptor. The network is then driven by a short and possibly corrupted cue version of p_j. During this cueing phase, the zero conceptor is quickly adapted to a preliminary version of C_j. When the cue signal expires, the network run is continued in autonomous mode with the conceptor in the loop. Its adaptation continues too. Since there is no external guide it can adapt to, it adapts to ... itself! Using a human cognition metaphor, this autoadaptation can be likened to the recognition processing triggered by a brief stimulus, for instance when one gets a passing glimpse of a face in a crowd and then in a ‘‘recognition afterglow’’ consolidates this impression to the well-known face of a friend. In terms of conceptor geometry, the ellipsoid shape of the preliminary conceptor is ‘‘contrast-enhanced’’ by autoadaptation: axes that are weak in it are further diminished and eventually are entirely suppressed, while strong axes grow even stronger. Altogether, in the autoadaptation phase the conceptor converges toward a contrast-enhanced version of C_j. This autoadaptation dynamics has interesting and useful mathematical properties. In particular it is inherently robust against noise. In the simulations reported in [1] it functions reliably even in the presence of neural noise with signal-to-noise ratios less than one. Furthermore, when the stored patterns are samples from a parametric family, the content-addressed recall also functions when unstored members of this family are used as cues (‘‘class learning effect’’). For a sufficiently large number of stored patterns, the network has implicitly extracted the ‘‘family law’’.
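The online adaptation during the cueing phase can be read as stochastic gradient descent on the same regularized-identity-map objective as in Step 1, giving the update C ← C + λ((z − Cz)zᵀ − α⁻²C) per state z. A toy sketch (my own illustration with a synthetic state sequence; the learning rate and aperture values are arbitrary assumptions):

```python
import numpy as np

# Starting from the zero conceptor, adapt C online while "driven states" z
# stream in; here z is a synthetic sequence concentrated along one axis.
# C converges toward the ellipsoid of that state cloud.
rng = np.random.default_rng(3)
n, lr, aperture = 2, 0.05, 10.0
C = np.zeros((n, n))                       # start from the zero conceptor
for _ in range(4000):
    z = rng.standard_normal(n) * np.array([1.0, 0.05])
    C += lr * (np.outer(z - C @ z, z) - aperture ** -2 * C)

sv = np.linalg.svd(C, compute_uv=False)    # one strong axis, one weak axis
```

The contrast enhancement described above emerges when this rule keeps running with C itself in the network's update loop, so that weakly passed state components feed back ever more weakly.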
A mathematical and numerical investigation reveals that this class learning effect can be interpreted as the creation of an approximate plane attractor under the autoadaptation dynamics in conceptor space.
Demo: Figure 9 shows the result of a simulation study where, in separate trials, different numbers K of patterns from a 2-parametric family were stored. The networks were tested with cues that corresponded to stored patterns and with cues that came from the pattern family but were not among the stored ones. For small K, the stored patterns can be regenerated better than the novel ones (a rote learning effect). When the number of stored patterns exceeds a critical value, stored and novel patterns are regenerated with equal accuracy and storing even more patterns has no effect.
Altogether these content-addressable neural memory systems can be seen in many respects as a dynamical analog of Hopfield networks, the classical model of associative memories for static patterns.
4 Conclusion
Conceptors are a mathematical, computational, and neural realization of a simple pair of ideas:

Processing modes of an RNN can be characterized by the geometries of the associated state clouds.

When the states of an RNN are filtered to remain within such a specific state cloud, the associated processing mode is selected and stabilized.
Implicit in this pair of ideas is the non-trivial claim that a single RNN may indeed host a diversity of processing modes. This is the essence of conceptors:
Conceptors can control a multiplicity of processing modes of an RNN.
Almost all examples in this article concerned a particular type of processing mode, namely pattern generation. This bias is due to the circumstance that conceptors were first conceived in the context of a research project concerned with robot motor skills (www.amarsiproject.eu). But a network governed by conceptors can be employed in any of the sorts of tasks in which RNNs become engaged: signal prediction, filtering, classification, control, etc. In scenarios other than pattern generation it is often not necessary to store patterns in the concerned RNN; the storing procedure is not a constitutive component of conceptor theory.
Conceptors offer rich possibilities to morph, combine, and adapt an RNN’s processing modes through operations on conceptors: linear mixing, logical operations and aperture adaptation.
By virtue of logical combinations and conceptor abstraction, the processing modes of an RNN can be seen as organized in a similar way as concept hierarchies in AI formalisms. This has motivated naming these operators ‘‘conceptors’’.
In my view these are the most noteworthy concrete innovations brought about by conceptors so far:

they make it possible in the first place to characterize and govern a diversity of RNN processing modes,

they enable incremental pattern learning in RNNs with an option to quantify and monitor claimed memory capacity,

they yield a model of an autoassociative memory for dynamical patterns.
A similarly noteworthy but more abstract and epistemological innovation can be recognized in the firm link between nonlinear neural dynamics and symbolic logic, established by the dual mathematical nature of conceptors as neural state filters on the one hand and as discrete objects of logical operations on the other.
I am a machine learning researcher and this article undoubtedly reflects limits of this perspective. In all examples that I presented, conceptors were derived from the simulated dynamics of simple artificial RNNs. But conceptors can be computed on the basis of any sufficiently high-dimensional numerical time series. This indicates usages of conceptors as a tool for data analysis and interpretation in experimental disciplines. For instance, it sounds like an interesting project for an empirical cognitive neuroscientist to (i) submit a subject to a cognitive task which involves Boolean operations, (ii) record high-dimensional brain activity of some sort, and (iii) check to what extent conceptors derived from those recordings reflect the Boolean relationships that are inherent in the task specification. I would actually not expect that this can be straightforwardly done with any kind of raw signals. A more insightful question is to find out which brain data, recorded from where and transformed how, do mirror logico-conceptual task characteristics.
I have carried out quite a number of diverse simulation experiments with conceptors. Over and over again I was impressed by the robustness of conceptor learning and operation against noise and parameter variations. Furthermore, the basic algorithms are computationally cheap. For a machine learning engineer like myself they feel like really sturdy and practical enablers for building versatile RNNbased information processing architectures. For applications in biological modeling (a field where I am no expert) I would believe that robustness and cheapness are likewise relevant.
This appetizer article certainly does not qualify as a scientific paper. A more serious account is provided by the technical report [1] (about 200 pages). Besides giving all the formal definitions, algorithms, mathematical analyses, simulation detail and references that are missing in the present article, it expands on some further topics that I did not touch upon here. Specifically, it explores the hairy issue of ‘‘biological plausibility’’ and proposes (still rather abstract) neural circuits which support conceptors and which only require local computations; it analyzes conceptor autoadaptation with tools from dynamical systems theory; it specifies a formal logic which grounds symbolic conceptor expressions in neural signal semantics; and it presents a multifunctional hierarchical neural processing architecture wherein higher processing levels inform and modulate lower processing levels through conceptors.
My personal take on real brains and really good robots is that they will never be fully explainable or designable on the basis of a single unified theory. I view conceptors as one further model mechanism which sheds some more light on some aspects of system integration in brains, animals, humans and robots.