Immanuel Kant, in his late 18th century classic, the Critique of Pure Reason, concluded that human cognition takes place on two levels. Level one consists of an initial organization of raw sense materials into geometric figures located in space and time; this is then followed, on level two, by a determination of just how these objects and their changes over time relate to one another causally. Along the way, Kant also argued that the mental capabilities used to generate results at both levels are the very same capabilities human beings use to formulate propositions and make inferences about the world.
Various 20th century philosophers have echoed Kant’s Age of Enlightenment reflections, most notably Wilfrid Sellars, who railed against the ‘Myth of the Given’, i.e., the idea that raw sense materials, lacking any kind of analytical pre-structuring, can be considered and reasoned about directly. Even given Kant’s influence on modernity, however, it is still interesting to note that many of today’s cognitive scientists and AI practitioners – particularly those working in the field of machine vision – deploy two-tiered models that depend, first, upon mapping a pixel array into a shape-filled three-dimensional space and, second, upon algorithmic attempts to determine just what kinds of objects those shapes represent.
Although we don’t have the space here to work through all of the arguments, I am inclined to think that Kant and the cognitive scientists have hit on something that is not just true of the processes governing human cognition but reflects the deep structure of any process that seeks to turn volumes of raw, noisy data into information capable of grounding action by human beings or machines. This includes most operational technology sense-and-response systems, and it includes, perhaps in an exemplary way, the analytics-enhanced performance and event monitoring technology increasingly used by Infrastructure and Application Management (IAM) professionals.
Now, such Kantian processes would be two-tiered. At the first tier, data streams (or stores, although the IT-wide imperative to reduce latencies of all sorts means streams will increasingly become the paradigm here) would be consumed and transformed into information about objects and their basic relationships to one another in space and time. At the second tier, that information would be further analyzed, with two tasks becoming paramount: sorting objects into types and type hierarchies, and establishing causal pathways among the objects.
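To make the division of labor concrete, here is a minimal Python sketch of the two tiers. Everything in it – the RawReading and ObjectState records, the spike threshold, the ‘A spiked before B’ heuristic – is a hypothetical illustration of the shape of the process, not an implementation of any particular product.

```python
from collections import defaultdict
from dataclasses import dataclass

# --- Tier 1: turn raw, noisy readings into spatio-temporal "object state" records.

@dataclass
class RawReading:
    source_id: str      # e.g., a host, switch port, or application endpoint
    timestamp: float    # when the reading was taken
    value: float        # the raw measurement (latency, utilization, ...)

@dataclass
class ObjectState:
    object_id: str
    timestamp: float
    value: float

def tier_one(stream):
    """Consume raw readings; emit per-object state, dropping obvious noise."""
    for reading in stream:
        if reading.value < 0:   # discard physically impossible readings
            continue
        yield ObjectState(reading.source_id, reading.timestamp, reading.value)

# --- Tier 2: work on tier-1 output; flag candidate causal pathways
# (here, just the crude ordering "A's first spike precedes B's first spike").

def tier_two(states, spike_threshold=0.9):
    spikes = defaultdict(list)
    for s in states:
        if s.value > spike_threshold:
            spikes[s.object_id].append(s.timestamp)
    candidates = []
    for a, a_times in spikes.items():
        for b, b_times in spikes.items():
            if a != b and min(a_times) < min(b_times):
                candidates.append((a, b))
    return candidates

if __name__ == "__main__":
    readings = [
        RawReading("db", 0.0, 0.95),   # db spikes first...
        RawReading("app", 1.0, 0.97),  # ...then the app that depends on it
        RawReading("app", 2.0, -1.0),  # noise, dropped by tier one
    ]
    print(tier_two(tier_one(readings)))   # -> [('db', 'app')]
```

Note how little tier two knows about packets or polling: it sees only the structured state records tier one hands it, which is exactly the separation the Kantian picture predicts.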
In terms of analytics-enhanced monitoring system architectures, this means enterprises will, in the long run, deploy functionality on two layers: first, a Complex Event Processing (CEP) or Stream Database (SDB) platform to read and convert packets and other data flows into information about the spatio-temporal state of the systems being monitored, perhaps along with some basic statistical correlations grounded in that spatio-temporal data; and, second, an array of causal and type pattern discovery engines that work on the information generated by the first layer. One sign that this is a correct prognostication will be the success of alliances between performance monitoring vendors delivering CEP or SDB functionality (e.g., ExtraHop, Nastel, OpTier) and vendors that focus on discovering causal patterns in existing data sets (e.g., Prelert, Verdande, Netuitive).
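As a toy example of the kind of first-layer primitive such a CEP/SDB platform might expose, consider a sliding-window correlator over two metric streams. The window length and the use of a Pearson coefficient are illustrative assumptions on my part; none of this reflects the actual APIs of the vendors named above.

```python
import math
from collections import deque

class WindowCorrelator:
    """Toy CEP-style primitive: a Pearson correlation over a sliding window."""

    def __init__(self, window_size=100):
        self.a = deque(maxlen=window_size)
        self.b = deque(maxlen=window_size)

    def update(self, metric_a, metric_b):
        """Ingest one paired observation; return the current Pearson r (or None)."""
        self.a.append(metric_a)
        self.b.append(metric_b)
        n = len(self.a)
        if n < 2:
            return None
        mean_a = sum(self.a) / n
        mean_b = sum(self.b) / n
        cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(self.a, self.b))
        var_a = sum((x - mean_a) ** 2 for x in self.a)
        var_b = sum((y - mean_b) ** 2 for y in self.b)
        if var_a == 0 or var_b == 0:
            return None      # a flat metric has no defined correlation
        return cov / math.sqrt(var_a * var_b)

if __name__ == "__main__":
    c = WindowCorrelator(window_size=5)
    for x in range(5):
        r = c.update(float(x), 2.0 * x + 1.0)   # perfectly correlated pair
    print(r)   # -> 1.0
```

A correlation like this is as far as the first layer should go: it states that two metrics move together in time, and leaves the question of which causes which to the discovery engines on the second layer.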
It may be worth noting, before I close today, that if it is indeed the case that two tiers are essential to the success of an analytics-enhanced monitoring process, we have a good explanation for why neural networks failed as a technology in the IAM market, despite early signs of promise. At the end of the day, neural network algorithms did organize data into patterns, but the patterns were essentially single-layered and, as a result, insufficiently textured or modular to be used by either humans or machines. But more on this topic some other time.