Learner as Autonomous Human Learning User
Notwithstanding the differences among these pre-existing Learning Theories, they share one overriding commonality: there is little or no ambiguity about who the user/ learner is.
In short, the user/ learner is a human being with full autonomy in their range of choices, actions and thinking. The closest any of them comes to treating the user/ learner as a passive, non-proactive entity is Behaviourism, where the learner absorbs and is conditioned by his/ her environmental cues. Even there, the learner retains his/ her agency, albeit in a seemingly unreflective form.
One might therefore summarise the learner/ user as follows:
Learner as Machine Learning Server/ Servicing/ Serving
What all these Learning Theories clearly assume is the primacy of the human user on the interface side. What they don't foreground is the underlying governance and governmentality that direct how the engines of machine learning operate on the other side of the human user interface.
If we were to step into the machine learning engines, especially the artificial intelligence (AI) used to dice and recombine the engines' libraries, services and objects, we would then see the human learner less in terms of their seeming autonomy on the human interface side. Thus, on the human user level, one might assume it is the human learner who is enacting decisions (Behaviourism), processing the cognitive load (Cognitivism), self-constructing with scaffolded support to reach difficult knowledge frontiers (Constructivism), and connecting with others and broader communities to form networked nodes of learning (Connectivism).
The Constitution, Governance (Meta-Government) & Governmentality (Meta-Governance) of Human and Machine Learning
What these Learning Theories therefore side-step are the issues of systemic externalities. The machine learning engines can increasingly capture, track and pattern all the human users' movements and choices.
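As an illustration of what such capture and patterning can look like in practice, here is a minimal sketch, assuming a hypothetical learning platform that logs each learner action as an event record. The event names, fields and IDs are invented for the example, not drawn from any actual system.

```python
from collections import Counter

# Hypothetical event log: each record is one captured learner action.
# Event names, fields and IDs are invented for this illustration.
events = [
    {"learner_id": "L001", "action": "open_resource", "resource": "quiz_3"},
    {"learner_id": "L001", "action": "submit_answer", "resource": "quiz_3"},
    {"learner_id": "L002", "action": "open_resource", "resource": "video_7"},
]

# "Patterning" in its crudest form: build a per-learner profile by counting actions.
profiles: dict[str, Counter] = {}
for event in events:
    profiles.setdefault(event["learner_id"], Counter())[event["action"]] += 1

print(profiles)
# {'L001': Counter({'open_resource': 1, 'submit_answer': 1}),
#  'L002': Counter({'open_resource': 1})}
```

Even this toy version shows the asymmetry at stake: the learner experiences a sequence of clicks, while the engine quietly accumulates a profile the learner never sees.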
Here we are dealing, to begin with, with issues of disembodied security, memory and privacy in the big data collected from these movements and choices. The accessibility and equality issues of testing and benchmarking have been with us since the No Child Left Behind Act (military speak to highlight that education is at war), albeit increasingly shifting into the machine learning engine, well away from most stakeholders' understanding.
We are also dealing with issues of learning speed. For example, Cognitivism assumes our mental processes have parallels to a desktop configuration. Its chief concern is cognitive load, namely that information and knowledge need to be broken down for processing. It is the brain's CPU equivalent that is our main bottleneck when it comes to acquiring expertise.
Exponential Change in Machine Learning, Architectures, Embodiments & Engines versus Static Human Forms
Yet even desktop computers have moved beyond the original standalone version suggested in Cognitivism. Anyone who has had their hands in the innards of a computer from the 1980s to the present will recall how master-slave drive configuration was determined manually with a jumper. Eventually, the sequencing of drives would be adjusted no longer at the hardware level, but via software.
Before long came the server, with everyone else as thin client. The thin client was then set loose through wirelessness, giving perhaps a false impression that one is anything but a client of a distant server. Mobile computing and communications have taken this invisible server-client relationship further, with even more human disembodiment and mind substitution. Soon enough, most of what concerns our minds is situated in the cloud. Thus anyone still dealing with personal computing has shifted their processing concerns out of the desktop and into the firewall and gateway.
Computer architectures have also changed dramatically: more cores, more buses, more cooling, more hybrid memory states, and more RAM capacity, to name just a few upgrades. In keeping with Moore's Law of exponential growth in transistor counts, it is clear that our human mind struggles to keep up with the growth of those machine transistors.
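To make the exponential claim concrete, here is a back-of-envelope sketch, assuming the commonly quoted doubling period of roughly two years and taking the Intel 4004's approximately 2,300 transistors (1971) as a starting point; the numbers are illustrative rather than a precise industry history.

```python
# Back-of-envelope Moore's Law: transistor counts roughly double every two
# years, while human cognitive capacity stays essentially flat.
base_transistors = 2_300        # Intel 4004, 1971 (approximate)
doubling_period_years = 2       # commonly quoted doubling interval

def transistors_after(years: int) -> int:
    """Estimated transistor count after `years`, assuming strict doubling."""
    return int(base_transistors * 2 ** (years / doubling_period_years))

for years in (10, 20, 40):
    print(f"{1971 + years}: ~{transistors_after(years):,} transistors")
# 1981: ~73,600
# 1991: ~2,355,200
# 2011: ~2,411,724,800
```

Forty years of doubling turns thousands into billions; no comparable curve exists for the human learner.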
We may still be by far the most educated demographic in the whole history of humanity, but it is clear that if our learning methodologies focus on developing speed and overcoming speed-related bottlenecks, we will remain a poor substitute for, or replica of, machine learning.
In effect, we may have simply positioned ourselves to compete against one another as benchmarked by machine learning. This is a game with a similar logic to the Arms Race, which is still expanding the existing possibilities of mutually assured destruction (MAD).
Surviving the MAD Race
Of course, the internet started out as a means of surviving a nuclear attack. But paradoxically, we have reached a new dawn in terms of targeted attacks, should one wish to invert the governmentality of these machine learning services as servers. Surveillance is a huge state and global industry.
Reconfiguring the master-slave relationship can now be done at the speed of light. And with it come questions of agency and autonomy for the human user/ learner.
Limits & Strengths of Human Actors against Virtual Actors
I ask these questions not so much from a Teachers' Union perspective, though I can see that unions might seize on this platform in the face of cost reductions and staff redundancies. Rather, I ask them out of a concern that we may have thrown out the baby with the bathwater.
Once that baby is gone, our idea of the physical school with real teachers and students will go the same way. I am all for augmentation and for freeing teachers from routine work. One could certainly design some nifty analytics for both parents and teachers.
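As a minimal sketch of what such analytics might look like, assume a hypothetical set of weekly activity records per learner; the field names and figures are invented for the example, and a real report would of course draw on the platform's own data.

```python
from statistics import mean

# Hypothetical weekly activity records for one class; fields are illustrative.
records = [
    {"learner": "Ana",  "minutes_on_task": 145, "tasks_completed": 6},
    {"learner": "Ben",  "minutes_on_task": 80,  "tasks_completed": 3},
    {"learner": "Cleo", "minutes_on_task": 210, "tasks_completed": 9},
]

class_avg = mean(r["minutes_on_task"] for r in records)

# A plain-language summary a parent or teacher could actually read.
for r in records:
    relative = "above" if r["minutes_on_task"] >= class_avg else "below"
    print(f'{r["learner"]}: {r["tasks_completed"]} tasks, '
          f'{r["minutes_on_task"]} min on task ({relative} class average).')
```

The point of keeping it this simple is that the output remains legible to the humans it is meant to serve, rather than disappearing into the engine.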
Theorising Sustainable Humanism & Mindful Learning
Such questions presume schools are already post-digital. Clearly, most are not; they are just on the threshold of introducing ICT. Given that kids are already enormously tech-savvy outside of school, schools do have a catch-up game to play.
However, we as learning technologists haven't factored in how attention span is being scattered and wasted. This is different from being concerned with regulating cognitive load.
At this stage, such theorising is necessarily sketchy.
Turbo speed has left us with a loss of care. This is an ironic outcome, since machine augmentation is meant to free up more time for us. Instead, it has become a platform for doing more at even quicker speeds. At a certain stage, there is a loss of care and inevitable burnout which, in turn, has to be disguised in a toxic and competitive work environment.
https://www.youtube.com/watch?v=_IeBcecwcQw
https://www.youtube.com/watch?v=2qCaW1_4LBQ