Artificial intelligence is very limited and can't be used smoothly by anyone, anywhere. It can't predict what a human, an animal, or even a plant thinks in the next second, minute, hour, or day. It can't work without algorithms.
I believe the next step is for artificial intelligence to learn to understand human cognition (feelings, desires, and intuitions). Mathematically it is possible, but we still need research; I have articles on the subject (MDEI Theory).
The field of artificial intelligence is shifting from massive, general-purpose models to smaller, specialized systems tailored for specific tasks. This change is driven by the need for efficiency, cost-effectiveness, and real-world usability. Rather than relying on a single AI model, businesses are adopting multi-model strategies to enhance accuracy, reduce computational costs, and integrate AI into operational environments where reliability matters most.
You are absolutely right, and you have clearly pointed out the need for experts to use AI. I believe this post should be read by those who claim that AI will take away jobs: it still needs people who can use it, and it cannot work on its own.
Thank you for your thoughtful post — these are important and widely discussed questions in both AI research and mathematics education. I’d like to respond by reframing your points through the lens of current AI capabilities, limitations, and future trajectories — all grounded in mathematical and pedagogical reasoning.
1. “AI is very limited, and can't be used smoothly by anyone and anywhere.”
You’re right — and that’s by design (for now).
AI today is largely narrow (or weak) AI — meaning it’s engineered to perform specific tasks (e.g., image recognition, language translation, recommendation systems). It’s not general-purpose like human cognition.
But “smooth use by anyone anywhere” is rapidly improving:
Democratization of AI: Tools like ChatGPT, Gemini, Copilot, and open-source models (Llama, Mistral) are lowering barriers. You don’t need to code or understand algorithms to use them.
Edge AI & Mobile AI: AI is moving to smartphones, wearables, and IoT devices — usable offline, anywhere.
Mathematics Education Angle: In classrooms, AI tutors (e.g., Khanmigo, Squirrel AI) are already adapting to individual student needs — a form of “smooth use” tailored to learners.
➡ What’s next? Ubiquitous, personalized, context-aware AI assistants — think of an AI that knows you’re a 7th grader struggling with fractions and adjusts explanations in real-time, using your learning history and emotional cues.
2. “It can't predict what a human, animal, or even a plant thinks in the next (second, minute, hour, day…).”
Also correct — but let’s unpack “think.”
Humans, animals, and plants don’t “think” in the computational sense; they respond to stimuli via biological, emotional, and environmental systems.
AI doesn’t “predict thoughts,” but it can predict behaviors with surprising accuracy:
Recommender systems predict which video you’ll click next.
Weather + soil AI predicts how a plant will grow.
Neuroscience + AI models are beginning to decode brain activity patterns (e.g., reconstructing images from fMRI).
➡ What’s next? Probabilistic modeling of complex systems — using AI + mathematics (Bayesian networks, dynamical systems, causal inference) to forecast not “thoughts,” but likely responses of biological systems under given conditions.
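As a small, hedged illustration of that probabilistic framing, here is Bayes’ rule updating the chance that a plant is drought-stressed after we observe wilting. Every number below (the prior, the two likelihoods) is made up purely for the example, not real agronomy data:

```python
# Illustrative Bayesian update: P(stressed | wilting) from hypothetical numbers.
# The probabilities are invented for demonstration, not measured values.

def bayes_update(prior, likelihood, false_positive_rate):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

p_stressed = 0.20          # prior: assume 20% of plants are drought-stressed
p_wilt_if_stressed = 0.90  # a stressed plant shows wilting 90% of the time
p_wilt_if_healthy = 0.10   # a healthy plant shows wilting 10% of the time

posterior = bayes_update(p_stressed, p_wilt_if_stressed, p_wilt_if_healthy)
print(f"P(stressed | wilting) = {posterior:.3f}")
```

The point is the mechanics, not the numbers: the model forecasts a likely response of a biological system under given conditions, never a “thought.”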
Fun fact: Plants don’t “think,” but they do exhibit “intelligence” via signaling networks. AI is being used to model these networks — e.g., predicting how a plant will respond to drought stress.
3. “It can't work without Algorithms.”
Absolutely true — and that’s not a flaw, it’s a feature.
Everything digital runs on algorithms — your phone, your calculator, even this text. AI is no exception.
But here’s the key insight:
AI doesn’t just use algorithms — it learns them.
Traditional software: humans write explicit rules (algorithms).
Machine Learning: AI infers algorithms (models) from data.
Deep Learning: AI discovers hierarchical representations — essentially, self-generated algorithms for perception, language, etc.
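The contrast between hand-written rules and learned ones can be sketched in a few lines of Python. Everything here, including the toy scores, the pass mark, and the helper names `explicit_rule` and `learn_threshold`, is hypothetical and exists only to show the idea:

```python
# Contrast: a rule a human writes vs. a rule a program infers from data.
# Toy dataset of (exam_score, passed) pairs, invented for illustration.

data = [(35, 0), (50, 0), (55, 0), (62, 1), (70, 1), (90, 1)]

# Traditional software: a human encodes the rule explicitly.
def explicit_rule(score):
    return 1 if score >= 60 else 0

# Machine learning, in its most minimal form: search for the cutoff
# that best matches the labeled examples. The rule is inferred, not coded.
def learn_threshold(examples):
    candidates = sorted(s for s, _ in examples)
    return max(candidates,
               key=lambda t: sum(int(s >= t) == y for s, y in examples))

learned_t = learn_threshold(data)
print("learned threshold:", learned_t)
```

A real ML system replaces the brute-force threshold search with optimization over millions of parameters, but the principle is the same: the data shapes the algorithm.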
➡ What’s next? Meta-learning and algorithm discovery — AI systems that invent new algorithms or optimize existing ones autonomously. Google’s “AutoML” and AI-driven theorem provers (like AlphaGeometry) are early examples.
So… What’s NEXT for AI?
Here’s my three-part forecast as a mathematics educator:
1. From Narrow to Adaptive Intelligence
AI will become more context-aware, emotionally intelligent (affective computing), and personalized — especially in education. Imagine an AI tutor that detects frustration from your keystrokes and switches teaching strategies.
2. From Prediction to Co-Creation
AI won’t just predict — it will collaborate. In math classrooms, AI might co-solve problems with students, offering hints, visualizations, or counterexamples — like a Socratic partner.
3. From Black Box to Explainable & Ethical Frameworks
The future of AI must be transparent, especially in education. We’re developing Explainable AI (XAI) and AI Literacy curricula so students (and teachers) understand how AI works — not just how to use it.
AI is Mathematics in Action
At its core, AI is applied mathematics:
Linear algebra → neural networks
Probability → Bayesian reasoning
Calculus → gradient descent
Graph theory → knowledge representation
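The calculus → gradient descent mapping can be made concrete. The sketch below, using a hypothetical one-variable function and an arbitrary learning rate, is the training loop of a neural network stripped to its bare minimum: follow the negative gradient downhill until you reach the minimum.

```python
# Minimal gradient descent: minimize f(x) = (x - 3)^2.
# Its derivative is f'(x) = 2(x - 3); we step against the gradient.

def grad(x):
    return 2 * (x - 3)

x = 0.0    # arbitrary starting point
lr = 0.1   # learning rate (step size), chosen for illustration
for _ in range(100):
    x -= lr * grad(x)

print(f"minimum found near x = {x:.4f}")  # converges toward x = 3
```

Training a deep network is this same loop run over millions of parameters at once, with the gradient supplied by the chain rule (backpropagation).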
The “limitations” you point out are not dead ends — they’re research frontiers. And as a mathematics educator, I see AI not as a replacement for human thought, but as a cognitive partner — one that, with proper guidance, can help us think deeper, teach better, and learn smarter.