Dear Researchers,
I’m Ziad, and I’ve been working on a project I believe is now ready to be shared with the broader scientific and technical community.
About four years ago, I designed a multilingual programming language, just before the rise of generative AI and LLMs. As these technologies emerged, I quickly integrated generative capabilities into the language, allowing developers to write instructions in natural language; these instructions were then transformed into formal source code before being tokenized, parsed, and executed.
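Concretely, the front end worked roughly like the sketch below. This is a minimal illustration under my own assumptions: translate_to_source and run are hypothetical names, and a single hard-coded rule stands in for the generative model, just to keep the example self-contained.

```python
# Minimal sketch of the NL -> source -> parse -> execute pipeline.
# A toy rule-based translator stands in for the generative model.
import ast

def translate_to_source(instruction: str) -> str:
    """Map a natural-language instruction to formal source code (stubbed)."""
    if instruction == "print the sum of 2 and 3":
        return "print(2 + 3)"
    raise ValueError(f"no translation rule for: {instruction!r}")

def run(instruction: str) -> None:
    source = translate_to_source(instruction)   # NL -> formal source code
    tree = ast.parse(source)                    # tokenize + parse
    exec(compile(tree, "<generated>", "exec"))  # execute

run("print the sum of 2 and 3")  # prints 5
```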
However, through experimentation and deeper exploration, I realized that simply layering human language on top of programming constructs didn't truly add anything new. Shortly after releasing early beta versions with natural-language support, I noticed that most users preferred AI agents (such as code assistants or LLMs) that already offered a more fluid and helpful experience. This made the feature redundant: it didn't generalize well, lacked transparency and precision, and ultimately failed to compete with existing tools.
So I shifted direction and started a new side project focused on AI systems that can self-evolve.
Here's what I’ve built so far:
As far as I know, most current AI agents are narrowly specialized and operate within predefined boundaries. The idea of a general-purpose agent that can learn to perform entirely new tasks, and autonomously expand its own codebase by composing intermediate representation (IR) skill nodes, has largely remained hypothetical.
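To make that concrete, here is a minimal sketch of the composition idea. SkillNode, SkillGraph, and compose are hypothetical names chosen for illustration, not the actual API of my system; the point is only that the agent's skill graph can grow at runtime by chaining existing nodes into new ones.

```python
# Sketch of skill-node composition (all names are illustrative assumptions).
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class SkillNode:
    """One node in the agent's skill graph: a named, executable capability."""
    name: str
    run: Callable[[object], object]

class SkillGraph:
    """Registry the agent can extend at runtime with newly composed skills."""
    def __init__(self) -> None:
        self.nodes: Dict[str, SkillNode] = {}

    def add(self, node: SkillNode) -> None:
        self.nodes[node.name] = node

    def compose(self, new_name: str, *skill_names: str) -> SkillNode:
        """Create a new skill by chaining existing ones, then register it."""
        steps = [self.nodes[n] for n in skill_names]
        def pipeline(x: object) -> object:
            for step in steps:
                x = step.run(x)
            return x
        node = SkillNode(new_name, pipeline)
        self.add(node)  # the graph, and so the agent, grows
        return node

# Usage: start with two primitive skills, then "learn" a new task by composing them.
graph = SkillGraph()
graph.add(SkillNode("tokenize", lambda s: s.split()))
graph.add(SkillNode("count", len))
word_count = graph.compose("word_count", "tokenize", "count")
print(word_count.run("agents that grow their own skill graphs"))  # -> 7
```

In the full system, each node would presumably wrap an IR fragment rather than a plain Python function, but the growth mechanism would be the same.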
My question is:
Although the system shows promising results, I didn't invent fundamentally new technologies; I built upon existing techniques and integrated them in a novel way. Would a contribution like this still be considered worthy of academic publication?
Best regards,
Ziad Rabea