Dear Researchers,

I’m Ziad, and I’ve been working on a project I believe is now ready to be shared with the broader scientific and technical community.

About four years ago, I designed a multilingual programming language, just before the rise of generative AI and LLMs. As these technologies emerged, I quickly integrated generative capabilities into the language — allowing developers to write instructions in natural human language. These were then transformed into formal source code before being tokenized, parsed, and executed.

However, through experimentation and deeper exploration, I realized that simply layering human language on top of programming constructs didn’t truly add something new. Shortly after releasing early beta versions with natural language support, I noticed that most users preferred using AI agents (like code assistants or LLMs) that already offered a more fluid and helpful experience. This made the feature redundant — it didn’t generalize well, lacked transparency and precision, and ultimately failed to compete with existing tools.

So, I shifted direction and started a new side project focused more deeply on AI systems that can self-evolve.

Here's what I’ve built so far:

  • A custom intention-recognition neural network that grows as the user interacts with the model.
  • A local LLM used for conversational interaction.
  • A custom intermediate representation (IR) — a structured language that breaks tasks into modular steps that are easy for the LLM to generate.
  • A parser and executor that translates this IR into Python code, enabling flexible automation.
  • An AI model trained to understand and generate these IR nodes for skills and tasks based on task intent — effectively learning simple programs like "add two numbers" or "sort a list" on its own.
  • A mechanism for the model to improve through self-correction and introspection — forming the foundation of self-evolving software.
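To make the IR idea concrete, here is a minimal sketch of what modular task nodes and an executor that evaluates them with plain Python might look like. Everything here — the `IRNode` schema, the `Executor` class, and the primitive names — is my own illustrative assumption, not the actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class IRNode:
    """One modular step in a task; args may be literals or nested nodes."""
    op: str                                    # e.g. "add", "sort"
    args: list = field(default_factory=list)

class Executor:
    """Walks an IR tree and evaluates each node with ordinary Python."""
    PRIMITIVES = {
        "add": lambda a, b: a + b,
        "sort": lambda xs: sorted(xs),
    }

    def run(self, node):
        if not isinstance(node, IRNode):
            return node                        # literal leaf value
        args = [self.run(a) for a in node.args]
        return self.PRIMITIVES[node.op](*args)

# A composed "skill": nested nodes form a small program.
program = IRNode("add", [2, IRNode("add", [1, 1])])
result = Executor().run(program)               # → 4
```

In a design like this, an LLM would only need to emit small, well-typed node trees rather than free-form code, which is what makes the steps easy to generate and to check before execution.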

As far as I know, most current AI agents are narrowly specialized and operate within predefined boundaries. The idea of a general-purpose agent that can learn how to perform entirely new tasks — and autonomously expand its own codebase by composing intermediate representation (IR) skill nodes — has largely remained theoretical.

My question is:

Although the system shows promising results, I didn’t invent fundamentally new technologies — I built upon existing techniques and integrated them in a novel way. Would a contribution like this still be considered worthy of academic publication?

Best regards, Ziad Rabea
