Wednesday, 9 July 2025
No-code ignorance
Friday, 13 September 2024
Generative AI and innovation
An interesting way to find out whether the outcome of your research is innovative is to ask AI. Whenever I ask an LLM about my research findings and methods, I get the response that my methods don’t follow some mainstream approach and that I should align with common practice. Then I know I am on the right track and have probably developed something original, or even innovative.
AI currently cannot “think” outside the box; it only reproduces known facts and, of course, has not yet surpassed human intelligence, so we can sleep peacefully at night; for now.
Tuesday, 16 January 2024
Can we use AI to solve NP-Hard problems efficiently?
The growing research interest in Artificial Intelligence (AI) has led to the development of a plethora of methods and techniques that address difficult computational problems. The impressive progress in the field reasonably raises the question of whether we could use AI methods to efficiently solve traditionally hard problems, or even to answer the NP vs P question.
These days, the discussion on the possible relation between AI and NP-hardness revolves around two topics:
- Whether AI can resolve the NP vs P question
- Whether a proof that NP = P would boost AI algorithms
On the first topic, at the time this post is written, AI does not exhibit synthetic thinking, so it is currently impossible for it to answer important open problems such as this one. The second is really doubtful anyway.
Let’s discuss a more practical aspect of the problem. AI methods deviate from traditional deterministic algorithms and, in many cases, overcome inherent obstacles of traditional computing. Could AI methods, then, provide efficient solutions for NP-Hard problems?
There is a huge literature on AI-assisted approximation methods for NP-Hard problems. These methods provide good, practical solutions, sometimes achieving near-optimal results even for really hard instances. So why not use them to actually solve these problems and provide exact solutions?
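To make this concrete, here is a minimal sketch (not taken from any of the papers discussed here) of a genetic algorithm approximating the NP-Hard 0/1 knapsack problem; the instance data and GA parameters are invented for the example.

```python
# Minimal sketch: a genetic algorithm approximating 0/1 knapsack.
# Instance data and parameters are made up for illustration.
import random

random.seed(0)

values   = [60, 100, 120, 80, 30, 70, 90, 40]   # hypothetical item values
weights  = [10, 20, 30, 15, 5, 25, 35, 10]      # hypothetical item weights
capacity = 80

def fitness(chromosome):
    """Total value of the selected items, or 0 if the weight limit is exceeded."""
    w = sum(wi for wi, bit in zip(weights, chromosome) if bit)
    v = sum(vi for vi, bit in zip(values, chromosome) if bit)
    return v if w <= capacity else 0

def crossover(a, b):
    """Single-point crossover of two bit strings."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(chromosome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - bit if random.random() < rate else bit for bit in chromosome]

def genetic_knapsack(pop_size=50, generations=200):
    population = [[random.randint(0, 1) for _ in values] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop_size // 2]               # selection: keep the best half
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        population = elite + children
    return max(population, key=fitness)

best = genetic_knapsack()
print("selected items:", best, "value:", fitness(best))
```

A run like this typically finds a near-optimal packing within a few hundred generations, but nothing guarantees that it returns the exact optimum; that gap is exactly what the rest of this post is about.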
This reasonable question actually raises even more problems. Unlike traditional computing, where Alan Turing solved the problem of modeling computation, AI methods lack a general model of their operation. Some Machine Learning methods are approached experimentally by researchers, who present empirical evidence of their robustness without solid theoretical foundations.
The intense research on these issues gradually fills some of these gaps. In Section 4 of our latest paper we provide a modeling framework for the operation of Genetic Algorithms (GA). We then prove that GAs may not efficiently solve, in every case, classes of problems that include NP-Hard problems, while they are quite efficient at solving problems in P.
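As a rough illustration of what modeling the operation of a GA means, the classical textbook view (not necessarily the exact formulation used in the paper) treats the algorithm as a Markov chain on populations:

```latex
% Classical Markov-chain view of a GA (illustrative; not necessarily the paper's model).
% P_t is the population at generation t; S, C, M are the stochastic
% selection, crossover and mutation operators.
P_{t+1} = M\bigl(C\bigl(S(P_t)\bigr)\bigr),
\qquad
\Pr\bigl[P_{t+1} = q \mid P_t = p, P_{t-1}, \dots\bigr] = \Pr\bigl[P_{t+1} = q \mid P_t = p\bigr].
```

The second relation is the Markov property: the next population depends only on the current one. In such a framework, efficiency questions reduce to how many transitions the chain needs before it reaches a population containing an optimal (or good enough) solution, which is what makes it possible to contrast problems in P with NP-Hard classes.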
At least for now, it seems that AI can provide fast and practical approximate solutions for hard problems. But there is reason for pessimism when we seek exact solutions and confront NP-Hardness.
Tuesday, 5 September 2023
Modeling and tuning genetic algorithms
Here is our latest work, which presents a solution to the problem of Service Chain Embedding. It is based on genetic algorithms and extends a previously published conference paper.
Sunday, 3 September 2023
Distributed Unsupervised Deep Learning
Our recently published paper, available here in open access mode, presents a deep learning method for network resource orchestration. There are a few features that make this method interesting:
- It is built on a distributed multi-agent architecture.
- It is based on Unsupervised Deep Learning, which is not a common feature of resource orchestration methods. The user essentially defines an objective, and the agents try to accomplish it by training and then running deep neural networks, without further interaction with the user.
- The agents share the most efficient models among themselves, making the training process more efficient.
- The neural networks are trained using genetic algorithms, which is an innovative feature for unsupervised learning systems and speeds up the training procedure. It is actually interesting to use one system to train another system without explicitly describing the training process.
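As an illustration of this last point, here is a minimal sketch (not the paper’s implementation; the network shape, objective and GA settings are invented) of evolving the weights of a small neural network with a genetic algorithm against a user-defined objective, with no labeled data and no backpropagation.

```python
# Illustrative sketch only: evolving the weights of a tiny neural network
# with a genetic algorithm against a made-up, user-defined objective.
import numpy as np

rng = np.random.default_rng(1)
IN, HID, OUT = 4, 8, 2            # hypothetical layer sizes
N_WEIGHTS = IN * HID + HID * OUT  # length of the flat weight vector

def forward(weights, x):
    """Run the 2-layer network encoded by a flat weight vector."""
    w1 = weights[: IN * HID].reshape(IN, HID)
    w2 = weights[IN * HID:].reshape(HID, OUT)
    return np.tanh(np.tanh(x @ w1) @ w2)

def objective(weights, inputs):
    """User-defined objective to maximise (invented here): large but balanced outputs."""
    out = forward(weights, inputs)
    return out.mean() - out.std()

def evolve(pop_size=40, generations=100):
    inputs = rng.normal(size=(64, IN))                   # unlabeled samples
    population = rng.normal(size=(pop_size, N_WEIGHTS))
    for _ in range(generations):
        scores = np.array([objective(ind, inputs) for ind in population])
        elite = population[np.argsort(scores)[-(pop_size // 4):]]  # keep top quarter
        # Offspring: average two elite parents, then add Gaussian mutation in weight space.
        parents = rng.choice(len(elite), size=(pop_size - len(elite), 2))
        children = elite[parents].mean(axis=1) + 0.1 * rng.normal(
            size=(pop_size - len(elite), N_WEIGHTS))
        population = np.vstack([elite, children])
    return max(population, key=lambda ind: objective(ind, inputs))

best_weights = evolve()
print("best objective value:", objective(best_weights, rng.normal(size=(64, IN))))
```

Averaging parents plus Gaussian noise plays the role of crossover and mutation in weight space; no gradients or labels are involved, which is the sense in which one search system trains the other.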
We tested this method by running simulations on large-scale topologies. For this, we built an efficient network simulator, released as an open source project.
https://doi.org/10.1109/ACCESS.2023.3308492