Tuesday, 2 December 2025

Wanted

Thursday, 20 November 2025

Structured mathematical proofs

Here is a lecture on how to write structured proofs for mathematical theorems, by Leslie Lamport.

I have started adopting this writing style in my papers, and I think it is really helpful: it makes the paper more comprehensible and elegant.
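For illustration, here is a minimal sketch of the hierarchical step numbering that this style uses. It is my own toy example (a trivial statement about even numbers) written as a plain LaTeX fragment, not an excerpt from the lecture.

```latex
% My own toy example of the hierarchical step structure (not from the lecture).
% Statement: if $n$ is even, then $n^2$ is even.
$\langle1\rangle1.$ $n = 2k$ for some integer $k$.\\
\textsc{Proof:} By the definition of an even number.

$\langle1\rangle2.$ $n^2 = 2(2k^2)$.\\
\textsc{Proof:} Squaring the expression in step $\langle1\rangle1$ gives $n^2 = 4k^2 = 2(2k^2)$.

$\langle1\rangle3.$ Q.E.D.\\
\textsc{Proof:} $2k^2$ is an integer, so by step $\langle1\rangle2$, $n^2$ is twice an integer.
```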

 


 

Sunday, 13 July 2025

Comparing papers on Context-Awareness

In 2018, a paper I wrote on context-awareness was published. I asked Gemini to compare it against the dominant paper in the field, and the result is very interesting.

me: 
Compare the seminal paper of Dey on context awareness with Rodis's relevant paper

Gemini:

When comparing the seminal work of Dey on context awareness to the relevant work of Rodis, we observe a progression from foundational definitions and conceptual models to more formal, mathematically grounded approaches and their application in distributed systems.

Dey's Seminal Work on Context Awareness:

The seminal paper in context awareness is often attributed to Dey and Abowd, particularly their work around the early 2000s (e.g., Dey & Abowd, "Towards a Better Understanding of Context and Context-Awareness," 2000 or 2001, and "Understanding and Using Context," 2001).

  • Key Contributions:

    • Definition of Context: Dey provided a widely adopted definition of context as "any information that can be used to characterize the situation of an entity." An entity can be a person, place, or object relevant to the interaction between a user and an application.

    • Definition of Context-Awareness: A system is context-aware if it uses context to provide relevant information and/or services to the user, where relevancy depends on the user's task.

    • Conceptual Framework: Dey's work focused on developing conceptual models and tools to support the rapid development of context-aware applications. This helped designers understand what constitutes context and how it can be used effectively.

    • Focus on User and Application: The emphasis was on how context can enhance human-computer interaction by making applications more adaptable and user-centric, often within the realm of ubiquitous computing.

    • Practical Implications: Dey's work highlighted the challenges in building context-aware applications, such as the need for robust context acquisition mechanisms and methods for interpreting sensed data to infer higher-level contextual information.

Rodis's Relevant Work on Context Awareness:

Panteleimon Rodis's work, particularly his paper "On defining and modeling context-awareness," offers a more theoretical and mathematical approach to the subject, building upon the earlier conceptualizations.

  • Key Contributions:

    • Mathematical Foundations: Rodis aims to provide a more rigorous mathematical framework for defining and modeling context-awareness. He draws upon the theory of computation, extending classical computational models (like the Turing Machine) to incorporate interaction and networking capabilities.

    • Formal Definitions: While acknowledging existing definitions (like Dey's), Rodis seeks to provide more precise and less vague definitions of context and context-awareness, addressing potential ambiguities in plain language descriptions.

    • Connection to Distributed Computation: A significant aspect of Rodis's work is the investigation of the relationship between modern context-aware systems and distributed computation. He emphasizes the growing relevance of cloud services and networked software components, arguing that a robust modeling framework must consider distributed aspects.

    • Modeling Methodology: Rodis proposes a modeling methodology for context-aware and distributed systems, providing tools for mapping the structure and functionality of such systems within a formal computational framework.

    • System-Centric vs. User-Centric: Rodis discusses the different perspectives (user-centric, data-driven, system-centric) in defining context and how these choices influence the generality and restrictiveness of the resulting models. His work leans towards providing a more general, foundational understanding.

Comparison and Evolution:

  1. Scope and Abstraction Level:

    • Dey: Provided a foundational, widely accessible conceptual framework and definitions that greatly influenced the field's early development, particularly in human-computer interaction and ubiquitous computing. His work is more descriptive and user-focused.

    • Rodis: Offers a more abstract, formal, and mathematically grounded approach. His work delves into the computational underpinnings and implications for distributed systems, aiming for a more rigorous and less ambiguous definition.

  2. Methodology:

    • Dey: Primarily focused on conceptual models, practical considerations for application development, and empirical investigation of interaction design.

    • Rodis: Employs theoretical computer science, particularly computational models, to build a formal framework for understanding and modeling context.

  3. Emphasis:

    • Dey: Emphasized the "what" and "why" of context awareness – what it is and why it's useful for users and applications.

    • Rodis: Focuses more on the "how" from a computational perspective – how context can be formally defined, modeled, and managed within complex, distributed systems.

  4. Target Audience/Application:

    • Dey: His work is highly influential for designers, researchers, and developers in HCI, mobile computing, and ubiquitous computing, providing practical guidance for building context-aware systems.

    • Rodis: His work would be of greater interest to researchers and theoreticians in computer science, particularly those working on foundational aspects of distributed systems, formal methods, and the mathematical modeling of intelligent environments.

In essence, Dey laid the crucial groundwork by defining the core concepts and highlighting their practical significance, effectively kicking off the field of context-aware computing. Rodis, building upon this foundation, seeks to provide a deeper, more formal, and computationally robust understanding, addressing the complexities that arise as context-aware systems become more distributed and integrated into broader computational infrastructures. Their works represent different, yet complementary, stages in the maturation of context-awareness research.


Wednesday, 9 July 2025

No-code ignorance

Lately, I have seen platforms offering no-code AI begin to appear. The issue with this is that we already have trouble understanding how complex AI models work, and moving to no-code paradigms will only broaden our ignorance.

Friday, 13 September 2024

Generative AI and innovation

An interesting way to find out whether the outcome of your research is innovative is to ask AI. Whenever I ask an LLM about my research findings and methods, I get the response that my methods don't follow some mainstream approach and that I should align with common practice. Then I know I am on the right track and have probably developed something original, or even innovative.


AI currently cannot “think” outside the box; it only reproduces known facts and, of course, has not yet surpassed human intelligence, so we can sleep peacefully at night; for now.

Tuesday, 16 January 2024

Can we use AI to solve NP-Hard problems efficiently?

The growing research interest in Artificial Intelligence (AI) has led to the development of a plethora of methods and techniques that address difficult computational problems. The impressive progress in the field reasonably raises the question of whether we could use AI methods to solve traditionally hard problems efficiently, or even to answer the P vs NP question.

These days, the discussion on the possible relation between AI and NP-hardness revolves around two topics:

  • Whether AI can resolve the P vs NP question
  • Whether a proof that P = NP would boost AI algorithms

On the first topic, at the time this post is written AI does not exhibit synthetic thinking, so it is currently unable to answer important open problems such as this one. The second is really doubtful anyway.

Let’s discuss a more practical aspect of the problem. AI methods deviate from traditional deterministic algorithms and in many cases overcome inherent obstacles of traditional computing. Could AI methods, then, provide efficient solutions for NP-hard problems?

There is a huge literature on AI-assisted approximation methods for NP-hard problems that provide good, practical solutions, sometimes achieving near-optimal results even for really hard instances. So why not use them to actually solve these problems and provide exact solutions?

This reasonable question actually raises even more problems. Unlike traditional computing, where Alan Turing solved the problem of modeling computation, AI methods lack a general model of their operation. Researchers approach some machine learning methods experimentally and exhibit empirical evidence of their robustness, without solid theoretical foundations.

The intense research on these issues is gradually filling some of these gaps. In Section 4 of our latest paper we provide a modeling framework for the operation of Genetic Algorithms (GAs). Using it, we prove that GAs cannot in every case efficiently solve classes of problems that include NP-hard problems, while they are quite efficient at solving problems in P.

At least for now, it seems that AI can provide fast and practical approximate solutions for hard problems. But there is some pessimism when we seek exact solutions and confront NP-hardness.

Tuesday, 5 September 2023

Modeling and tuning genetic algorithms

Here is our latest work, which presents a solution to the problem of Service Chain Embedding. It is based on genetic algorithms and extends a previously published conference paper.


An interesting contribution of this paper is a modeling framework for the operation of genetic algorithms. Using this framework, we prove that NP-hard problems are not computed efficiently by genetic algorithms, and we define some properties of the problems that genetic algorithms do compute efficiently.

Another interesting contribution of this paper is a performance optimization mechanism for genetic algorithms which is itself based on genetic computing: you use one genetic algorithm to optimize the performance of another.


Sunday, 3 September 2023

Distributed Unsupervised Deep Learning

Our recently published paper, available here in open access, presents a deep learning method for network resource orchestration. There are a few features that make this method interesting.

It is built on a distributed multi-agent architecture.

It is based on Unsupervised Deep Learning, which is not a common feature of resource orchestration methods. The user essentially defines an objective, and the agents try to accomplish it by training and then running deep neural networks, without further interaction with the user.

The agents share the most efficient models among themselves, making the training process more efficient.

The neural networks are trained using genetic algorithms, an innovative feature for unsupervised learning systems that speeds up the training procedure. It is actually interesting to use one system to train another system without explicitly describing the training process.
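To make the idea more concrete, here is a minimal sketch of evolving the weights of a tiny neural network with a genetic algorithm against a user-defined objective, with no labelled data involved. It is my own toy illustration under simplified assumptions (a fixed two-layer network, mutation-only reproduction, a made-up objective), not the architecture or code of the paper.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;
import java.util.function.ToDoubleFunction;

// My own toy illustration (not the paper's code): training a tiny neural
// network with a genetic algorithm against a user-defined objective,
// with no labelled data involved.
public class NeuroEvolutionSketch {
    static final int IN = 4, HIDDEN = 6, OUT = 2;
    static final int GENOME = IN * HIDDEN + HIDDEN * OUT;   // number of weights
    static final Random RNG = new Random();

    // Forward pass of a tiny fully connected network with tanh activations.
    static double[] forward(double[] w, double[] x) {
        double[] h = new double[HIDDEN], y = new double[OUT];
        int k = 0;
        for (int j = 0; j < HIDDEN; j++) {
            double s = 0;
            for (int i = 0; i < IN; i++) s += w[k++] * x[i];
            h[j] = Math.tanh(s);
        }
        for (int o = 0; o < OUT; o++) {
            double s = 0;
            for (int j = 0; j < HIDDEN; j++) s += w[k++] * h[j];
            y[o] = Math.tanh(s);
        }
        return y;
    }

    // Evolve the weight vector so as to maximize an arbitrary objective.
    static double[] evolve(ToDoubleFunction<double[]> objective, int popSize, int generations) {
        Comparator<double[]> byObjective = Comparator.comparingDouble(objective::applyAsDouble);
        List<double[]> pop = new ArrayList<>();
        for (int i = 0; i < popSize; i++) {
            double[] w = new double[GENOME];
            for (int g = 0; g < GENOME; g++) w[g] = RNG.nextGaussian();
            pop.add(w);
        }
        for (int gen = 0; gen < generations; gen++) {
            pop.sort(byObjective.reversed());                        // selection: best quarter survives
            List<double[]> next = new ArrayList<>(pop.subList(0, popSize / 4));
            while (next.size() < popSize) {                          // refill by mutating survivors
                double[] child = next.get(RNG.nextInt(popSize / 4)).clone();
                child[RNG.nextInt(GENOME)] += 0.1 * RNG.nextGaussian();
                next.add(child);
            }
            pop = next;
        }
        pop.sort(byObjective.reversed());
        return pop.get(0);
    }

    public static void main(String[] args) {
        // Hypothetical objective: push the first output towards 1 on a fixed probe input.
        double[] probe = {0.5, -0.3, 0.8, 0.1};
        double[] weights = evolve(w -> forward(w, probe)[0], 50, 200);
        System.out.println("objective reached: " + forward(weights, probe)[0]);
    }
}
```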

We were able to test this method by running simulations on large-scale topologies. For this, we have built an efficient network simulator, released as an open source project.


https://doi.org/10.1109/ACCESS.2023.3308492

Wednesday, 23 August 2023

An application for reading BIG files

There are many times when I want to read a big text file but current text viewers cannot handle it.
So I decided to develop an application that would get me out of this trouble.
It is open source and released in this repository.
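The underlying idea, sketched below in a few lines of Java, is to never load the whole file: jump to the requested offset and read only a fixed-size chunk, so memory usage stays bounded no matter how large the file is. This is my own illustration, not the code of the repository.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

// Minimal sketch of reading an arbitrary chunk of a huge text file without
// loading the whole file into memory (illustration only, not the repository's code).
public class BigFileChunkReader {
    public static String readChunk(String path, long offset, int length) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
            long remaining = file.length() - offset;
            if (remaining <= 0) return "";
            byte[] buffer = new byte[(int) Math.min(length, remaining)];
            file.seek(offset);          // jump directly to the requested position
            file.readFully(buffer);     // read only this chunk
            return new String(buffer, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        // Show the first 4 KB of a (hypothetical) multi-gigabyte log file.
        System.out.println(readChunk("huge.log", 0, 4096));
    }
}
```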

Wednesday, 29 March 2023

On the Generative AI rush

It is weird how people react these days to Generative AI, especially ChatGPT. It was released a few months ago, and many people have already developed an addiction to it, while many others hate it or fear it passionately.

How can serious professionals depend on a chatbot for their daily work? From my perspective, they are either not serious or not professionals. Using such a great tool is reasonable, but suddenly becoming dependent on it out of the blue is not rational. When the service went down a few days ago, some people panicked. Really???

It is actually a great tool with a lot of programming potential. The claim that these capabilities will some day signal the end of programming is not accurate. Trivial programming passed away long ago; there are many tools and websites that help you build applications in seconds, and it is not only ChatGPT. But really innovative programming cannot be produced by such chatbots, which only use the knowledge they gained from their training and cannot, at least yet, create knowledge and science.

As for the fear about the use of AI: like any other technological achievement, it can be used for good or bad, for the best or the worst of humanity. You can use a knife to cook or to kill; in the same spirit, you can use AI to detect and treat cancer or to manipulate elections.



Friday, 10 February 2023

About ChatGPT

These days the talk of the town is ChatGPT and its potential uses. It really is a great tool, and it has impressed me, and everybody else, with the way it responds to plain-language questions.
Of course it has some flaws. There is some criticism of its math capabilities and of the accuracy of some information it provides, like historical dates and paper citations. But this doesn't really bother me; if I wanted math calculations I would use a calculator, and for dates and historical events I would prefer Wikipedia. There are more fundamental concerns about the use of ChatGPT, especially about its potential use in research, as many people claim (or fear) that generative AI could replace original research. Well, not yet.
The current model of ChatGPT doesn't have critical "thinking" and has trouble with reasoning and induction. I asked it some questions about how certain AI methods could be combined in an innovative way, like genetic algorithms and unsupervised learning. As there are no references to such combinations, ChatGPT discouraged the idea. I would expect an answer that reasons for or against the idea based on the features of the two methods.
Then I thought of asking a stupid and easy question, but it still failed. The question was why my favourite football team cannot win the Champions League. The obvious answer is that it is a weak team and cannot face the competition. But the model responded that it is just a language model and cannot answer. I had to guide it through some more questions to make it answer correctly.
The cause of these weaknesses must be the architecture of the model. Probably more interconnections among the layers of the model would improve its ability to reason and combine previous knowledge. But this is more of a future goal; currently it cannot replace original research, and we should probably cross-check its quite interesting answers.

Tuesday, 1 November 2022

Java programming is environmentally friendly

I recently came across this paper, which provides experimental evidence that programming in different languages has a different environmental footprint. It is reasonable to say that this is true, although I really believe that no one has ever considered this while programming.
The good news is that I don't have to give up my Java habits, as Java has a good environmental footprint. Also, this study gives me one more excuse to avoid Python, which has a bad environmental footprint.
Of course Python is great, but old habits die hard.

Friday, 26 August 2022

On the order of executing genetic procedures

The classical approach to genetic algorithm design is to generate an initial population and then, in every generation, execute Selection, Crossover and then Mutation, in that particular order. I often place Selection last, after Crossover and Mutation. There are pretty good reasons for following either approach, and the answer to which forms the best strategy is, as usual, case dependent.

By placing Selection first, you clean the population of weak solutions and then go on with the rest of the procedures. If the population contains lots of invalid or bad solutions with no hope of generating a good solution via Crossover or Mutation, this strategy is profitable. But if the initial population covers only a small part of the solution space, starting with Selection limits the possibilities of reaching a goal solution.

Placing Selection last enables the search of a larger part of the solution space before finally choosing the best solutions. Especially on NP-hard problems, where the solution space is large, this strategy first allows the exploration of a wide part of the space and then the cleanup, which is a more fruitful strategy.
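As a sketch of the two orderings, here is my own toy example on a trivial bit-counting fitness (not code from any of my papers). The only difference between the two strategies is where the selection step sits in the generation loop.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

// My own toy illustration (OneMax fitness: count the 1-bits) contrasting
// "Selection first" with "Selection last" in the generation loop.
public class SelectionOrderSketch {
    static final Random RNG = new Random();
    static final int LEN = 32, POP = 40;
    static final Comparator<int[]> BY_FITNESS = Comparator.comparingInt(SelectionOrderSketch::fitness);

    static int fitness(int[] c) { return Arrays.stream(c).sum(); }

    static List<int[]> select(List<int[]> pool, int keep) {
        pool.sort(BY_FITNESS.reversed());                     // strongest chromosomes first
        return new ArrayList<>(pool.subList(0, keep));
    }

    static List<int[]> crossover(List<int[]> pop) {
        List<int[]> offspring = new ArrayList<>();
        for (int i = 0; i + 1 < pop.size(); i += 2) {         // one-point crossover on pairs
            int cut = RNG.nextInt(LEN);
            int[] a = pop.get(i).clone(), b = pop.get(i + 1).clone();
            for (int j = cut; j < LEN; j++) { int t = a[j]; a[j] = b[j]; b[j] = t; }
            offspring.add(a);
            offspring.add(b);
        }
        return offspring;
    }

    static List<int[]> mutate(List<int[]> pop) {
        List<int[]> mutated = new ArrayList<>();
        for (int[] c : pop) {                                 // flip one random bit per chromosome
            int[] m = c.clone();
            int g = RNG.nextInt(LEN);
            m[g] = 1 - m[g];
            mutated.add(m);
        }
        return mutated;
    }

    // Classical order: clean the population first, then recombine and mutate the survivors.
    static List<int[]> selectionFirst(List<int[]> pop) {
        List<int[]> survivors = select(pop, POP / 2);
        List<int[]> next = new ArrayList<>(survivors);
        next.addAll(mutate(crossover(survivors)));
        return next;
    }

    // Alternative order: explore first via crossover and mutation, clean up only at the end.
    static List<int[]> selectionLast(List<int[]> pop) {
        List<int[]> pool = new ArrayList<>(pop);
        pool.addAll(crossover(pop));
        pool.addAll(mutate(pop));
        return select(pool, POP);
    }

    public static void main(String[] args) {
        List<int[]> pop = new ArrayList<>();
        for (int i = 0; i < POP; i++) {
            int[] c = new int[LEN];
            for (int j = 0; j < LEN; j++) c[j] = RNG.nextInt(2);
            pop.add(c);
        }
        for (int gen = 0; gen < 50; gen++) pop = selectionLast(pop);   // or selectionFirst(pop)
        System.out.println("best fitness: " + fitness(select(pop, 1).get(0)));
    }
}
```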






Tuesday, 21 December 2021

Edge Vector representation

Edge Vector representation is a novel method of representing graphs. It was introduced recently in the paper that I presented at ISCC ’21; also available here. The advantage of this representation is its minimal memory usage in comparison to competing representations. Also, encoding a graph in Edge Vector, or decoding the graph elements from the representation, is efficient, with polynomial complexity.

The open source code of the implementation of the method used in the paper may be found in my GitHub repository:

https://rodispantelis.github.io/EdgeVector/

I believe you will find it useful.

Sunday, 3 October 2021

Designing Genetic Algorithms

Recently I presented a conference paper at ISCC 2021 on the Service Function Chain Embedding problem; find it on IEEE Xplore or here. The problem is handled efficiently using a genetic algorithm. Here are a few empirical remarks about how to design efficient and effective GAs, which I gained from working on this paper.
The issue with designing GAs is that their operation is not sufficiently explained and the efficiency of any GA design is case dependent. Any innovation applied to a GA implementation may work well on one problem and fail on many others.


First, there is population generation. The objective of a GA is to approximate the goal solution of a problem among all the possible solutions in the problem's solution space. So, from the members of the population, we have to be able to generate the goal solution by applying the genetic procedures to the population. The initial population therefore has to be directed towards the part of the solution space that contains the goal, and for this reason we have to examine whether it is more efficient to generate the population heuristically rather than randomly, which is the common practice.
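A small sketch of the two initialization options (my own illustration; the heuristic construction is left as a hypothetical placeholder):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.function.Supplier;

// My own illustration of the two initialization strategies discussed above:
// purely random chromosomes versus chromosomes seeded by a problem-specific
// heuristic (the heuristic itself is a hypothetical Supplier).
public class PopulationInitSketch {
    static final Random RNG = new Random();

    // Common practice: sample chromosomes uniformly from the whole solution space.
    static List<int[]> randomInit(int popSize, int len) {
        List<int[]> pop = new ArrayList<>();
        for (int i = 0; i < popSize; i++) {
            int[] c = new int[len];
            for (int j = 0; j < len; j++) c[j] = RNG.nextInt(2);
            pop.add(c);
        }
        return pop;
    }

    // Alternative: build every chromosome with a heuristic that places it near the
    // region of the solution space expected to contain the goal solution.
    static List<int[]> heuristicInit(int popSize, Supplier<int[]> heuristicConstruction) {
        List<int[]> pop = new ArrayList<>();
        for (int i = 0; i < popSize; i++) pop.add(heuristicConstruction.get());
        return pop;
    }
}
```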


Preserving original chromosomes and best solutions. The procedures of crossover and mutation were originally designed to affect the members of the initial population. During crossover, the parent chromosomes generate offspring that may replace them in the population. Also, during mutation the genotype of a chromosome is altered, and this results in the replacement of the original solution that the chromosome represents by the new solution that the mutated chromosome represents. Moreover, the randomized operations of the GA may reject or replace a good solution.
Having this in mind, my proposal is to preserve the best solution generated during a generation regardless of how the new population is formed. Also, the generation of new offspring and new mutated chromosomes should not replace the previous ones. Add the new chromosomes to the population along with the older ones and let the selection procedure decide which will survive into the next generation.
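A sketch of this policy (my own illustration, not the paper's implementation): grow the pool with the new chromosomes, remember the elite, and let a randomized selection trim the pool afterwards.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Random;
import java.util.function.ToIntFunction;

// My own illustration of the policy described above: offspring and mutants are
// appended to the pool instead of replacing their parents, the best chromosome
// of the generation is preserved unconditionally, and a randomized tournament
// selection then shrinks the pool back to the population size.
public class NextGenerationSketch {
    static final Random RNG = new Random();

    static <T> List<T> nextGeneration(List<T> parents, List<T> offspring, List<T> mutants,
                                      ToIntFunction<T> fitness, int popSize) {
        // Grow the pool: the old chromosomes survive alongside the new ones.
        List<T> pool = new ArrayList<>(parents);
        pool.addAll(offspring);
        pool.addAll(mutants);

        // Remember the best chromosome seen in this generation (elitism).
        T elite = Collections.max(pool, Comparator.comparingInt(fitness));

        // Randomized tournament selection decides who survives ...
        List<T> next = new ArrayList<>();
        next.add(elite);                    // ... but the elite is never lost.
        while (next.size() < popSize) {
            T a = pool.get(RNG.nextInt(pool.size()));
            T b = pool.get(RNG.nextInt(pool.size()));
            next.add(fitness.applyAsInt(a) >= fitness.applyAsInt(b) ? a : b);
        }
        return next;
    }
}
```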


Premature convergence.
There are many reasons for the premature convergence of the population to an undesirable solution. One technique for limiting this phenomenon and achieving more stable behavior is to execute the algorithm multiple times and further process all the outputs. The solutions output by all the executions may be combined to generate a probably better solution (this was my approach), or you may just pick the best one as the final output.


Parameter tuning. GAs are multiparametric algorithms and the values of these parameters determine their performance; the parameters are the number of generations, the population size, and the crossover and mutation probabilities. There are two ways to determine the best values for these parameters: either extensive experimentation or an optimization procedure like the one described in my earlier post.

Monday, 21 June 2021

How to tune up the parameters of a multiparametric algorithm

Genetic algorithms are multiparametric procedures. Their operation depends on a variety of parameters; these are usually the size of the initial population, the number of generations, the probability thresholds that define the crossover and mutation procedures, and sometimes heuristics that some developers use. One of the open problems in genetic algorithms is the determination of the optimal values for these parameters, also called parameter tuning.
Like genetic algorithms, there are other multiparametric algorithms where the determination of the optimal values for their parameters is crucial for their operation. In all these cases, sensitivity analysis is used for reaching optimality.
As parameter tuning is actually an optimization problem and genetic algorithms are optimization techniques, I thought it would be interesting to build a genetic algorithm for parameter tuning. The result of this work may be found here and it is released as an open source project.
The tuning algorithm treats the program that implements some multiparametric algorithm as a black box. The internal operations of the program are not examined; it only considers the output of the program given some valuation of its parameters. The population of the tuning algorithm consists of a set of such valuations. It is an effective approach, as I have tested it extensively.
It is a very interesting approach because we do not have to consider the functionality of the program under study; the tuning algorithm adapts its functionality to the program. And this is the essence of artificial intelligence: the algorithms have to adapt to their subject of study.
When the program under study itself implements a genetic algorithm, then we have a genetic algorithm that tunes another genetic algorithm, which is a cool idea.
Of course the tuning algorithm is also multiparametric, but for it we do not seek optimality. It is enough to set some high values for its parameters, wait some time until it terminates, and obtain the optimal values for the program. Then run the program with optimal performance.
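Here is a compressed sketch of the black-box idea (my own illustration; the actual implementation is in the repository linked below): each chromosome is a valuation of the target program's parameters, and its fitness is whatever score the program reports when run with that valuation.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Random;
import java.util.function.ToDoubleFunction;

// My own compressed illustration of the black-box idea (not the code of the linked
// project): a chromosome is a valuation of the target program's parameters, and its
// fitness is simply the output the program reports when run with that valuation.
public class BlackBoxTunerSketch {
    static final Random RNG = new Random();

    static double[] tune(ToDoubleFunction<double[]> runProgram,   // the program as a black box
                         double[] lower, double[] upper,          // allowed range per parameter
                         int popSize, int generations) {
        Comparator<double[]> byScore = Comparator.comparingDouble(runProgram::applyAsDouble);
        int dim = lower.length;
        List<double[]> pop = new ArrayList<>();
        for (int i = 0; i < popSize; i++) pop.add(randomValuation(lower, upper));

        for (int gen = 0; gen < generations; gen++) {
            // Selection: keep the valuations for which the program performs best.
            pop.sort(byScore.reversed());
            List<double[]> next = new ArrayList<>(pop.subList(0, popSize / 4));
            // Crossover and mutation: recombine surviving valuations and perturb them.
            while (next.size() < popSize) {
                double[] a = next.get(RNG.nextInt(popSize / 4));
                double[] b = next.get(RNG.nextInt(popSize / 4));
                double[] child = new double[dim];
                for (int d = 0; d < dim; d++) {
                    child[d] = RNG.nextBoolean() ? a[d] : b[d];
                    if (RNG.nextDouble() < 0.1)                   // occasional mutation within range
                        child[d] = lower[d] + RNG.nextDouble() * (upper[d] - lower[d]);
                }
                next.add(child);
            }
            pop = next;
        }
        pop.sort(byScore.reversed());
        return pop.get(0);
    }

    static double[] randomValuation(double[] lower, double[] upper) {
        double[] v = new double[lower.length];
        for (int d = 0; d < v.length; d++)
            v[d] = lower[d] + RNG.nextDouble() * (upper[d] - lower[d]);
        return v;
    }

    public static void main(String[] args) {
        // Hypothetical black box: pretend the program's score peaks at parameters (3.0, 0.7).
        ToDoubleFunction<double[]> program =
                p -> -Math.pow(p[0] - 3.0, 2) - Math.pow(p[1] - 0.7, 2);
        double[] best = tune(program, new double[]{0, 0}, new double[]{10, 1}, 40, 100);
        System.out.println("tuned parameters: " + Arrays.toString(best));
    }
}
```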

 

 https://github.com/rodispantelis/GeneticAlgorithms