Publications by Year: 2022

2022
Li, X., Lin, C.-L., Simos, T. E., Mourtas, S. D., & Katsikis, V. N. (2022). Computation of Time-Varying {2,3}- and {2,4}-Inverses through Zeroing Neural Networks. Mathematics, 10.
This paper investigates the problem of computing the time-varying {2,3}- and {2,4}-inverses through the zeroing neural network (ZNN) method, which is presently regarded as a state-of-the-art method for computing the time-varying matrix Moore–Penrose inverse. As a result, two new ZNN models, dubbed ZNN23I and ZNN24I, for the computation of the time-varying {2,3}- and {2,4}-inverses, respectively, are introduced, and the effectiveness of these models is evaluated. Numerical experiments investigate and confirm the efficiency of the proposed ZNN models for computing the time-varying {2,3}- and {2,4}-inverses.
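To make the ZNN recipe referred to throughout this page concrete, the following minimal Python sketch (not the authors' code) applies the generic design, an error function E(t) forced to obey dE/dt = -γE(t), to the simplest related task of tracking a time-varying matrix inverse. The matrix A(t), gain γ, and Euler integration are illustrative assumptions only; the ZNN23I and ZNN24I models of the paper use error functions built from the defining equations of the {2,3}- and {2,4}-inverses instead.

```python
# Minimal sketch of the generic ZNN design principle on a toy problem:
# track the time-varying inverse X(t) of A(t).  Not the authors' models.
import numpy as np

def A(t):
    # hypothetical smoothly varying, nonsingular 2x2 input matrix
    return np.array([[3 + np.sin(t), np.cos(t)],
                     [-np.cos(t),    3 + np.sin(t)]])

def A_dot(t, h=1e-6):
    # numerical time derivative of A(t)
    return (A(t + h) - A(t - h)) / (2 * h)

gamma, dt, T = 10.0, 1e-3, 5.0   # design gain, Euler step, time horizon (assumed)
X = np.eye(2)                    # initial state of the ZNN
for k in range(int(T / dt)):
    t = k * dt
    E = A(t) @ X - np.eye(2)     # error function E(t) = A(t)X(t) - I
    # impose dE/dt = -gamma * E, i.e.  A Xdot = -A_dot X - gamma E
    X_dot = np.linalg.solve(A(t), -A_dot(t) @ X - gamma * E)
    X = X + dt * X_dot           # forward-Euler integration of the dynamic

print(np.linalg.norm(A(T) @ X - np.eye(2)))  # residual should be close to zero
```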
Jerbi, H., Alharbi, H., Omri, M., Ladhar, L., Simos, T. E., Mourtas, S. D., & Katsikis, V. N. (2022). Towards Higher-Order Zeroing Neural Network Dynamics for Solving Time-Varying Algebraic Riccati Equations. Mathematics, 10.
One of the most often used approaches for approximating various matrix equation problems is the hyperpower family of iterative methods with arbitrary convergence order, whereas the zeroing neural network (ZNN) is a type of neural dynamics intended for handling time-varying problems. On the basis of the analogy between these two approaches, a family of ZNN models that correlate with the hyperpower iterative methods is defined. These models, known as higher-order ZNN models (HOZNN), can be used to find real symmetric solutions of time-varying algebraic Riccati equations. Furthermore, a noise-handling HOZNN (NHOZNN) class of dynamical systems is introduced. The traditional ZNN and HOZNN dynamic flows are compared theoretically and numerically.
Stanimirović, P. S., Mourtas, S. D., Katsikis, V. N., Kazakovtsev, L. A., & Krutikov, V. N. (2022). Recurrent Neural Network Models Based on Optimization Methods. Mathematics, 10.
Many researchers have addressed problems involving time-varying (TV) general linear matrix equations (GLMEs) because of their importance in science and engineering. This research addresses the solution of TV GLMEs using the zeroing neural network (ZNN) design. Five new ZNN models based on novel error functions arising from gradient-descent and Newton optimization methods are presented and compared to each other and to the standard ZNN design. Pseudoinversion is involved in four of the proposed ZNN models, while three of them are related to Newton's optimization method. Heterogeneous numerical examples show that all models successfully solve TV GLMEs, although their effectiveness varies and depends on the input matrix.
Mourtas, S. D., Katsikis, V. N., Drakonakis, E., & Kotsios, S. (2022). Stabilization of Stochastic Exchange Rate Dynamics Under Central Bank Intervention Using Neuronets. International Journal of Information Technology & Decision Making, 1-29.
Exchange rate dynamics affect national economies because fluctuations in currency prices distort their economic activity. These dynamics are crucial for countries with a trade economy that seek to maintain an optimal exchange rate policy. Due to the difficulty of predicting participants' behavior in some complex economic systems, which might throw the system into chaos, a novel stochastic exchange rate dynamics (SERD) model is introduced and investigated in this paper. Furthermore, a neural network approach is proposed and examined as a chaos-control method to address the problem of stabilizing SERD through central bank interventions. Derived from power activation feed-forward neuronets, a 2-input weights-and-structure-determination-based neuronet (2I-WASDBN) model for controlling chaos in SERD under central bank intervention is presented in this paper. Six simulation experiments on stabilizing the chaotic behavior of the SERD model show that the 2I-WASDBN model outperforms other well-performing neural network models and is more effective than traditional methods for controlling chaos. By examining the volume of necessary intervention predicted by the 2I-WASDBN model, central banks can better comprehend exchange rate fluctuations and, in conjunction with their monetary policies, make more precise decisions regarding the strategy of their interventions.
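As a rough illustration of the weights-and-structure-determination (WASD) idea behind the 2I-WASDBN, the sketch below, a simplification over assumed toy data, determines the output weights of a power-activation neuronet directly by pseudoinversion and picks the structure by validation error. The real model uses two inputs and the authors' own structure-determination rules, none of which are reproduced here.

```python
# Hedged sketch of the WASD idea: output weights of a power-activation neuronet
# come directly from a pseudoinverse (no back-propagation), and the structure
# (number of power neurons) is chosen by validation error.  Toy data only.
import numpy as np

def power_features(x, degree):
    """Hidden layer of power activations: x^0, x^1, ..., x^degree."""
    return np.vander(x, degree + 1, increasing=True)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 300)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)   # toy target
idx = rng.permutation(x.size)
tr, va = idx[:200], idx[200:]                                     # train / validation split

best = (None, np.inf)
for degree in range(1, 16):                      # crude structure-determination loop
    H = power_features(x[tr], degree)
    w = np.linalg.pinv(H) @ y[tr]                # weights determined in one shot
    val_err = np.mean((power_features(x[va], degree) @ w - y[va]) ** 2)
    if val_err < best[1]:
        best = (degree, val_err)

print("chosen number of power neurons:", best[0] + 1, "validation MSE:", best[1])
```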
Kovalnogov, V. N., Fedorov, R. V., Generalov, D. A., Chukalin, A. V., Katsikis, V. N., Mourtas, S. D., & Simos, T. E. (2022). Portfolio Insurance through Error-Correction Neural Networks. Mathematics, 10.
Minimum-cost portfolio insurance (MCPI) is a well-known investment strategy that tries to limit the losses a portfolio may incur as stocks decrease in price, without requiring the portfolio manager to sell those stocks. In this research, we define and study the time-varying MCPI problem as a time-varying linear programming (TVLP) problem. More precisely, using real-world datasets, three different error-correction neural networks are employed to address this financial TVLP problem in continuous time. These neural network solvers are the zeroing neural network (ZNN), the linear-variational-inequality primal-dual neural network (LVI-PDNN), and the simplified LVI-PDNN (S-LVI-PDNN). The neural network solvers are tested using real-world data on portfolios of up to 20 stocks, and the results show that they are capable of solving the financial problem efficiently, in some cases more than five times faster than traditional methods, though their accuracy declines as the size of the portfolio increases. This demonstrates the speed and accuracy of neural network solvers, showing their superiority over traditional methods in moderate-size portfolios. To promote and support the outcomes of this research, we created two MATLAB repositories, for the interested user, that are publicly accessible on GitHub.
Liao, B., Hua, C., Cao, X., Katsikis, V. N., & Li, S. (2022). Complex Noise-Resistant Zeroing Neural Network for Computing Complex Time-Dependent Lyapunov Equation. Mathematics, 10.
The complex time-dependent Lyapunov equation (CTDLE), as an important means of stability analysis of control systems, has been extensively employed in mathematics and engineering application fields. Recursive neural networks (RNNs) have been reported as an effective method for solving the CTDLE. In previous work, zeroing neural networks (ZNNs) have been established to find the accurate solution of the time-dependent Lyapunov equation (TDLE) under noise-free conditions. However, noise is inevitable in the actual implementation process. In order to suppress the interference of various noises in practical applications, a complex noise-resistant ZNN (CNRZNN) model is proposed in this paper and employed for the CTDLE solution. Additionally, the convergence and robustness of the CNRZNN model are analyzed and proved theoretically. For verification and comparison, three experiments and the existing noise-tolerant ZNN (NTZNN) model are introduced to investigate the effectiveness, convergence and robustness of the CNRZNN model. Compared with the NTZNN model, the CNRZNN model is more general and more robust. Specifically, the NTZNN model is a special form of the CNRZNN model, and the residual error of the CNRZNN model can converge rapidly and stably to the order of 10^{-5} when solving the CTDLE under complex linear noises, which is much lower than the order of 10^{-1} attained by the NTZNN model. Analogously, under complex quadratic noises, the residual error of the CNRZNN model converges quickly and stably to 2‖A‖_F/ζ^3, while the residual error of the NTZNN model diverges.
Simos, T. E., Katsikis, V. N., & Mourtas, S. D. (2022). A multi-input with multi-function activated weights and structure determination neuronet for classification problems and applications in firm fraud and loan approval. Applied Soft Computing, 127, 109351.
Neuronets trained by a weights-and-structure-determination (WASD) algorithm are known to resolve the shortcomings of traditional back-propagation neuronets, such as slow training speed and entrapment in local minima. A multi-input multi-function activated WASD neuronet (MMA-WASDN) model is introduced in this paper, combined with a novel multi-function activated WASD (MA-WASD) algorithm, for handling binary classification problems. Using multiple power activation functions, the MA-WASD algorithm finds the optimal weights and structure of the MMA-WASDN and uses cross-validation to address bias and avoid getting stuck in local optima during the training process. As a result, neuronets trained with the MA-WASD algorithm have higher precision and accuracy than neuronets trained with traditional WASD algorithms. Applications to firm fraud and loan approval classification validate the MMA-WASDN model and demonstrate its outstanding learning and prediction performance. Since these applications use real-world datasets that include strings and missing values, an algorithmic method for preparing the data is also suggested to make them manageable by the MMA-WASDN. A comparison of the MMA-WASDN model to five other high-performing neuronet models is included, as well as a MATLAB package, publicly available through GitHub, to support and promote the findings of this research.
Simos, T. E., Katsikis, V. N., Mourtas, S. D., & Stanimirović, P. S. (2022). Unique non-negative definite solution of the time-varying algebraic Riccati equations with applications to stabilization of LTV systems. Mathematics and Computers in Simulation, 202, 164-180.
In the context of infinite-horizon optimal control problems, the algebraic Riccati equation (ARE) arises when the stability of linear time-varying (LTV) systems is investigated. This research applies the zeroing neural network (ZNN) approach to the time-varying eigendecomposition-based ARE (TVE-ARE) problem and introduces the resulting ZNNTVE-ARE model for solving it. Since the eigendecomposition approach is employed, the ZNNTVE-ARE model is designed to produce only the unique nonnegative definite solution of the time-varying ARE (TV-ARE) problem. It is worth mentioning that this model follows the principles of the ZNN method, which converges exponentially with time to a theoretical time-varying solution. The ZNNTVE-ARE model can also produce the eigenvector solution of the continuous-time Lyapunov equation (CLE), since the Lyapunov equation is a particular case of the ARE. Moreover, this paper introduces a hybrid ZNN model (HFTZNN-LTVSS) for stabilizing LTV systems, in which the ZNNTVE-ARE model is employed to solve the continuous-time ARE (CARE) related to the optimal control law. Experiments show that the ZNNTVE-ARE and HFTZNN-LTVSS models are both effective, and that the HFTZNN-LTVSS model always provides slightly better asymptotic stability than the models from which it is derived.
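For orientation, the sketch below uses SciPy's standard solver to compute the unique nonnegative definite solution of a static continuous-time ARE at a single frozen instant, which is the point-wise problem that the ZNNTVE-ARE dynamic tracks continuously; the system matrices are arbitrary toy values, not taken from the paper.

```python
# Hedged illustration: one static continuous-time ARE solve, used here only as a
# point-wise baseline for what a TV-ARE dynamic would track over time.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # toy system frozen at one instant
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

P = solve_continuous_are(A, B, Q, R)         # unique nonnegative definite solution
residual = A.T @ P + P @ A - P @ B @ np.linalg.inv(R) @ B.T @ P + Q
print(np.linalg.norm(residual))              # ~0, confirming P solves the ARE
K = np.linalg.inv(R) @ B.T @ P               # optimal state-feedback gain u = -Kx
print(K)
```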
Jiang, W., Lin, C.-L., Katsikis, V. N., Mourtas, S. D., Stanimirović, P. S., & Simos, T. E. (2022). Zeroing Neural Network Approaches Based on Direct and Indirect Methods for Solving the Yang–Baxter-like Matrix Equation. Mathematics, 10.
This research introduces three novel zeroing neural network (ZNN) models for addressing the time-varying Yang–Baxter-like matrix equation (TV-YBLME) with arbitrary (regular or singular) real time-varying (TV) input matrices in continuous time. One ZNN dynamic utilizes error matrices directly arising from the equation involved in the TV-YBLME. Moreover, two ZNN models are proposed using basic properties of the YBLME, such as the splitting of the YBLME and sufficient conditions for a matrix to solve the YBLME. The Tikhonov regularization principle enables addressing the TV-YBLME with an arbitrary input real TV matrix. Numerical experiments, including nonsingular and singular TV input matrices, show that the suggested models deal effectively with the TV-YBLME.
Simos, T. E., Katsikis, V. N., Mourtas, S. D., & Stanimirović, P. S. (2022). Finite-time convergent zeroing neural network for solving time-varying algebraic Riccati equations. Journal of the Franklin Institute.
Various forms of the algebraic Riccati equation (ARE) have been widely used to investigate the stability of nonlinear systems in the control field. In this paper, the time-varying ARE (TV-ARE) and linear time-varying (LTV) system stabilization problems are investigated by employing zeroing neural networks (ZNNs). In order to solve the TV-ARE problem, two models are developed: the ZNNTV-ARE model, which follows the principles of the original ZNN method, and the FTZNNTV-ARE model, which follows the finite-time ZNN (FTZNN) dynamical evolution. In addition, two hybrid ZNN models are proposed for LTV system stabilization, which combine the ZNNTV-ARE and FTZNNTV-ARE design rules. Note that, instead of the infinite-time exponential convergence specific to the ZNNTV-ARE design, the structure of the proposed FTZNNTV-ARE dynamic is based on a new evolution formula that is able to converge to a theoretical solution in finite time. Furthermore, since we are only interested in real symmetric solutions of the TV-ARE, the ZNNTV-ARE and FTZNNTV-ARE models are designed to produce such solutions. Numerical findings, one of which includes an application to LTV system stabilization, confirm the effectiveness of the introduced dynamical evolutions.
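The finite-time behaviour mentioned above typically comes from a sign-bi-power activation inside the ZNN evolution formula. The scalar sketch below contrasts the classical linear activation (exponential decay only) with a sign-bi-power activation on the error dynamics ė = -γφ(e); the exponent, gain, and constants are illustrative assumptions, and the exact activation used in the FTZNNTV-ARE model may differ.

```python
# Hedged sketch: sign-bi-power activation commonly used in finite-time ZNN designs,
# shown on a scalar error e(t) with de/dt = -gamma * phi(e).  Constants are assumed.
import numpy as np

def sbp(e, r=0.5):
    """Sign-bi-power activation: |e|^r sign(e) + |e|^(1/r) sign(e)."""
    return np.sign(e) * (np.abs(e) ** r + np.abs(e) ** (1.0 / r))

gamma, dt = 2.0, 1e-4
e_lin, e_sbp, t = 1.0, 1.0, 0.0
while e_sbp > 1e-12 and t < 10.0:          # integrate both error dynamics
    e_lin += dt * (-gamma * e_lin)         # classical ZNN: exponential decay only
    e_sbp += dt * (-gamma * sbp(e_sbp))    # finite-time ZNN: reaches ~0 in finite time
    t += dt
print(t, e_lin, e_sbp)                     # e_sbp hits ~0 while e_lin is still > 0
```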
Mourtas, S. D., & Katsikis, V. N. (2022). Exploiting the Black-Litterman framework through error-correction neural networks. Neurocomputing, 498, 43-58.
The Black-Litterman (BL) model is a particularly essential analytical tool for effective portfolio management in the financial services sector, since it enables investment analysts to integrate investor views into market equilibrium returns. In this research, we define and study the continuous-time BL portfolio optimization (CTBLPO) problem as a time-varying quadratic programming (TVQP) problem. The investor's views in the CTBLPO problem are regarded as a forecasting problem, and they are generated by a novel neural network (NN) model. More precisely, employing a novel multi-function activated weights-and-structure-determination for time-series (MAWTS) algorithm, a 3-layer feed-forward NN model, called MAWTSNN, is proposed for handling time-series modeling and forecasting problems. Then, using real-world datasets, the CTBLPO problem is approached by two different TVQP NN solvers. These solvers are the zeroing NN (ZNN) and the linear-variational-inequality primal–dual NN (LVI-PDNN). The experimental findings illustrate and compare the performances of the ZNN and LVI-PDNN in three different portfolio configurations, and indicate that the MAWTSNN is an excellent alternative to the traditional approaches. To promote and support the outcomes of this research, we created two MATLAB repositories, for the interested user, that are publicly accessible on GitHub.
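For readers unfamiliar with the Black-Litterman step that the CTBLPO problem builds on, the sketch below computes the standard static BL posterior expected returns from equilibrium returns and a single investor view; all numbers are toy values, and the continuous-time formulation, MAWTSNN forecasts, and TVQP solvers of the paper are not reproduced here.

```python
# Hedged sketch of the static Black-Litterman blending step: posterior returns
# combine equilibrium returns pi with investor views (P, q).  Toy numbers only.
import numpy as np

Sigma = np.array([[0.040, 0.006], [0.006, 0.090]])   # asset covariance (toy)
pi    = np.array([0.05, 0.07])                        # equilibrium (market) returns
tau   = 0.05                                          # uncertainty scaling
P     = np.array([[1.0, -1.0]])                       # one view: asset 1 outperforms asset 2
q     = np.array([0.02])                              # ... by 2%
Omega = np.array([[0.0005]])                          # view uncertainty

tS_inv = np.linalg.inv(tau * Sigma)
A = tS_inv + P.T @ np.linalg.inv(Omega) @ P
b = tS_inv @ pi + P.T @ np.linalg.inv(Omega) @ q
mu_bl = np.linalg.solve(A, b)                         # posterior expected returns
print(mu_bl)
```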
Stanujkic, D., Karabasevic, D., Popovic, G., Smarandache, F., Stanimirović, P. S., Saračević, M., & Katsikis, V. N. (2022). A Single Valued Neutrosophic Extension of the Simple WISP Method. Informatica, 1–17. Vilnius University Institute of Data Science and Digital Technologies.
Mosić, D., Stanimirović, P. S., & Katsikis, V. N. (2022). Properties of the CMP inverse and its computation. Computational and Applied Mathematics, 41(4), 131.
This manuscript aims to establish various representations for the CMP inverse. Some expressions for the CMP inverse of an appropriate upper block triangular matrix are developed. The successive matrix squaring algorithm and a method based on Gauss–Jordan elimination are considered for computing the CMP inverse. As an application, the solvability of several restricted systems of linear equations (RSoLE) is investigated in terms of the CMP inverse. Illustrative examples, including examples on randomly generated large-scale matrices, are presented.
Simos, T. E., Katsikis, V. N., Mourtas, S. D., Stanimirović, P. S., & Gerontitis, D. (2022). A higher-order zeroing neural network for pseudoinversion of an arbitrary time-varying matrix with applications to mobile object localization. Information Sciences, 600, 226-238.
The hyperpower family of iterative methods with arbitrary convergence order is one of the most used methods for estimating matrix inverses and generalized inverses, whereas the zeroing neural network (ZNN) is a type of neural dynamics developed to solve time-varying problems in science and engineering. Since the discretization of ZNN dynamics leads to the Newton iterative method for solving the matrix inversion and generalized inversion, this study proposes and investigates a family of ZNN dynamical models known as higher-order ZNN (HOZNN) models, which are defined on the basis of correlation with hyperpower iterations of arbitrary order. Because the HOZNN dynamical system requires error function powers, it is only applicable to square error functions. In this paper, we extend the original HOZNN dynamic flows to arbitrary time-dependent real matrices, both square and rectangular, and sign-bi-power activation is used to investigate the finite-time convergence of arbitrary order HOZNN dynamics. The proposed models are theoretically and numerically tested under three activation functions, and an application in solving the angle-of-arrival (AoA) localization problem demonstrates the effectiveness of the proposed design.
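The hyperpower family that the HOZNN models are patterned on can be summarised by the iteration X_{k+1} = X_k(I + R_k + ... + R_k^{p-1}) with R_k = I - A X_k, whose order-2 member is the Newton-Schulz scheme. The sketch below is a plain static implementation under an assumed safe starting point, not the continuous-time HOZNN dynamic itself.

```python
# Hedged sketch of the hyperpower iterative family (order p), which converges to
# the Moore-Penrose pseudoinverse for a suitable start X_0.  Static version only.
import numpy as np

def hyperpower_pinv(A, p=3, iters=60):
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # safe initial guess
    I = np.eye(A.shape[0])
    for _ in range(iters):
        R = I - A @ X                     # residual of the current approximation
        S = I.copy()
        for _ in range(p - 1):            # S = I + R + R^2 + ... + R^(p-1)
            S = I + R @ S
        X = X @ S
    return X

A = np.random.default_rng(1).standard_normal((4, 3))   # rectangular toy input
X = hyperpower_pinv(A, p=3)
print(np.linalg.norm(X - np.linalg.pinv(A)))            # should be ~0
```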
Kornilova, M., Kovalnogov, V., Fedorov, R., Zamaleev, M., Katsikis, V. N., Mourtas, S. D., & Simos, T. E. (2022). Zeroing Neural Network for Pseudoinversion of an Arbitrary Time-Varying Matrix Based on Singular Value Decomposition. Mathematics, 10.
Many researchers have investigated the time-varying (TV) matrix pseudoinverse problem in recent years, owing to its importance in addressing TV problems in science and engineering. In this paper, the problem of calculating the inverse or pseudoinverse of an arbitrary TV real matrix is considered and addressed using the singular value decomposition (SVD) and the zeroing neural network (ZNN) approaches. Since SVD is frequently used to compute the inverse or pseudoinverse of a matrix, this research proposes a new ZNN model based on the SVD method as well as the technique of Tikhonov regularization, for solving the problem in continuous time. Numerical experiments, involving the pseudoinversion of square, rectangular, singular, and nonsingular input matrices, indicate that the proposed models are effective for solving the problem of the inversion or pseudoinversion of time-varying matrices.
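A static flavour of the SVD-plus-Tikhonov route described above is sketched below: with A = U diag(s) Vᵀ, the regularized pseudoinverse V diag(s/(s² + λ)) Uᵀ is well defined for singular or rectangular inputs and approaches pinv(A) as λ → 0. The matrix and λ are illustrative assumptions; the paper's model evolves this computation in continuous time via the ZNN design.

```python
# Hedged sketch of a Tikhonov-regularized pseudoinverse computed through the SVD.
import numpy as np

def tikhonov_pinv(A, lam=1e-8):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ np.diag(s / (s ** 2 + lam)) @ U.T   # V diag(s/(s^2+lam)) U^T

A = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 1.0]])    # rectangular toy input
print(np.linalg.norm(tikhonov_pinv(A) - np.linalg.pinv(A)))  # small for small lam
```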
Khan, A. T., Cao, X., Brajevic, I., Stanimirovic, P. S., Katsikis, V. N., & Li, S. (2022). Non-linear Activated Beetle Antennae Search: A novel technique for non-convex tax-aware portfolio optimization problem. Expert Systems with Applications, 116631.
The non-convex tax-aware portfolio optimization problem is traditionally approximated as a convex problem, which compromises the quality of the solution and converges to a local minimum instead of the global minimum. In this paper, we propose a non-deterministic meta-heuristic algorithm called Non-linear Activated Beetle Antennae Search (NABAS). NABAS explores the search space guided by a gradient estimate until it falls below a threshold known as the "Activation Threshold", which increases its convergence rate and avoids local minima. To test the validity of NABAS, we formulated an optimization-based tax-aware portfolio problem. The objective is to maximize the profit, minimize the risk and tax liabilities, and fulfill other constraints. We collected stock data of 20 companies from the NASDAQ stock market and performed a simulation using MATLAB. A comprehensive comparison is made with the BAS, PSO, and GA algorithms. The results also show that the non-convex formulation yields a better-optimized portfolio than the convex one.
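For context, the sketch below implements a plain Beetle Antennae Search (BAS) minimization step, the baseline that NABAS extends with its nonlinear activation threshold on the gradient estimate; the objective, decay constants, and stopping rule are illustrative assumptions, not the tax-aware portfolio formulation of the paper.

```python
# Hedged sketch of a basic Beetle Antennae Search (BAS) loop for minimization.
import numpy as np

def bas_minimize(f, x0, iters=200, d=1.0, step=1.0, decay=0.97, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        b = rng.standard_normal(x.size)
        b /= np.linalg.norm(b)                    # random antenna direction
        f_r, f_l = f(x + d * b), f(x - d * b)     # sense both antennae
        x = x - step * b * np.sign(f_r - f_l)     # step toward the better side
        d, step = max(d * decay, 1e-3), max(step * decay, 1e-4)
    return x

sphere = lambda x: float(np.sum(x ** 2))          # toy objective (illustrative only)
print(bas_minimize(sphere, x0=[3.0, -2.0]))       # should approach the origin
```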
Mourtas, S. D., Katsikis, V. N., & Kasimis, C. (2022). Feedback Control Systems Stabilization Using a Bio-inspired Neural Network. EAI Endorsed Transactions on AI and Robotics, 1, 1–13.
Khan, A. T., Cao, X., Li, S., Katsikis, V. N., Brajevic, I., & Stanimirovic, P. S. (2022). Fraud detection in publicly traded U.S firms using Beetle Antennae Search: A machine learning approach. Expert Systems with Applications, 191, 116148.
In this paper, we present a fraud detection framework for publicly traded firms using an optimization approach integrated with a meta-heuristic algorithm known as Beetle Antennae Search (BAS). Existing techniques rely on human resources, such as financial experts and audit teams, to detect ambiguities or financial fraud in companies based on financial and non-financial ratios; this is a laborious, time-consuming, and error-prone task. We designed an optimization problem to minimize a loss function based on a non-linear decision function, combined with the maximization of recall (Sensitivity and Specificity), and solved it iteratively using BAS. BAS is a nature-inspired algorithm that mimics the beetle's food-searching behavior. It uses a single searching particle to find an optimal solution to the optimization problem in n-dimensional space. We used a benchmark dataset collected from SEC's Accounting and Auditing Enforcement Releases (AAERs) for the simulation. It includes 28 raw financial variables, with data collected between 1991 and 2008. For comparison, we evaluated the performance of BAS against the recently proposed approach using RUSBoost, as well as some additional algorithms, i.e., Logit and SVM-FK. The results showed that BAS is comparable with these algorithms and outperformed them in time consumption.
Katsikis, V. N., Stanimirovic, P. S., Mourtas, S. D., Li, S., & Cao, X. (2022). Towards Higher Order Dynamical Systems (Book Chapter). In I. Kyrchei (Ed.), Generalized Inverses - Algorithms and Applications (1st ed., pp. 207-239). Nova Science Publishers.
Katsikis, V. N., Mourtas, S. D., Stanimirović, P. S., Li, S., & Cao, X. (2022). Time-varying mean–variance portfolio selection problem solving via LVI-PDNN. Computers and Operations Research, 138, 105582.
It is widely acknowledged that the Markowitz mean–variance portfolio selection is a very important investment strategy. One approach to solving the static mean–variance portfolio selection (MVPS) problem is based on the usage of quadratic programming (QP) methods. In this article, we define and study the time-varying mean–variance portfolio selection (TV-MVPS) problem, both for a fixed target portfolio expected return and for all possible portfolio expected returns, as a time-varying quadratic programming (TVQP) problem. The TV-MVPS also comprises the properties of a moving average. These properties make the TV-MVPS an even more powerful analysis tool, suitable for evaluating investments and identifying trading opportunities across a continuous-time period. Using an originally developed linear-variational-inequality primal–dual neural network (LVI-PDNN), we also provide an online solution to the static QP problem. To the best of our knowledge, this is an innovative approach that incorporates robust neural network techniques to provide an online, and thus more realistic, solution to the TV-MVPS problem. In this way, we present an online solution to a time-varying financial problem while eliminating the limitations of static methods. It has been shown that, when applied to TVQP problems subject to equality, inequality and boundary constraints simultaneously, the LVI-PDNN approaches the theoretical solution. Our approach is also verified by numerical experiments and computer simulations as an excellent alternative to conventional MATLAB methods.
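As a reference point for the time-varying problem above, the sketch below solves one static snapshot of the equality-constrained Markowitz QP through its KKT system; the covariance matrix, returns, and target are toy values, and inequality or box constraints, which are what the LVI-PDNN handles online, are deliberately omitted.

```python
# Hedged sketch of the static mean-variance QP that TV-MVPS generalizes:
# minimize w' Sigma w  subject to  mu' w = r_target and 1' w = 1,
# solved via its KKT linear system.  Toy numbers; short-selling is allowed here.
import numpy as np

Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.08, 0.03],
                  [0.01, 0.03, 0.12]])          # toy covariance matrix
mu = np.array([0.06, 0.09, 0.12])               # expected returns
r_target = 0.10

A = np.vstack([mu, np.ones(3)])                 # equality constraints A w = b
b = np.array([r_target, 1.0])
KKT = np.block([[2 * Sigma, A.T],
                [A, np.zeros((2, 2))]])
sol = np.linalg.solve(KKT, np.concatenate([np.zeros(3), b]))
w = sol[:3]                                     # optimal weights (may short-sell)
print(w, w @ mu, w @ Sigma @ w)
```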