ISSN 0236-235X (P)
ISSN 2311-2735 (E)

Publication activity

(Information on the results of 2021)
2-year RSCI impact factor: 0.441
2-year RSCI impact factor without self-citation: 0.408
2-year RSCI impact factor including citations from all sources: 0.704
5-year RSCI impact factor: 0.417
5-year RSCI impact factor without self-citation: 0.382
Total number of citations of the journal in the RSCI: 9837
Five-year Herfindahl index of citing journals: 149
Herfindahl index by authors' organizations: 384
10-year Hirsch index: 71
Place in the overall SCIENCE INDEX ranking: 151
Place in the SCIENCE INDEX ranking for the topic "Automation. Computer technology": 6

More information on the publication activity of our journal for 2008–2021 is available on the RSCI website.


Next issue: № 4. Publication date: 25 December 2022.

Articles of journal issue № 1, 2022.

Order result by:
Publication date | Title | Authors

1. Intelligent analysis and processing of large heterogeneous data for parrying threats in complex distributed systems [№ 1, 2022]
Authors: Brekotkina E.S., Pavlov A.S., Pavlov S.V., Khristodulo O.I.
Visitors: 1332
The paper proposes a method of intelligent analysis and processing of large heterogeneous data for predicting threats in complex distributed systems. The method is based on the results of automatic monitoring of changes in the water level in water bodies and the air temperature at the measurement point. Such monitoring makes it possible to increase the efficiency of planning and implementing measures to fend off such and similar threats. The method builds on general approaches and mathematical models previously used by the authors to develop adaptive algorithms for controlling gas turbine engines. It is particularly relevant in the context of the increasingly widespread introduction of software and hardware systems for monitoring the state of complex distributed systems and the exponential growth in the amount of data used to support decision making. The choice of the future value of the water level at the measurement point is based on the results of processing the data accumulated over all previous flood periods on the correspondence between the water level and its daily changes and the values of the air temperature and its changes over the same day. The analyzed data are the values of air temperature and water level measured at equidistant points in time, the computed values of changes in the water level and air temperature, as well as the forecast values (according to the official data of the hydrometeorological service) of changes in air temperature. Based on the calculation of the retrospective frequency of changes in this temperature and the water level at the corresponding point, it is proposed to choose as the prediction the value that corresponds to the maximum frequency of occurrence of such a combination of measured parameters. The paper presents the results of an experimental assessment of the accuracy of forecasting the water level in the water bodies of the Republic of Bashkortostan during the flood period of 2021. They confirm the applicability of the proposed forecasting method to support decision making to fend off threats in complex distributed systems from a sharp rise in water, even with the current insufficiently automated observation system. With a wider adoption of highly automated software and hardware complexes for monitoring the flood situation, the amount of data analyzed and processed by software increases significantly, which, on the one hand, will complicate the application of traditional methods of data use and, on the other hand, will increase the efficiency and relevance of the method proposed in this paper.
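The frequency-based choice described in the abstract can be sketched in a few lines. This is only an illustration of the idea, not the authors' implementation; the pairing of a temperature change with a water-level change and all numbers are invented for the example:

```python
from collections import Counter

def build_frequency_table(history):
    """history: (temp_change, level_change) pairs from past flood periods."""
    return Counter(history)

def predict_level_change(freq, forecast_temp_change):
    # Among historical records with this forecast temperature change,
    # pick the level change with the maximum frequency of occurrence.
    candidates = {lc: n for (tc, lc), n in freq.items() if tc == forecast_temp_change}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

history = [(-2, 5), (-2, 5), (-2, 8), (0, 3), (1, -1), (1, -1)]
freq = build_frequency_table(history)
print(predict_level_change(freq, -2))  # → 5 (the most frequent change for a -2 °C forecast)
```

Real data would use discretized measurements at equidistant time points, but the selection rule stays the same: maximize the retrospective frequency of the observed combination.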

2. Architecture of a software platform for developing and testing neural network models for creating specialized dictionaries [№ 1, 2022]
Authors: Purtov D.N., Sidorkina I.G.
Visitors: 1278
The authors propose the implementation of a software platform for creating and testing neural network models used to build specialized dictionaries for automated systems. The software platform speeds up the search for the optimal method of creating a neural network model. The platform design is based on a review of existing tools and methods for creating analysis models and of software virtualization technologies. A research result is the proposed architecture of a software platform for creating specialized dictionaries that ensures the simultaneous creation of different neural network models in virtual containers. Container virtualization of the software elements that create and test neural network models provides all mathematical calculations for processing text information, as well as decentralized, parallel, and isolated training and testing of a neural network model. The data exchange between virtual containers, as well as the storage of all results of a container's operation, occurs through a special data bus: a disk space that all containers have access to. The developed platform can speed up the search for an algorithm for creating specialized dictionaries by testing various hypotheses based on various methods of constructing models. The acceleration comes from parallelism and from reusing the mathematical results of the common stages of algorithms whose calculations were already carried out by a similar algorithm. This allows scaling and splitting the learning process not only through the parallel creation of various models, but also at the level of individual model creation stages. The proposed platform was successfully used to find a locally optimal method for creating a model on highly specialized texts from a limited field.
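The reuse idea can be given a rough flavor in plain Python: a shared store stands in for the disk-based data bus, a common stage is computed once and read by parallel workers. All names and the toy "training" below are illustrative assumptions, not the platform's code:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical "data bus": a shared store all workers can read and write,
# standing in for the shared disk space in the paper's architecture.
data_bus = {}

def shared_stage(corpus):
    # A common preprocessing stage whose result is computed once and reused.
    key = ("tokens", id(corpus))
    if key not in data_bus:
        data_bus[key] = [w.lower() for text in corpus for w in text.split()]
    return data_bus[key]

def train_model(name, corpus):
    tokens = shared_stage(corpus)   # reuse of the common stage
    vocab = sorted(set(tokens))     # stand-in for real model training
    data_bus[("model", name)] = vocab
    return name, len(vocab)

corpus = ["Neural networks build dictionaries", "Dictionaries aid automated systems"]
with ThreadPoolExecutor(max_workers=2) as ex:
    results = dict(ex.map(lambda n: train_model(n, corpus), ["cbow", "skipgram"]))
```

In the platform itself the workers are isolated containers and the bus is real disk space, but the pattern of parallel model creation over shared stage results is the same.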

3. Systems and approaches for processing information represented by large dynamic graphs [№ 1, 2022]
Author: Gulyaevsky S.E.
Visitors: 1282
The paper presents an overview of the key features and advantages of the main existing approaches and systems for processing large graphs on a personal computer. The analysis covers single-PC graph processing systems such as GraphChi, TurboGraph, and GraphChi-DB, and distributed systems such as Apache GraphX. Special attention is paid to problems that require significant changes in the graph structure during the computation process and to the details of implementing such algorithms in graph processing systems. The conducted experiments used a well-known algorithm for network inference based on the observed spread of infections among the population, or the spread of news and memes in social networks. The algorithm relies on stochastic gradient descent to obtain estimates of the time-varying structure and temporal dynamics of the proposed network. The algorithm was implemented for the GraphChi and Apache Spark computation models. The authors measured the performance on various real and synthetic datasets and described several limitations of these computation models discovered during the experiments. Computations were performed on a single computer for GraphChi and on clusters of various sizes for the Apache Spark based implementation. According to the results of the review and the conducted experiments, the existing systems are divided into three classes: fast systems with a static graph partition and expensive repartitioning under significant structure changes; on average slower systems that are able to handle large numbers of changes efficiently; and even slower but highly scalable systems that compensate for low single-node performance with the ability to scale computation to a large number of nodes. The conclusion drawn from the review and experiments is that the problem of efficient storage and processing of dynamic graphs is still not solved and requires additional research.

4. The adaptive image classification method using reinforcement learning [№ 1, 2022]
Author: Elizarov A.A.
Visitors: 1329
The paper proposes a method for image classification that uses, in addition to a basic neural network for image classification, an additional neural network able to adaptively concentrate on the classified image object. The task of the additional network is the contextual multi-armed bandit problem, which reduces to predicting an area of the original image whose exclusion from the classification process will increase the confidence of the basic neural network that the object in the image belongs to the correct class. The additional network is trained using reinforcement learning techniques and strategies for the trade-off between exploration and exploitation when choosing actions to solve the contextual multi-armed bandit problem. Various experiments were carried out on a subset of the ImageNet-1K dataset to choose a neural network architecture, a reinforcement learning algorithm, and an exploration strategy. We considered reinforcement learning algorithms such as DQN, REINFORCE, and A2C, and exploration strategies such as ε-greedy, softmax, decay-softmax, and the UCB1 method. Much attention was paid to the description of the experiments performed and the substantiation of the obtained results. The paper proposes application variants of the developed method, which demonstrate an increase in the accuracy of image classification in comparison with the basic ResNet model. It additionally considers the computational complexity of the developed method.
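For illustration, here is a generic ε-greedy action selection with an incremental value update: the textbook form of one exploration strategy the paper compares, not the authors' code (the contextual part and the neural networks are omitted):

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """Pick a random arm with probability epsilon (exploration),
    otherwise the arm with the highest value estimate (exploitation)."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def update(q_values, counts, arm, reward):
    # Incremental running-average update of the chosen arm's value estimate.
    counts[arm] += 1
    q_values[arm] += (reward - q_values[arm]) / counts[arm]

q, n = [0.0, 0.0, 0.0], [0, 0, 0]
update(q, n, 1, 1.0)
update(q, n, 1, 0.0)
print(q[1])  # → 0.5, the running average reward of arm 1
```

In the paper's setting, the "arm" is a candidate image area and the reward reflects the change in the basic classifier's confidence.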

5. A formal model of multiagent systems for federated learning [№ 1, 2022]
Authors: Yuleisy G.P., Kholod I.I.
Visitors: 1455
Recently, the concept of federated learning has been actively developing, driven by the tightening of legislation on working with personal data. Federated learning involves training models directly on the nodes where the data is stored. As a result, there is no need to transfer the data anywhere, and it remains with its owners. To generalize the trained models, they are sent to a server that performs the aggregation. The concept of federated learning is very close to a multi-agent system, since agents allow training machine learning models on local devices while maintaining confidential information. The ability of agents to interact with each other makes it possible to generalize (aggregate) such models and reuse them. Taking into account the tasks solved by federated learning methods, there are several learning strategies. Learning can be carried out sequentially, when the model is trained in turn at each node; centrally, when models are trained in parallel at each node and aggregated on a central server; or decentralized, when training and aggregation are performed on each of the nodes. The interaction and coordination of agents should take these learning strategies into account. This article presents a formal model of multi-agent systems for federated learning. It highlights the main types of agents required to complete the full cycle of federated learning: an agent that accepts a task from a user; an agent that collects information about the environment; an agent performing training planning; an agent performing training on a data node; an agent providing information and access to data; and an agent performing model aggregation. For each of them, the paper defines the main actions and the types of messages exchanged by such agents. It also analyzes and describes the configurations of agent placement for each of the federated learning strategies.
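The central strategy can be sketched as one FedAvg-style round: nodes train locally in parallel, and the server takes a weighted average of their models. This is a simplified illustration under invented data; `local_train` is a stand-in for real on-node training, not the paper's formal model:

```python
def local_train(weights, data):
    # Stand-in for one round of local training:
    # nudge the weights toward the local data mean.
    mean = sum(data) / len(data)
    return [w + 0.1 * (mean - w) for w in weights]

def aggregate(models, sizes):
    # FedAvg-style aggregation: average node models weighted by data size.
    total = sum(sizes)
    dim = len(models[0])
    return [sum(m[i] * s for m, s in zip(models, sizes)) / total for i in range(dim)]

global_model = [0.0, 0.0]
node_data = {"A": [1.0, 3.0], "B": [5.0]}          # raw data never leaves a node
local = [local_train(global_model, d) for d in node_data.values()]
sizes = [len(d) for d in node_data.values()]
global_model = aggregate(local, sizes)              # only models reach the server
```

The sequential and decentralized strategies differ only in where the `aggregate` step runs and whether nodes train in turn or in parallel.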

6. A software package prototype for analyzing user accounts in social networks: Django web framework [№ 1, 2022]
Authors: Oliseenko V.D., Abramov M.V., Tulupyev A.L., Ivanov K.A.
Visitors: 1290
The paper considers the issues of implementing a prototype of a research and practical complex to automate the analysis of user accounts in social networks. The prototype is used as a tool to indirectly assess the manifestation of users' psychological features and their vulnerabilities to social engineering attacks, as well as to develop recommendations for protection against these attacks. The prototype is developed in the Python 3.8 programming language using the Django 3.1 web framework, PostgreSQL 13.2, and Bootstrap 4.6. The paper aims to increase the efficiency of extracting information from data posted by users in social networks, which allows indirect assessment of psychological, behavioral, and other characteristics of users. The goal is achieved by automating data extraction and developing tools for its analysis. The subject of the study is the methods of automated extraction, pre-processing, unification, and presentation of data from users' accounts in social networks to protect them against social engineering attacks. A prototype application based on the Django web framework solves the problem of automated extraction, preprocessing, unification, and presentation of data from user accounts in social networks. The solution of this problem is one of the essential steps in building a system for analyzing users' security against social engineering attacks. The theoretical significance of the work lies in combining and validating, through automation, previously developed methods and approaches for recovering missing values of account attributes and for matching accounts in different online social networks as belonging to the same user. The practical significance comes from the development of an application tool located on the subdomain sea.dscs.pro, which allows performing a primary analysis of users' accounts in social networks.

7. Software for solving the precedence constrained generalized traveling salesman problem [№ 1, 2022]
Authors: Petunin A.A., Ukolov S.S., Khachay M.Yu.
Visitors: 1215
The paper considers the precedence constrained generalized traveling salesman problem (PCGTSP). As in the classical traveling salesman problem (TSP), the authors search for a minimum-cost closed cycle; here, however, the set of vertices is divided into nonempty pairwise disjoint subsets called clusters, and each feasible route must visit each cluster in a single vertex. In addition, the set of valid routes is constrained by an additional restriction on the order of visiting clusters, that is, some clusters must be visited earlier than others. In contrast to the TSP and the generalized traveling salesman problem (GTSP), this problem is poorly studied both theoretically and from the point of view of algorithm design and implementation. The paper proposes the first specialized branch-and-bound algorithms, which use the solutions obtained by the recently developed PCGLNS heuristic as an initial guess. The original PCGTSP problem undergoes several relaxations, which yield several lower bounds for the original problem; the largest of them is used to cut off branches of the search tree and thereby reduce the enumeration. The algorithms are implemented as open-source software in the Python 3 programming language using the specialized NetworkX library. The performance of the proposed algorithms is evaluated on test examples from the public PCGTSPLIB library in comparison with the state-of-the-art Gurobi solver using the MILP model recently proposed by the authors, and it seems to be quite competitive even in the current implementation. The developed algorithms can be used in a wide class of practical problems, for example, for optimal tool routing for CNC sheet cutting machines, as well as for assessing the quality of solutions obtained by other methods.
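The overall scheme (a heuristic incumbent as the initial upper bound, lower bounds pruning branches) can be shown on the plain TSP. This is a generic textbook skeleton, not the authors' PCGTSP algorithm; the greedy start merely plays the role PCGLNS plays in the paper, and the lower bound here is just the partial route cost:

```python
def branch_and_bound_tsp(dist, start=0):
    n = len(dist)
    # Initial incumbent from a nearest-neighbor heuristic
    # (stands in for the PCGLNS initial guess of the paper).
    tour, seen = [start], {start}
    while len(tour) < n:
        nxt = min((j for j in range(n) if j not in seen),
                  key=lambda j: dist[tour[-1]][j])
        tour.append(nxt)
        seen.add(nxt)
    best = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    def recurse(path, cost, remaining):
        nonlocal best
        if not remaining:
            best = min(best, cost + dist[path[-1]][start])
            return
        if cost >= best:   # lower bound (partial cost) cuts off the branch
            return
        for j in remaining:
            recurse(path + [j], cost + dist[path[-1]][j], remaining - {j})

    recurse([start], 0, set(range(n)) - {start})
    return best

dist = [[0, 2, 9, 10], [1, 0, 6, 4], [15, 7, 0, 8], [6, 3, 12, 0]]
print(branch_and_bound_tsp(dist))  # → 21
```

The PCGTSP algorithms additionally branch over clusters, respect the precedence order, and use the largest of several relaxation-based lower bounds instead of the bare partial cost.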

8. An algorithm of idiom search in program source codes using subtree counting [№ 1, 2022]
Author: Orlov D.A.
Visitors: 1202
The paper is dedicated to the design of a programming idiom extraction algorithm. A programming idiom is a fragment of source code that often occurs in different programs and is used for solving one typical programming task. In this research, the programming idiom is considered as a part of a program's abstract syntax tree (AST) that provides the maximum reduction of the amount of information in the source code when all occurrences of the programming idiom are replaced with a certain syntax construction (e.g., a function call). The developed subtree value metric estimates the reduction of the amount of information after such a replacement. Therefore, idiom extraction reduces to the search for the maximum of the subtree value function on the set of AST subtrees. To reduce the number of inspected subtrees, the authors use the steepest descent method to search for the maximum of the subtree value function. At each step, the subtree is extended with the one node that provides the maximum increase of the subtree value metric. Subtrees are stored in a data structure that is a generalization of a trie. The paper proposes an accelerated algorithm of idiom extraction; the speedup is achieved by reusing the results of the idiom efficiency maximum search. The paper also describes the implementation of the developed algorithms. The algorithms are implemented in the Python programming language, and the implementation extracts programming idioms from source code written in Python. This language was chosen due to the large corpus of texts written in it and its convenient tools for building ASTs. The authors carried out an idiom extraction experiment using the developed implementation. The idioms were extracted from corpora of open-source program source code. The extracted programming idioms are source code fragments with their own meaning. It is also shown that applying the developed algorithms to the source code of a single software project can reveal refactoring possibilities for the investigated program.
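The counting side of the idea can be illustrated with Python's own `ast` module: signatures of depth-bounded subtrees are tallied, and repeated signatures are idiom candidates. This is only a frequency sketch; it implements neither the paper's subtree value metric nor the steepest-descent search:

```python
import ast
from collections import Counter

def subtree_signatures(tree, max_depth=3):
    """Yield a string signature for every AST subtree, truncated at max_depth."""
    def sig(node, depth):
        if depth == 0:
            return "_"
        kids = ",".join(sig(c, depth - 1) for c in ast.iter_child_nodes(node))
        return f"{type(node).__name__}({kids})"
    for node in ast.walk(tree):
        yield sig(node, max_depth)

code = "x = a + b\ny = a + b\nz = a - b\n"
counts = Counter(subtree_signatures(ast.parse(code)))
# The 'a + b' shape occurs twice and is a candidate idiom:
print(counts["BinOp(Name(Load()),Add(),Name(Load()))"])  # → 2
```

The real algorithm grows one subtree node by node toward the maximum of the value metric instead of enumerating all bounded-depth shapes, and stores subtrees in a trie-like structure rather than a flat counter.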

9. An analysis of the efficiency of the process of servicing the flow of requests for creating IT services using a simulation model [№ 1, 2022]
Authors: Abdalov A.V., Grishakov V.G., Loginov I.V.
Visitors: 1193
The paper discusses the issues of analyzing the effectiveness of the process of servicing the flow of requests for creating IT services using simulation modeling. It shows that the well-known simulation tools do not allow fully simulating the request servicing process in information and communication infrastructure administration units, which are characterized by a controlled resource flow. The study involved the development of software to simulate the process of servicing the flow of requests for creating IT services. Its main distinctions are the ability to manage the resource source during request servicing and the possibility of running simultaneous experiments on the same source data with several service disciplines. The simulation model was developed in the Microsoft Visual Studio environment and consists of five macroblocks: a request generator, a resource generator, a service device, an algorithm block, and an experiment block. The algorithm block allows connecting external models in the form of library blocks that implement request flow processing through a unified interface, including the ability to generate commands to manage the resource source. The experiment block allows performing streaming experiments based on the specified settings and saved experiment files. The main distinction of the developed simulation model is the creation of multiple independent request service flows for various algorithms. The possibility of conducting comparative analysis experiments is illustrated by a series of experiments with stationary and non-stationary request flows and stationary, non-stationary, and controlled resource flows based on a family of alternative control algorithms. Within the framework of the study, the results of using the simulation model of the process of servicing requests for creating infocommunication services made it possible to evaluate the effectiveness of the developed promising control algorithms.
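A minimal flavor of such replayed experiments: one toy FIFO service discipline run twice against the same arrival data with different numbers of service devices. All names and numbers are invented, and the paper's controlled resource flow is not modeled here:

```python
import heapq

def simulate_fifo(arrivals, service_times, servers=1):
    """Tiny FIFO queue simulation: returns the completion time of each request."""
    free_at = [0.0] * servers        # when each service device becomes free
    heapq.heapify(free_at)
    done = []
    for arr, svc in zip(arrivals, service_times):
        start = max(arr, heapq.heappop(free_at))  # wait for a free device
        finish = start + svc
        heapq.heappush(free_at, finish)
        done.append(finish)
    return done

# The same source data replayed against two configurations, mirroring
# the model's simultaneous experiments with several service disciplines.
one = simulate_fifo([0, 1, 2], [3, 3, 3], servers=1)   # [3, 6, 9]
two = simulate_fifo([0, 1, 2], [3, 3, 3], servers=2)   # [3, 4, 6]
```

The paper's model additionally lets the connected algorithm block issue commands that change the resource source mid-run, which a fixed `servers` count cannot express.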

10. The method for creating parallel software tools for modeling military complexes [№ 1, 2022]
Author: Aksenov M.A.
Visitors: 1142
Nowadays, modeling systems are actively created and used all over the world, including in the Armed Forces of the Russian Federation. The basis of these systems are modeling complexes: sets of technical and software tools providing calculations and imitation modeling. The analysis of modern software tools for modeling military complexes has shown that the duration of the calculations performed during imitation largely influences the efficiency of their application when used directly. Specific technological tools for parallelizing labor-intensive cyclic sections of modeling complexes allow minimizing the time spent on modeling under limited terms of using the software tools. However, they are currently not implemented in the general software architecture of the modeling complexes accepted for supply in the Armed Forces of the Russian Federation. The paper considers the choice of parallelization algorithms implemented in parallel software development tools for multi-core (multiprocessor) shared-memory computing systems. The purpose of the paper is to assess the impact of the execution time of parallelized cyclic sections of a target program, under multithreaded parallel execution on multi-core (multiprocessor) PCs, on the results of combat imitation. The scientific novelty is in the development of a new method for creating parallel software tools for modeling military complexes. The paper provides numeric examples of calculations in Mathcad. To avoid errors in choosing the preferred parallelization algorithms, the entire analysis is based on elements of mathematical statistics, estimating the cycle execution time of a given algorithm by the upper limit of a confidence interval with a given confidence probability. The author proposes a variant of constructing software tools using the example of introducing the technological developments into the software architecture of a modeling complex.
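The confidence-interval criterion for comparing parallelization algorithms can be sketched with the standard library alone. The timings and the t critical value (2.262 for 95 % confidence at n = 10) are illustrative assumptions, not the paper's data:

```python
from statistics import mean, stdev
from math import sqrt

def upper_confidence_limit(samples, t_crit):
    """Upper limit of the confidence interval for the mean execution time:
    mean + t * s / sqrt(n). t_crit must match n - 1 degrees of freedom."""
    n = len(samples)
    return mean(samples) + t_crit * stdev(samples) / sqrt(n)

# Measured timings (ms) of a parallelized cyclic section; hypothetical values.
times = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 12.3, 11.7, 12.0, 12.1]
limit = upper_confidence_limit(times, 2.262)
# An algorithm is preferred when its upper limit is below the alternatives'.
```

Comparing upper limits rather than raw means guards against picking an algorithm whose apparent speed is an artifact of measurement noise.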
