Publication activity
(Information on the results of 2021)
2-year RSCI impact factor: 0.441
2-year RSCI impact factor excluding self-citation: 0.408
2-year RSCI impact factor including citations from all sources: 0.704
5-year RSCI impact factor: 0.417
5-year RSCI impact factor excluding self-citation: 0.382
Total number of citations of the journal in the RSCI: 9,837
Five-year Herfindahl index of citing journals: 149
Herfindahl index of authors' organizations: 384
10-year Hirsch index: 71
Place in the overall SCIENCE INDEX ranking: 196
Place in the SCIENCE INDEX ranking for the topic "Automation. Computer technology": 4
Place in the SCIENCE INDEX ranking for the topic "Cybernetics": 2
More information on the publication activity of our journal for 2008-2021 is available on the RSCI website.
Journal articles №2 2017
1. Statement of the problem of formation of directions of development of organizational automated systems and its solution algorithm [№2, 2017]
Authors: V.L. Lyaskovsky (l_vik_l@mail.ru) - Research Institute of Information Technologies (Professor), Ph.D; I.B. Bresler (niiit@niiit.tver.ru) - Research Institute of Information Technologies (Associate Professor, Director General), Ph.D; M.A. Alasheev (niiit@niiit.tver.ru) - Research Institute of Information Technologies (specialist), Ph.D;
Abstract: The article states the problem of forming directions for the development of organizational automated information processing and control systems and presents an algorithm for its solution. Solving this problem is necessary because many automated systems are created and operated for decades, while the requirements for these systems change over time; solutions must therefore be formed periodically to bring an automated system into compliance with new requirements. The main performance indicator of the generated solutions is a complex index of the degree of automation of the functional processes implemented in the system. The constraints are the mandatory requirements for automation of the most important functional processes and the timeliness of their implementation, as well as the maximum permissible financial and time resources for developing the automated system. Analysis of the algorithmic complexity shows that exhaustive enumeration of all possible options is infeasible, because the number of solutions depends exponentially on the dimension of the source data. Therefore, the authors have developed a heuristic algorithm that reduces the number of options under consideration and obtains a rational solution with relatively small computational complexity. The proposed algorithm makes it possible to obtain decisions on the development and production of complex automation equipment for an automated control system, as well as on extending the life of existing automation equipment. The algorithm is expected to be implemented in an automated decision support system running as software on a consumer-grade personal computer.
Keywords: automated system issue, automated control system design, life cycle of automated control systems, automation facilities set
Visitors: 7639
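As a purely illustrative sketch of the kind of heuristic described in the abstract above (the authors' algorithm is not published here; all measure names, gains, costs and the greedy criterion are invented for the example), selecting development measures under a budget while honoring mandatory automation requirements can be approximated by a gain-per-cost greedy pass:

```python
def select_measures(measures, budget, mandatory):
    """Greedy sketch: pick development measures maximizing automation-degree
    gain per unit cost under a total budget.
    measures: list of (name, automation_gain, cost); mandatory names are
    considered first (a real algorithm would reject plans that cannot fit
    all mandatory measures)."""
    chosen, spent, total_gain = [], 0.0, 0.0
    must = [m for m in measures if m[0] in mandatory]
    rest = [m for m in measures if m[0] not in mandatory]
    # order optional measures by gain-to-cost ratio, best first
    rest.sort(key=lambda m: m[1] / m[2], reverse=True)
    for name, gain, cost in must + rest:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
            total_gain += gain
    return chosen, spent, total_gain
```

The greedy pass is only a stand-in for the heuristic enumeration-reduction idea; it shows why a polynomial-time approximation is preferred over exhaustive search of all 2^n subsets.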
2. Formation of Vietnam energy development options by combinatorial modeling methods [№2, 2017]
Authors: Edelev А.V. (flower@isem.sei.irk.ru) - Melentiev Energy Systems Institute SB RAS, Ph.D; V.I. Zorkaltsev (zork@isem.irk.ru) - Melentiev Energy Systems Institute SB RAS (Chief Researcher), Ph.D; Đoàn Văn Bình (doanbinh@ies.vast.vn) - Institute of Energy Science of Vietnamese Academy of Science and Technology (Director), Ph.D; Nguyễn Hoài Nam (nhnam@ies.vast.vn) - Institute of Energy Science of Vietnamese Academy of Science and Technology (Head of Laboratory);
Abstract: The article describes a combinatorial modelling approach to research on energy sector development. The idea of the approach is to model system development as a directed graph whose nodes correspond to possible states of the system at certain moments of time and whose arcs characterize possible transitions from one state to another. Combinatorial modelling is a visual representation of dynamic discrete alternatives. It makes it possible to simulate the long-term process of system development under various possible external and internal conditions and to determine an optimal development strategy for the system under study. The formation and analysis procedures for energy development options are implemented in the Corrective software package. A distributed computing environment is necessary to compute an energy sector development graph. In 2015, the Institute of Energy Science of the Vietnamese Academy of Science and Technology performed a study of Vietnam's sustainable energy development from 2015 to 2030. Data from this study illustrate the application of combinatorial modelling methods to the formation and analysis of Vietnam's energy development options, taking into account energy security requirements. The resulting Vietnam energy sector development graph consists of 531,442 nodes. It was computed on the cluster located at the Institute for System Dynamics and Control Theory of the Siberian Branch of the Russian Academy of Sciences (Irkutsk). The optimal path of Vietnam's sustainable energy development found in this way provides minimum costs of energy sector development and operation.
Keywords: combinatorial modelling, energy development, fuel and energy sector, decision support, distributed computing environment
Visitors: 7086
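The state-graph formulation in the abstract above lends itself to standard shortest-path search: an optimal development strategy corresponds to a minimum-cost path through the graph of states. The sketch below is a generic Dijkstra search over an invented toy state graph, not the Corrective package itself:

```python
import heapq

def min_cost_path(graph, start, goal):
    """Minimum-cost path over a state-transition graph.
    graph: dict mapping state -> list of (next_state, transition_cost)."""
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, state = heapq.heappop(heap)
        if state == goal:
            # reconstruct the optimal development trajectory
            path = [state]
            while state in prev:
                state = prev[state]
                path.append(state)
            return d, path[::-1]
        if d > dist.get(state, float("inf")):
            continue  # stale heap entry
        for nxt, cost in graph.get(state, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = state
                heapq.heappush(heap, (nd, nxt))
    return float("inf"), []

# Toy example: two technology options at an intermediate time step.
graph = {
    "2015": [("2020_coal", 10), ("2020_renew", 14)],
    "2020_coal": [("2030", 12)],
    "2020_renew": [("2030", 6)],
}
cost, path = min_cost_path(graph, "2015", "2030")  # cost == 20
```

For a graph of half a million nodes, as in the study, the same search principle applies; the distributed environment mentioned in the abstract is needed to build and evaluate the nodes, not to change the path-search logic.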
3. The empirical risk minimization principle based on average loss aggregating functions for regression problems [№2, 2017]
Authors: Z.M. Shibzukhov (szport@gmail.ru) - Institute of Applied Mathematics and Automation (Leading Researcher), Ph.D; D.P. Dimitrichenko (dimdp@rambler.ru) - Institute of Applied Mathematics and Automation (Senior Researcher), Ph.D; M.A. Kazakov (f_wolfgang@mail.ru) - Institute of Applied Mathematics and Automation (Junior Researcher);
Abstract: The paper proposes an extended principle of empirical risk minimization for solving the regression problem. It is based on using aggregating functions instead of the arithmetic mean to calculate risk. This is justified when the loss distribution contains significant outliers or is skewed, which biases the average-loss risk estimate from the start. In such cases, a robust estimate of the average loss should be used when optimizing the parameters in the regression problem. Such an intermediate risk estimate can be constructed using avg functions, which are solutions to a penalty-function minimization problem for deviation from the mean. This approach allows, on the one hand, defining a much broader class of averaging functions and, on the other hand, defining differentiable averaging functions that approximate non-differentiable ones, such as the median or a quantile. As a result, it is possible to construct gradient methods for solving the regression problem that, in a certain sense, approximate robust techniques such as Least Median and Least Quantile regression. The paper proposes a new gradient scheme for solving the intermediate risk minimization problem. It is analogous to the scheme used in the SAG algorithm when the risk is calculated as the arithmetic mean. An illustrative example presents the construction of robust procedures for parameter estimation in linear regression based on an avg function that approximates the median.
Keywords: aggregation function, aggregation operation, empirical risk, regression, penalty function, gradient descent procedure
Visitors: 6133
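As a rough illustration of the idea above (not the authors' avg-function construction or their SAG-style scheme): replacing the arithmetic-mean squared loss with a differentiable, median-like aggregate makes plain gradient descent robust to outliers. The smoothing below and the toy data are invented for the example:

```python
import math

def smooth_abs(r, eps=1e-2):
    """Differentiable approximation of |r|; averaging it gives a
    median-like risk instead of the outlier-sensitive squared loss."""
    return math.sqrt(r * r + eps)

def risk(a, b, xs, ys, eps=1e-2):
    """Smoothed-absolute empirical risk of the line y = a*x + b."""
    return sum(smooth_abs(a * x + b - y, eps) for x, y in zip(xs, ys)) / len(xs)

def fit_line(xs, ys, lr=0.01, steps=5000, eps=1e-2):
    """Gradient descent on the smoothed-absolute risk; unlike least
    squares, the fit is barely affected by a single large outlier."""
    a = b = 0.0
    n = len(xs)
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            r = a * x + b - y
            w = r / math.sqrt(r * r + eps)  # derivative of smooth_abs
            ga += w * x / n
            gb += w / n
        a -= lr * ga
        b -= lr * gb
    return a, b
```

On data lying on y = x with one wild outlier, this fit stays close to slope 1, whereas a least-squares fit would be dragged toward the outlier.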
4. A contracted representation of strong associative rules in data analysis [№2, 2017]
Authors: Bykova V.V. (bykvalen@mail.ru) - Siberian Federal University (Professor), Ph.D; Kataeva A.V. (kataeva_av@mail.ru) - Krasnoyarsk Regional Clinical Hospital (Engineer-Programmer);
Abstract: Modern methods and tools for searching for association rules in big data produce a significant number of rules, many of which are redundant. Redundant association rules are generally of no value and can even be misleading. To solve this problem, the paper proposes the MClose algorithm, a modification of the Close algorithm. The Close algorithm constructs the min-max basis for strict association rules (association rules with a confidence of 1). The min-max basis consists only of min-max association rules, i.e., rules with a minimal antecedent and a maximal consequent. Such rules are of interest to experts. However, the min-max basis may still contain redundant association rules. The MClose algorithm eliminates redundant association rules immediately while creating the min-max basis. The resulting basis is called the concise strong basis (CSB). Redundant association rules can always be recovered from the CSB without loss of their support and confidence, and without references to the data set. The MClose algorithm is based on the Galois connection and on derivability rules similar to the Armstrong axioms for functional dependencies. Experiments have shown that the running time of MClose is comparable with that of Close, while it reduces the number of association rules in the min-max basis roughly by half. We provide a description of a program that implements the MClose and Close algorithms.
Keywords: data analysis, galois connection, closed sets, association rules, non-redundancy, concise strong basis
Visitors: 8542
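The redundancy notion underlying the min-max basis can be shown with a naive check (a toy sketch, not the MClose algorithm): a rule A1 → C1 is redundant when some other rule with a smaller antecedent and a larger consequent has the same support and confidence, so it adds no information. Transactions and rules below are invented:

```python
def support(itemset, transactions):
    """Fraction of transactions containing the itemset."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def is_redundant(rule, others, transactions):
    """rule = (antecedent, consequent) as frozensets.
    Redundant if some rule (A2 -> C2) with A2 subset-of A1 and
    C1 subset-of C2 has the same support and confidence.
    (Exact float comparison is fine for this toy data; a real
    implementation would compare integer support counts.)"""
    a1, c1 = rule
    s1 = support(a1 | c1, transactions)
    cf1 = s1 / support(a1, transactions)
    for a2, c2 in others:
        if (a2, c2) == (a1, c1):
            continue
        s2 = support(a2 | c2, transactions)
        cf2 = s2 / support(a2, transactions)
        if a2 <= a1 and c1 <= c2 and s1 == s2 and cf1 == cf2:
            return True
    return False
```

MClose avoids this pairwise post-filtering by pruning redundant rules during basis construction, which is why its running time stays comparable to Close.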
5. Effective algorithm for constructing associative rules [№2, 2017]
Authors: Billig V.A. (Vladimir-Billig@yandex.ru) - Tver State University, Ph.D;
Abstract: Constructing associative rules is one of the most important algorithms for extracting knowledge from databases. All modern algorithms are in some way connected with the Apriori algorithm proposed in the works of R. Agrawal and his co-authors, published more than 20 years ago and now considered classical. The known effective implementations of the algorithm rely on database compression and representation of the data structure as a tree, which allows effective evaluation of support and other characteristics of associative rules. The proposed ConApriori algorithm does not use this idea. We regard database transactions as an enumeration given by a scale. This allows instant evaluation of the basic operation of the algorithm, determining whether one set is a subset of another: the computation reduces to several logical computer instructions. The enumeration also allows us to treat a transaction in the internal representation as a number, while preserving the meaning of the transaction elements in their external representation. Another idea used in the algorithm allows us to construct most of the confident rules on the basis of those previously built. The article provides evidence for the correctness of the algorithm and evaluates its complexity. We analyze its effectiveness compared with other known implementations. The possibility of parallelization is also considered.
Keywords: data mining, apriori, associative rules, support, confidence, lift, enumeration, scale, database, transaction, knowledge discovery, parallel computation, correctness, complexity, anti-monotony
Visitors: 9495
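The "enumeration given by a scale" described above can be illustrated by encoding each transaction as a bit mask, so the subset test reduces to a couple of machine instructions. This is a minimal sketch with invented item names, not the ConApriori implementation itself:

```python
def encode(itemset, item_index):
    """Encode an itemset as an integer with one bit per item."""
    code = 0
    for item in itemset:
        code |= 1 << item_index[item]
    return code

def is_subset(candidate, transaction):
    """Subset test on bit masks: candidate is contained in transaction
    iff ANDing them leaves candidate unchanged."""
    return candidate & transaction == candidate

def support_count(candidate, transactions):
    """Number of encoded transactions containing the candidate itemset."""
    return sum(1 for t in transactions if is_subset(candidate, t))

# Toy database: items get fixed bit positions, transactions become numbers.
items = {"bread": 0, "milk": 1, "beer": 2, "eggs": 3}
db = [encode(t, items) for t in [{"bread", "milk"},
                                 {"bread", "beer", "eggs"},
                                 {"bread", "milk", "beer"}]]
```

Because the internal representation is just an integer, support counting over millions of transactions costs one AND and one comparison per transaction, which is the speedup the abstract refers to.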
6. Parallel computing when implementing web-based pattern recognition tools based on use case methods [№2, 2017]
Authors: Fomin V.V. (v_v_fomin@mail.ru) - Herzen State Pedagogical University of Russia (Professor), Ph.D; I.V. Aleksandrov (chrono555@yandex.ru) - Herzen State Pedagogical University of Russia;
Abstract: The article considers a software solution aimed at improving image recognition quality and the effectiveness of machine learning tools using grid technologies. The paper formulates strategic directions for pattern recognition tools as a software system based on the principles of distributed systems, parallelism and adaptive configuration of computing resources. The article considers the structure of web tools for pattern recognition built around the concept of an algorithm library. It also describes algorithmic solutions for parallelizing learning and recognition algorithms on the basis of classical data mining algorithms that have proven themselves in practice. Such algorithms include precedent (case-based reasoning) methods and methods based on proximity metrics. They have great potential for parallelizing computational processes and for developing parallel algorithms for their implementation. The search for ways to improve computer productivity, especially when implementing web tools based on resource-intensive algorithms for machine recognition and prediction, led to the decision to create a grid system. The architecture and implementation of the grid system considered in the article assume parallelization and organization of distributed computations on a multi-machine basis using Internet technologies. This yields computing power similar to that of multiprocessor computer systems at a much lower cost. The article considers the problem of increasing the efficiency of computing resources with the possibility of reconfiguring the structure of Internet connections, including the procedure of configuring the computer network structure, connectable communication channels and dedicated servers depending on the original algorithms and data. The paper presents dependencies of operation execution time on the service discipline that adapts the system to user requests: tasks are ranked by resource intensity, and computing power corresponding to their rank is allocated to them.
Keywords: intelligent information systems, pattern recognition, machine learning, web systems, parallel computing
Visitors: 9426
7. A homogeneous distribution problem based on ant colony adaptive behavior models [№2, 2017]
Authors: B.K. Lebedev (lebedev.b.k@gmail.com) - Institute of Computer Technology and Information Security, Southern Federal University (Professor), Ph.D; O.B. Lebedev (lebedev.ob@mail.ru) - Institute of Computer Technology and Information Security, Southern Federal University (Associate Professor), Ph.D; E.M. Lebedeva (lebedeva.el.m@mail.ru) - Institute of Computer Technology and Information Security, Southern Federal University;
Abstract: The paper proposes a solution to the homogeneous distribution problem. It gives the problem statement, describes the main groups of algorithms for solving it (approximate and exact) and their advantages and disadvantages. The paper applies a paradigm of combinatorial optimization based on modeling the adaptive behavior of an ant colony. A solution of the homogeneous distribution problem is represented graphically as a bipartite graph, and new decision mechanisms are proposed on this basis. The metaheuristic of the ant colony algorithm combines two techniques. The first, basic technique is to search for the best solution using the adaptive behavior of an ant colony: an ant builds a specific solution using a built-in procedure based on a constructive algorithm. The main difference of the proposed ant algorithm from the existing canonical paradigm is that a bipartite graph is built on the solution search graph. For optimization problems whose solutions can be represented as bipartite graphs, this approach is fairly effective. The conducted research showed that the ant algorithm yields higher-quality solutions than the known algorithms: the results improved by 3-4 %.
Keywords: assignment problem, bipartite graph, optimization, swarm intelligence, ant colony, adaptive behavior, homogeneous distribution problem
Visitors: 8170
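A minimal pheromone-guided construction loop for an assignment-type problem might look as follows. This is a generic ant colony sketch with invented parameters, not the authors' algorithm with its bipartite search graph:

```python
import random

def aco_assignment(cost, ants=20, iters=100, rho=0.1, seed=1):
    """Minimize total cost of assigning n agents to n tasks.
    cost[i][j] is the cost of agent i doing task j."""
    rnd = random.Random(seed)
    n = len(cost)
    tau = [[1.0] * n for _ in range(n)]  # pheromone on (agent, task) pairs
    best_cost, best = float("inf"), None
    for _ in range(iters):
        for _ in range(ants):
            free = list(range(n))
            assign, total = [], 0.0
            for agent in range(n):
                # selection probability ~ pheromone x heuristic (1 / cost)
                w = [tau[agent][t] / (1.0 + cost[agent][t]) for t in free]
                task = rnd.choices(free, weights=w)[0]
                free.remove(task)
                assign.append(task)
                total += cost[agent][task]
            if total < best_cost:
                best_cost, best = total, assign
        # evaporate, then reinforce the best assignment found so far
        for i in range(n):
            for j in range(n):
                tau[i][j] *= 1 - rho
        for agent, task in enumerate(best):
            tau[agent][task] += 1.0 / best_cost
    return best_cost, best
```

The constructive step (one ant building one complete assignment) and the evaporation/reinforcement cycle are the two techniques the abstract refers to; the bipartite-graph representation in the paper replaces the plain (agent, task) pheromone matrix used here.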
8. Automated data processing system in UNIX-like systems [№2, 2017]
Authors: E.V. Palchevsky (teelxp@inbox.ru) - Financial University under the Government of the Russian Federation; A.R. Khalikov (khalikov.albert.r@gmail.com) - Ufa State Aviation Technical University (Associate Professor), Ph.D;
Abstract: The article considers the development of distributed and modular information processing in automated mode. The implementation of this project allows accepting input and output data on a physical server at up to 2.2 GB/s while distributing stream information (all incoming network traffic on the server) across physical and logical cores. The paper shows the dependence of the load on physical resources on the input information. It also justifies the feasibility of using the developed SDP (Speed Data Processing) hardware and software complex, and presents its structure and basic operation diagrams. The first stage of creating the complex is the development of an algorithm; the second stage is its technical implementation. The article gives a fragment of the source code responsible for notification messages sent to an e-mail address, such as CPU load and the main running processes. The main functionality is described with the following data: function name, function purpose, theoretical load, data transfer limit (in MB/s) and the result of execution. The third stage is testing the SDP complex; its average daily results over ten days are provided. The created hardware and software complex allows processing input and output information effectively in automatic mode in order to increase the capacity for receiving and sending data in the MySQL DBMS, including under DoS and DDoS attacks. One part of the complex is a web module responsible for control both from a personal computer and from a mobile phone. The monitoring part of the web module implements SMS notification about physical server usage. The developed hardware and software complex showed high stability when handling large volumes of data with a minimum load on the computer.
Keywords: information processing, load on computing resources, data structure, lowering of information flow load, information flows, SMS notification, information processing system, data processing centre
Visitors: 8337
9. Generalized periodic motions of dynamical and non-autonomous periodic systems [№2, 2017]
Authors: A.P. Afanasiev (apa@isa.ru) - Institute for Information Transmission Problems of the Russian Academy of Sciences (IITP RAS), Lomonosov Moscow State University (Professor), Ph.D; S.M. Dzyuba (sdzyuba@mail.ru) - Tver State Technical University, Ph.D; I.I. Emelyanova (emelyanova-123@yandex.ru) - Tver State Technical University;
Abstract: This review presents the history of the analysis of typical behavior of motions of dynamical and non-autonomous periodic systems. This investigation is necessary because a complete and detailed description of typical behavior makes it possible to solve the problem of constructing generalized periodic motions of dynamical and non-autonomous periodic systems. Numerical investigation of such systems is important because the vast majority of models of real technological, biological, economic and other processes are described by systems of this kind. In the autonomous case a generalized periodic motion is equivalent to a classical recurrent motion, which was introduced and investigated by G. Birkhoff. The concept of a recurrent motion is related to the concept of a minimal set; together these two concepts define the typical behavior of classical dynamical systems. The construction and investigation of recurrent motions and minimal sets took on special significance due to the requirements of chaotic dynamics and hyperbolic theory. However, until quite recently there were no general methods of constructing recurrent motions and minimal sets; work came down to constructing attractors of separate systems of differential equations with a polylinear right-hand side. The discovery of the concept of a generalized periodic motion resulted in a general method for constructing and investigating all minimal sets contained in the limit sets of dynamical systems. Furthermore, the concept of a generalized periodic motion made it possible to adapt all basic definitions of classical dynamical system theory to non-autonomous periodic systems and to describe typical behavior in such systems on common ground. It also allows adapting the method of recurrent motion construction directly to the construction of generalized periodic motions of non-autonomous periodic systems, which made the numerical construction of such motions on a common basis possible.
Keywords: dynamical and non-autonomous periodic systems, situation of typical behavior, generalized-periodic motions, construction of generalized-periodic motions
Visitors: 3820
10. Evaluating the effectiveness of methods for solving the problem of sustainability of distributed information system functioning [№2, 2017]
Author: Yesikov D.O. (mcgeen4@gmail.com) - Tula State University;
Abstract: To make informed decisions on organizing data storage and processing so as to ensure the sustainability of distributed information systems, the paper proposes a complex of mathematical models: for optimizing the distribution of functional task software elements over network nodes; for optimizing the distribution of information resources among data storage and processing centers; for optimizing the technical means of the data storage and processing system; and for optimizing the allocation of resources by information storage and processing centers. It is shown that these problems belong to the class of optimization problems with discrete Boolean variables. The paper proposes and experimentally verifies the branch-and-bound method and genetic algorithms for solving the formalized problems. To improve the efficiency of the branch-and-bound method, the paper proposes determining the branching order of the variables in advance by solving the dual problem once with an approximation method. The article experimentally verifies the effectiveness of the branch-and-bound method for problems of ensuring distributed information system sustainability, including the algorithm with a predetermined branching order. The author estimates the influence of the initial data on the overall performance of the branch-and-bound method and identifies the most effective branching strategies for the designed tasks. Variants of the major operators are given, as well as schemes for initializing the initial population of a genetic algorithm for the problems of ensuring distributed information system sustainability. To improve the quality of the solution obtained by the genetic algorithm, the author justifies the use of an adaptive scheme of individuals' reproduction and a computational island scheme, and experimentally checks the effectiveness of the proposed genetic and island genetic algorithms. The paper determines the parameters of the genetic algorithms that ensure maximum solution quality and demonstrates the ability to control the accuracy of the obtained solution by changing the algorithm parameters when time restrictions are introduced. A comparative assessment of the branch-and-bound method and the island genetic algorithm on the formalized problems is given, along with the areas of their effective application.
Keywords: distributed information system, operational sustainability, mathematical models, branch-and-bound method, dual problem, genetic algorithm, island genetic algorithm, adaptive reproduction scheme
Visitors: 9079
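As a generic illustration of the branch-and-bound method discussed above (not the author's models or branching strategies), the following sketch solves a 0-1 selection problem with Boolean variables, using a fractional relaxation as the upper bound for pruning:

```python
def branch_and_bound(values, weights, capacity):
    """Maximize total value of selected items under a capacity limit
    (0-1 variables). Depth-first branch and bound with an optimistic
    bound that fills the remaining capacity fractionally."""
    n = len(values)
    # branching order: best value/weight ratio first (a simple analogue
    # of predetermining the variable branching order)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    best = 0

    def bound(idx, value, room):
        # optimistic estimate: relax the 0-1 constraint for the tail
        for i in order[idx:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    def dfs(idx, value, room):
        nonlocal best
        if value > best:
            best = value
        # prune when even the relaxed bound cannot beat the incumbent
        if idx == len(order) or bound(idx, value, room) <= best:
            return
        i = order[idx]
        if weights[i] <= room:
            dfs(idx + 1, value + values[i], room - weights[i])  # take item
        dfs(idx + 1, value, room)                               # skip item

    dfs(0, 0, capacity)
    return best
```

The quality of the bound and the branching order determine how much of the tree is pruned, which is exactly why the paper invests in choosing the branching order via the dual problem.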