ISSN 0236-235X (P)
ISSN 2311-2735 (E)


Journal articles №2 2019

1. The quantum genetic algorithm in the problems of intelligent control modeling and supercomputing [№2 за 2019 год]
Authors: Ulyanov S.V. (ulyanovsv@mail.ru) - Dubna International University for Nature, Society and Man, Ph.D; N.V. Ryabov (ryabov_nv95@mail.ru) - Dubna State University – Institute of System Analysis and Control;
Abstract: This paper considers the use of the quantum genetic algorithm for automatic selection of the optimal type and kind of correlation in the quantum structure of fuzzy inference. When solving intelligent and cognitive control tasks based on quantum soft computing and the principles of quantum deep machine learning, it is important to choose the type and kind of quantum correlation. It is an additional physical and informational computing resource in the formation of the laws of time variation of the gains of traditional regulators located at the lower (executive) level of the intelligent control system structure. This approach is essential for realizing adaptive and self-organizing processes in knowledge bases and for guaranteed achievement of the control objectives in contingency control situations. Successful solution of the problem of choosing the type and kind of quantum correlations strengthens the search for solutions to problems that are algorithmically unsolvable at the classical control level. A genetic algorithm is a powerful computational intelligence tool for the random search of effective solutions to poorly formalized tasks. However, it has a major disadvantage when run on a classical computer: low speed and dependence on the expert's choice of the decision-making space. The paper describes types of quantum genetic algorithms based on a combination of quantum and classical calculations, as well as an algorithm consisting only of quantum calculations. In such an algorithm, a population can consist of only one chromosome in a state of superposition. Embedding the quantum genetic algorithm in the quantum structure of fuzzy inference provides a synergetic effect and allows realizing quantum fuzzy inference on a classical processor.
The new effect is based on the quantum genetic algorithm extracting information hidden in the classical laws of time variation of the gains of traditional regulators in a new, unexpected control situation. Such a synergetic effect is possible only with end-to-end intelligent information technology of quantum computing and is absent at the classical level of application of classical computing technology.
Keywords: quantum computing, quantum genetic algorithm, quantum oracle, simulator, quantum fuzzy inference
Visitors: 352
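
The quantum-inspired search described in the abstract can be illustrated with a minimal sketch: each gene is a qubit-like amplitude pair, measurement collapses a chromosome to a classical bit string, and amplitudes are rotated toward the best observed solution. This is a generic quantum-inspired genetic algorithm, not the authors' implementation; the fitness function, rotation angle and stopping condition are illustrative assumptions.

```python
import math
import random

# Quantum-inspired GA sketch: each gene is an amplitude pair (alpha, beta)
# with alpha^2 + beta^2 = 1; measuring yields bit 1 with probability beta^2.
N_GENES, POP, GENERATIONS = 8, 10, 30
ANGLE = 0.05 * math.pi  # rotation step toward the best solution (assumed value)

def fitness(bits):
    # Illustrative objective: maximize the number of 1s in the chromosome.
    return sum(bits)

def measure(chrom):
    # Collapse the superposed chromosome into a classical bit string.
    return [1 if random.random() < b * b else 0 for _, b in chrom]

def rotate(chrom, best):
    # Rotate each gene's amplitudes toward the best-known bit value.
    out = []
    for (a, b), bit in zip(chrom, best):
        theta = ANGLE if bit == 1 else -ANGLE
        if bit == 1 and b * b > 0.95:
            theta = 0.0  # close enough to |1>, stop rotating
        if bit == 0 and b * b < 0.05:
            theta = 0.0  # close enough to |0>, stop rotating
        out.append((a * math.cos(theta) - b * math.sin(theta),
                    a * math.sin(theta) + b * math.cos(theta)))
    return out

random.seed(1)
amp = 1 / math.sqrt(2)  # uniform superposition for every gene
population = [[(amp, amp)] * N_GENES for _ in range(POP)]
best = max((measure(c) for c in population), key=fitness)
for _ in range(GENERATIONS):
    for i, chrom in enumerate(population):
        observed = measure(chrom)
        if fitness(observed) > fitness(best):
            best = observed
        population[i] = rotate(chrom, best)
```

Because rotations are orthogonal transformations, the amplitudes stay normalized throughout; the search gradually concentrates measurement probability on the best observed string.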

2. Implementing metalinguistic abstraction to support OOP using C [№2 за 2019 год]
Authors: A.M. Dergachev (nmtkeshelashvili@corp.ifmo.ru) - The National Research University of Information Technologies, Mechanics and Optics (Associate Professor), Ph.D; I.O. Zhirkov (igorjirkov@gmail.com) - The National Research University of Information Technologies, Mechanics and Optics (tutor); I.P. Loginov (ivan.p.loginov@gmail.com) - The National Research University of Information Technologies, Mechanics and Optics; Yu.D. Korenkov (ged.yuko@gmail.com) - The National Research University of Information Technologies, Mechanics and Optics;
Abstract: The paper shows the use of higher-order macro definitions to support the object-oriented programming paradigm in C89 without extensions. Choosing the right programming style is important prior to writing code. A large class of problems is naturally described in the object-oriented programming style, and many mainstream programming languages such as C++, C# or Java support it. However, it is not always possible to use these languages, as the required development software, such as compilers for some platforms, might not be available. A typical example of this situation is an Application-Specific Instruction-set Processor (ASIP), which is supplied only with a C compiler. The smaller set of C language features and its low-level nature allow quick and cheap compiler implementation. At the same time, the C preprocessor can be used for sophisticated logic generation that goes far beyond simple parameterized substitutions. This paper presents support for the object-oriented programming style implemented in C89 without language extensions via extensive use of higher-order macro definitions. The example code shows the implementation of the encapsulation, inheritance and polymorphism principles. Encapsulation syntactically prohibits access to private fields and methods at compile time. Special attention is paid to the type safety of the generated code: the inheritance implementation does not weaken the already weak static typing of C. The results of this work can be used to construct object-oriented programs using only a C89 compiler when the use of object-oriented languages is impossible.
Keywords: c, preprocessor, object-oriented programming, metaprogramming, macro definition
Visitors: 333

3. Transformation of data from heterogeneous monitoring systems [№2 за 2019 год]
Author: Ya.A. Bekeneva (yana.barc@mail.ru) - St. Petersburg Electrotechnical University "LETI";
Abstract: The paper presents an approach to the preparation of data obtained from heterogeneous monitoring systems for their further analysis by data mining methods. The main problem of data analysis in monitoring various processes is the difference in the description of events across different types of sources, including the data presentation format. In addition, one event might be described using data from different monitoring systems. The paper presents a formal model of the analyzed process, describes the main problems of analyzing heterogeneous data, and highlights formal criteria for assigning records from different sources to a single event. In the proposed approach, a source of data is not only real-time records from various monitoring systems, but also account databases used for storing information. The main idea is that moving objects of different types can perform actions as a unit within the framework of the task being studied (for example, a vehicle and a driver). Account systems allow finding relationships between such moving objects and thereby increase the accuracy of combining records related to one event. The proposed approach has been tested on real data obtained from an enterprise. After applying all the described transformations, it became possible to significantly reduce the excess dimension of the aggregate data table, as well as to significantly reduce the number of missing values. Where data analysis was difficult due to differing formats, such data were brought to a single format and presented as a single table that is convenient for further research using data mining methods.
Keywords: data format, events, attributes, data transformations, monitoring systems, heterogeneous sources
Visitors: 291
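
The core transformation described above, bringing a record from one monitoring source to a unified event schema and enriching it via an account database, can be sketched as follows. The record fields, schema and sample data are hypothetical illustrations, not the paper's actual formats.

```python
from datetime import datetime

# Hypothetical raw record from one monitoring source (a gate log) and a row
# from an account database that links a vehicle to its driver.
gate_log = {"ts": "2019-04-02T08:15:00", "vehicle": "A123BC", "event": "entry"}
hr_db_row = ("A123BC", "Ivanov I.I.")

def normalize(record, driver_index):
    """Bring a gate-log record to a unified event schema (assumed schema).

    The account index supplies the related moving object (the driver),
    so records about the vehicle and the driver can be merged into one event.
    """
    return {
        "timestamp": datetime.fromisoformat(record["ts"]).isoformat(),
        "object_id": record["vehicle"],
        "related_object": driver_index.get(record["vehicle"]),
        "event_type": record["event"],
    }

drivers = dict([hr_db_row])          # vehicle -> driver lookup from accounts
event = normalize(gate_log, drivers)
```

Once every source is mapped onto the same schema, the normalized events can be stacked into the single aggregate table the abstract refers to.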

4. Development of the case-based reasoning module for identification of acoustic-emission monitoring signals of complex technical objects [№2 за 2019 год]
Authors: Varshavskiy P.R. (VarshavskyPR@mpei.ru) - National Research University “MPEI”, Ph.D; Alekhin R.V. (r.alekhin@gmail.com) - National Research University “MPEI”; A.V. Kozhevnikov (antoko@yandex.ru) - National Research University “MPEI”;
Abstract: The paper examines important issues of developing a module for identifying signals obtained during acoustic emission monitoring of complex technical objects using case-based reasoning (CBR). Case-based methods and systems are used to solve a number of artificial intelligence problems (for example, modeling plausible reasoning (common sense reasoning), machine learning, intelligent decision support, intelligent information search, data mining, etc.). The storage and analysis of acoustic emission monitoring data of complex technical objects in digital form made it possible to ensure the required speed and multivariate data processing, which paper-based technology could not provide. As the amount of heterogeneous data grows, the amount of qualitative analysis work for an operator increases. To improve operator efficiency, it is proposed to solve the problem of distributing and identifying the acoustic emission monitoring data with CBR tools. The CBR module for identifying acoustic emission signals has been developed in C# using MS Visual Studio. To evaluate the effectiveness of the proposed solutions, the paper presents the results of computational experiments on real expert data obtained from acoustic emission monitoring of metal constructions.
Keywords: acoustic emission, data analysis, the automated information system, case-based approach
Visitors: 346
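
The retrieval step at the heart of any CBR cycle, finding stored cases most similar to a new acoustic emission signal, can be sketched with nearest-neighbour search. The feature vectors, labels and distance choice here are illustrative assumptions; the paper's actual module is written in C# and its case representation is not reproduced.

```python
import math

# Hypothetical case base: (feature vector of an AE signal, diagnosis label).
case_base = [
    ((0.9, 0.1, 0.3), "crack growth"),
    ((0.2, 0.8, 0.5), "friction noise"),
    ((0.4, 0.4, 0.9), "leak"),
]

def retrieve(query, k=1):
    """Return the k stored cases most similar to the query signal.

    Similarity is the (negated) Euclidean distance between feature vectors,
    a common default in nearest-neighbour CBR retrieval.
    """
    ranked = sorted(case_base, key=lambda case: math.dist(query, case[0]))
    return ranked[:k]

# A new signal close to the first stored case is identified by analogy.
label = retrieve((0.85, 0.15, 0.25))[0][1]
```

In a full CBR cycle the retrieved diagnosis would then be reused, revised by the operator if necessary, and retained as a new case.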

5. Optimization of the control initialization periodicity based on duplicated computing [№2 за 2019 год]
Authors: V.A. Bogatyrev (vladimir.bogatyrev@gmail.com) - The National Research University of Information Technologies, Mechanics and Optics (Professor), Ph.D; D.E. Lisichkin (slayjoker@mail.ru) - The National Research University of Information Technologies, Mechanics and Optics;
Abstract: The paper considers a duplicated computing system equipped with means of operational and test control. The effectiveness of failure detection is determined by the completeness of the operational control and the frequency of the test control. Shortening the control intervals decreases system readiness due to increasing time costs for testing, but at the same time it increases safety as a result of the decreased probability of the system functioning in states of undetected failure. In systems with duplicated computer nodes, load-sharing modes are possible, in which the nodes independently serve a query stream shared between them, as well as a duplicated-calculation mode, in which each query is simultaneously performed by two computer nodes with comparison of the results at control points. Duplicated systems with load sharing can potentially improve control efficiency through a periodic transition into the duplicated-calculation mode with comparison of results. This allows reducing the costs of test control of duplicated systems by initiating it only when the results of the duplicate calculations disagree. The work objective is to determine the optimal intervals of transition into the duplicated-calculation mode to ensure the maximum probability of system readiness for safe execution of functional requests while minimizing downtime and service delays. The authors propose a Markov model for determining the probabilities of system states, including readiness for safe operation, downtime and dangerous undetected failure states. Based on the proposed model, the paper analyzes the influence of the initialization periodicity of the duplicated-calculation mode on the readiness of the system for safe operation.
It shows the existence of an optimal initialization frequency of the duplicated-calculation mode, at which the probability of system readiness for safe operation reaches its maximum while system downtime is minimized.
Keywords: Markov model, control, duplicated calculations, readiness, reliability, optimality, testing
Visitors: 272
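
A Markov model of the kind the abstract describes can be sketched with a small discrete-time chain and power iteration to obtain the stationary state probabilities. The three states and all transition probabilities below are illustrative assumptions, not the authors' parameters.

```python
# Discrete-time Markov chain sketch with three system states:
# 0 = ready (safe operation), 1 = duplicated-calculation check, 2 = undetected failure.
# Transition probabilities per time step are illustrative values only.
P = [
    [0.90, 0.08, 0.02],  # from ready: stay, enter the check mode, fail undetected
    [0.95, 0.05, 0.00],  # the check mode detects faults and returns to ready
    [0.00, 1.00, 0.00],  # an undetected failure is caught at the next check
]

def steady_state(P, iters=500):
    """Power iteration: pi <- pi * P converges to the stationary distribution."""
    pi = [1.0, 0.0, 0.0]
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
    return pi

pi = steady_state(P)
readiness = pi[0]  # long-run probability that the system is ready for safe operation
```

Sweeping the probability of entering the check mode (here 0.08) and recomputing the readiness is how one would locate the optimal check periodicity in this simplified setting.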

6. Item-based recommender system with statistical learning for unauthorized customers [№2 за 2019 год]
Author: A.V. Filipyev (avfilipev@gmail.com) - Dubna State University, Institute of System Analysis and Management (Assistant);
Abstract: The paper aims to show that using statistical learning approaches in recommender systems makes personal communication with customers better than expert opinion does. The author uses the cosine similarity distance as the basis for developing a machine learning recommendation model. However, this distance has high calculation costs, so the paper considers ways of solving this problem. The probability matrix of purchasing one item together with another was calculated in order to weight the cosine similarity and to avoid the situation when unpopular products are put at the top of a recommendation list. A weighted sum model joins the cosine similarity and probability matrices and builds recommendation sequences. User-based collaborative filtering is the most popular algorithm for building personal recommendations. However, it is useless when it is impossible to identify a user in the system. The developed algorithm, based on cosine similarity distances, a probability matrix and weighted sums, allows building an item-to-item recommendation model. The main idea of this approach is to offer additional products to clients when only the products in a cart are known. The item-to-item recommendation algorithm has shown the advantages of using statistical machine learning approaches to improve communication with clients through a mobile application and a website. An integrated recommendation module has shown that developing a data-driven culture is the right path for many modern companies.
Keywords: machine learning, statistical learning, weighted sum model, probabilities, cosine similarity distance, cross selling, recommendation system
Visitors: 392
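
The combination the abstract describes, cosine similarity between item vectors weighted by a co-purchase probability via a weighted sum, can be sketched on toy basket data. The baskets, items and the 0.5/0.5 weighting are illustrative assumptions, not the paper's data or tuned weights.

```python
import math

# Toy purchase history: each basket is a set of items bought together.
baskets = [
    {"bread", "milk"}, {"bread", "butter"}, {"milk", "butter"},
    {"bread", "milk", "butter"}, {"bread", "jam"}, {"bread", "milk"},
]
items = sorted({i for b in baskets for i in b})

def vector(item):
    # Binary basket-occurrence vector for an item.
    return [1 if item in b else 0 for b in baskets]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def co_purchase_prob(a, b):
    # Empirical P(b in basket | a in basket), damping unpopular items.
    with_a = [bk for bk in baskets if a in bk]
    return sum(1 for bk in with_a if b in bk) / len(with_a) if with_a else 0.0

def recommend(cart_item, w=0.5):
    """Weighted sum of cosine similarity and co-purchase probability."""
    scores = {
        other: w * cosine(vector(cart_item), vector(other))
               + (1 - w) * co_purchase_prob(cart_item, other)
        for other in items if other != cart_item
    }
    return sorted(scores, key=scores.get, reverse=True)
```

With a loaf of bread in the cart, the frequently co-purchased milk outranks the rarely bought jam, which pure cosine similarity alone would rank higher than its popularity warrants.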

7. A method of situational forecasting of the emergence of novel Industry 4.0 technologies [№2 за 2019 год]
Authors: A.M. Andreev (arkandreev@gmail.com) - Bauman Moscow State Technical University (Associate Professor), Ph.D; D.V. Berezkin (berezkind@bmstu.ru) - Bauman Moscow State Technical University (Associate Professor), Ph.D; I.A. Kozlov (kozlovilya89@gmail.com) - Bauman Moscow State Technical University (Junior Researcher);
Abstract: The paper considers the problem of automated forecasting of the emergence and development of innovative technologies based on Big Data stream analysis. It shows that such forecasting is significant in the context of Industry 4.0. The authors analyze the existing approaches to forecasting and determine their advantages and shortcomings, taking into account the specifics of the task and the features of Big Data. It is proposed to solve the problem using the hybrid approach to data stream analysis developed by the authors. The approach allows automatic monitoring and forecasting of the development of situations based on processing streams of heterogeneous data represented by text documents, numerical series, and database records. The process of data stream analysis includes detecting events, forming situations, identifying possible scenarios of their further development, and preparing proposals for decision makers. The authors describe event models that are used for processing streams of textual and structured data. An incremental clustering method detects IT events in text document flows. This method is also used in processing the structured data stream to form situational chains reflecting the development of innovative technologies over time. The method for forming scenarios of the further development of the analyzed innovative technology is based on the principle of historical analogy. The proposed method allows determining the most probable scenario using logistic regression, as well as identifying the most optimistic and pessimistic scenarios via the Analytic Hierarchy Process. The authors describe a way to supplement each scenario with recommendations for decision makers regarding the measures that should be taken to facilitate or hinder the development of the technology according to this scenario.
The paper provides examples of situations detected in textual and structured data flows, as well as an example of scenarios and recommendations generated for one of the situations.
Keywords: fourth industrial revolution, industry 4.0, situational analysis, forecasting, decision support system, scenario analysis, clusterization
Visitors: 288
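
The incremental clustering step used for event detection can be sketched as a single pass over incoming document vectors: each vector joins the closest existing cluster if its cosine similarity to that cluster's centroid exceeds a threshold, otherwise it opens a new cluster. The threshold, the running-mean centroid update and the toy vectors are assumptions for illustration, not the authors' exact method.

```python
import math

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def incremental_cluster(vectors, threshold=0.8):
    """Single-pass clustering: join the closest centroid or open a new cluster."""
    centroids, clusters = [], []
    for vec in vectors:
        sims = [cos(vec, c) for c in centroids]
        if sims and max(sims) >= threshold:
            k = sims.index(max(sims))
            clusters[k].append(vec)
            n = len(clusters[k])
            # Running-mean update keeps the centroid current without a re-scan.
            centroids[k] = [(c * (n - 1) + x) / n
                            for c, x in zip(centroids[k], vec)]
        else:
            centroids.append(list(vec))
            clusters.append([vec])
    return clusters

# Toy term-frequency vectors: two near-duplicate news items and one outlier.
docs = [(1, 1, 0), (1, 0.9, 0), (0, 0, 1)]
groups = incremental_cluster(docs)
```

Because the pass is incremental, new documents from the stream can be folded into existing event clusters without reprocessing the history, which is what makes the approach suitable for continuous Big Data streams.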

8. Social features of mobile application development [№2 за 2019 год]
Author: A.I. Mostyaev (reistlin12@gmail.com) - Lomonosov Moscow State University, Faculty of Computational Mathematics and Cybernetics;
Abstract: Modern mobile application development technologies evolve at an unprecedented rate, chasing ever newer user demands. Developers work hard not to fall behind and try to maintain the popularity of their applications in all kinds of ways, introducing new and striking features and options. The paper describes the most common features of mobile applications and their support in comparison with desktop analogs. Considering these features during mobile application development and maintenance should eliminate misunderstanding between mobile users and mobile developers; it will be useful for both sides. The paper starts with a quick overview of mobile application history, giving a picture of the speed of evolution in the industry. Further, it describes the most common mobile application features. Special attention is given to both technical details and the usability of applications. The following features are highlighted: close integration with an operating system, short sessions, internet service integration, and the variety of mobile devices. The development features are a special life cycle and the integration of third-party internet services. Special attention is paid to the quality of application localization, localization features for some countries, and working with text and visual data in application stores. In conclusion, the paper gives a list of requirements for a modern successful application. The author also mentions the interesting fact that the development features of these applications are related to current social trends.
Keywords: application support, application development, application architecture, cross-platform applications, mobile application
Visitors: 223

9. Forecasting the state of a technical object using machine learning methods [№2 за 2019 год]
Authors: Klyachkin V.N. (v_kl@mail.ru) - Ulyanovsk State Technical University (Professor), Ph.D; D.A. Zhukov (zh.dimka17@mail.ru) - Ulyanovsk State Technical University;
Abstract: State identification of a technical object during its operation enables early detection of malfunctions and in-service repair. The diagnostics is frequently confined to splitting object states into two classes: healthy and faulty. When solving this problem, it is possible to use machine learning methods for binary classification. The basic data in this paper are the known results (precedents) of a system state evaluation: the technical system is healthy or faulty at predetermined values of monitored indicators. There are many different approaches to binary classification: classical statistical models, methods focusing on machine learning, composite methods and others. In order to improve the quality of forecasting, it is appropriate to use an aggregated approach, i.e. a combination of several classification methods. The program developed in Matlab allows forecasting a system state by its predetermined operation indicators. The user may select the validation set volume, the learning method, and recognition quality criteria. The authors have conducted a numerical study on two examples. The evaluation of the good condition of a hydraulic unit took into account a vibration stability criterion according to the results of monitoring sensors installed in various places. The aggregated classifier that includes gradient boosting and logistic regression showed the best result. In the analysis of a water treatment system with respect to drinking water quality, the maximum F-criterion value was achieved when aggregating a neural network and bagging of decision trees.
Keywords: technical diagnostics, binary text classification, aggregated approach, matlab, hydroelectric set, water treatment system, f-criterion
Visitors: 326
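
The aggregation idea, combining the outputs of several binary classifiers to improve forecasting quality, can be sketched with soft voting: averaging the fault probabilities of the member models. The two stand-in models and their coefficients below are illustrative assumptions; the paper's classifiers are trained in Matlab on real monitoring data.

```python
import math

# Two toy base classifiers returning P(faulty) for a vibration reading x.
# Coefficients and the threshold are illustrative stand-ins for trained models.
def logistic_model(x):
    # Logistic-regression stand-in: sigmoid of a linear score.
    return 1 / (1 + math.exp(-(2.0 * x - 5.0)))

def tree_model(x):
    # Decision-stump stand-in for a tree-based member (e.g. boosting).
    return 0.9 if x > 2.7 else 0.1

def aggregated(x, weights=(0.5, 0.5)):
    """Soft-voting aggregation: weighted average of member probabilities."""
    p = weights[0] * logistic_model(x) + weights[1] * tree_model(x)
    return p, ("faulty" if p >= 0.5 else "healthy")

p, label = aggregated(3.2)
```

In practice the member weights would be chosen on a validation set, and the F-criterion mentioned in the abstract would be computed from the resulting labels against the known precedents.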

10. The architecture of a production processes monitoring system in terms of geographically distributed production [№2 за 2019 год]
Authors: G.M. Solomakha (gsolomakha@yandex.ru) - Tver State University (Professor), Ph.D; S.V. Khizhnyak (stanislav.khizhnyak@gmail.com) - Tver State University;
Abstract: The paper describes the architecture of a production process monitoring system, which provides an opportunity to receive relevant and detailed information on geographically distributed production, as well as to observe indicators that are aggregations of other indicators. The system can work in a distributed mode, which simplifies implementation and operation under geographical distribution of production. All components and subsystems, as well as the protocol and their coordination arrangements, are designed both for geographically distributed and for other kinds of production. The paper presents the main drawbacks of the existing solutions regarding work under conditions of geographically distributed production. It also outlines the requirements for a system architecture that has the qualities necessary for working in such conditions, formed on the basis of the identified drawbacks. The paper describes the main subsystems and components of the proposed system, their purpose, functions and operation principles. There is a description of the interaction protocol between subsystems and components, and the approach to the development of this protocol is justified. There is a description of the data processing order and the data storage method, as well as its format and signature. The data is presented in JSON format. The event model is selected as the exchange model between components. The paper justifies the approach to architecture design, presents the main technologies and tools for system development, and justifies this choice. There are architecture schemes for various combinations of distributed components. Several examples of the functioning of individual components and their interaction are considered. Based on the conducted research, the authors draw conclusions and propose possible prospects for the development of the covered topic.
Keywords:
Visitors: 209
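
The exchange model the abstract names, JSON-encoded events passed between distributed components, can be sketched as a serialize/deserialize round trip. The message fields and names below are hypothetical illustrations of such an event, not the authors' actual signature.

```python
import json

# Hypothetical event message exchanged between monitoring subsystems;
# the field names and values are assumptions for illustration only.
event = {
    "type": "indicator.updated",
    "source": "line-3/press-7",
    "payload": {"indicator": "output_rate", "value": 118.5, "unit": "items/h"},
    "ts": "2019-04-02T08:15:00Z",
}

wire = json.dumps(event)      # what the publishing component puts on the wire
received = json.loads(wire)   # what a subscribing component decodes
```

An event model like this decouples the components: a subscriber only needs the agreed JSON signature, not knowledge of the publisher's internals, which is what makes the architecture workable across geographically distributed sites.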
