ISSN 0236-235X (P)
ISSN 2311-2735 (E)

Journal influence

Higher Attestation Commission (VAK) - K1 quartile
Russian Science Citation Index (RSCI)


Articles of journal issue No. 3, 2020.

11. Parallel processes and programs: models, languages, implementation in systems [No. 3, 2020]
Authors: V.P. Kutepov, A.A. Efanov
Visitors: 5726
The authors propose models and languages of parallel processes and programs that have wider possibilities for describing parallelism than well-known models and languages created for this purpose. The concept of an actor, the execution of acts, and the interaction of actors in the performance of a process are the basic elements of a process description. The general form of defining a process is a set of recursive equations whose right parts are groups of actors that interact during process execution. The graphic and textual forms of process description, together with the principle of initializing the execution of an act by the readiness of the signals (data) arriving at the inputs of the corresponding actor from the actors influencing it, create the necessary conditions for an effective description of processes, their analysis, modification, and parallel execution. The interpretation of process actors as procedures provides a direct transition from the process language to a high-level language of modular multi-flow parallel programming. The paper considers the main problems of implementing the proposed languages on computer systems: the unambiguous naming of the acts generated during process interaction and the organization of process control on computer systems. The classical algebraic models of processes are due to R. Milner and C. Hoare. In this paper, we consider a more general model and language of processes, which, on the one hand, makes it possible to describe groups of interacting processes when the general process is defined as a system of process equations. On the other hand, a dynamically generated structural description of cause-and-effect relationships between process actors in groups and the use of an actor's activation mechanism upon receipt of the necessary signals (data) at its respective inputs provide, without limitation, the implementation of parallelism and asynchrony when performing process acts. The use of variables in the system of equations describing the process creates the conditions for the dynamic generation of process groups. The model described in the article is the basis for creating a parallel programming language implemented on computer systems.
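For illustration only, the following toy sketch shows the kind of readiness-based activation the abstract describes: an actor's act fires once signals have arrived at all of its inputs, and the result is propagated to dependent actors. The names (Actor, send, downstream) are hypothetical and are not taken from the authors' language.

```python
# Toy data-flow "actor" whose act fires once all inputs have received signals,
# mirroring the readiness-based activation described above. Illustrative sketch only.

class Actor:
    def __init__(self, name, n_inputs, act, downstream=None):
        self.name = name
        self.act = act                      # function executed when inputs are ready
        self.slots = [None] * n_inputs      # input signals (data) received so far
        self.downstream = downstream or []  # (actor, input index) pairs to notify

    def send(self, index, value):
        """Deliver a signal to one input; fire the act when every input is filled."""
        self.slots[index] = value
        if all(s is not None for s in self.slots):
            result = self.act(*self.slots)
            self.slots = [None] * len(self.slots)   # reset for the next activation
            for target, slot in self.downstream:
                target.send(slot, result)           # propagate to influenced actors


if __name__ == "__main__":
    printer = Actor("print", 1, lambda x: print("result:", x))
    adder = Actor("add", 2, lambda a, b: a + b, downstream=[(printer, 0)])
    adder.send(0, 2)
    adder.send(1, 3)   # second signal arrives -> act fires -> prints "result: 5"
```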

12. Application of transfer learning for semiotic models to the foraging problem with real robots [No. 3, 2020]
Authors: V.V. Vorobev, M.A. Rovbo
Visitors: 5481
The paper considers the problem of applying a transfer learning algorithm for agents with semiotic models of the world to the foraging task with real robots. The robot needs to collect randomly placed food items, which, when collected, reappear in a new random place within the polygon. The mobile robot is controlled by an agent with a model of the world that describes sensor readings as predicates. The agent makes decisions based on a state-action value estimation table for Q-learning. The agent is pre-trained on a simplified model environment with discrete states, in which actions are performed with a guaranteed deterministic outcome. In a real environment and in its physics-aware model, actions can be performed incorrectly due to scheduler errors, localization errors, and other problems, and the environmental state is determined by analyzing the sensor information gathered from the continuous world. The authors show the implementability of the corresponding interfaces and the portability of the concept from a simplified model environment both to its more complete, physics-aware model and to a real robot. The transfer learning application is successful, but the final performance of the agent is reduced (probably due to the incorrect assumption of a deterministic world in a real environment), and the robot needs additional learning after the transfer. Gazebo was used as a physics-aware simulator, while the real polygon was equipped with special markers and cameras for localization. The authors also used elements of augmented reality in the form of a virtual food module.
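As a minimal sketch of the state-action value table mentioned above, the code below implements generic tabular Q-learning and shows how a pre-trained table could be reused as the starting point in a second environment. The environment interface (reset, actions, step) and all hyperparameters are assumptions for illustration, not the authors' actual foraging setup.

```python
# Generic tabular Q-learning sketch; the env interface is hypothetical.
import random
from collections import defaultdict

def train(env, q=None, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    q = q if q is not None else defaultdict(float)   # (state, action) -> value estimate
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            actions = env.actions(state)
            if random.random() < eps:                # epsilon-greedy exploration
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = 0.0 if done else max(q[(next_state, a)] for a in env.actions(next_state))
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

# Transfer learning in this setting amounts to reusing the table:
# q_sim = train(simple_discrete_env)      # pre-train in the simplified model environment
# q_real = train(physics_env, q=q_sim)    # continue (additional) learning after transfer
```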

13. A simulation program for thermophysical processes in ore-thermal furnaces under changing control actions [No. 3, 2020]
Authors: A.Yu. Puchkov, S.V. Panchenko, M.V. Chernovalova
Visitors: 3058
The authors developed an algorithm and software for modeling the thermophysical processes of the reactor zone of an ore-thermal furnace in response to changes in control actions. The algorithm is based on two mathematical models that use a hydrodynamic approach to describe sludge formation in the electrothermal processing of pelletized products of roasting machines in ore-thermal furnaces. The first model describes thermophysical processes and represents the stoichiometry of the reaction components and the non-steady states of a phosphoric ore-thermal furnace during the processing of pelletized industrial waste in the lumped-parameter approximation. The second model describes the sludge formation process and characterizes the particle entrainment; these particles are a source of phosphorus sludge contaminants at the furnace outlet. The model is based on the fact that the key dusting factor is melt drip entrainment by bubbling gas. The considered furnaces are used in a multistage process for producing yellow phosphorus from the waste of apatite-nepheline ores, which accumulates in large quantities in the dumps of mining-and-processing plants. Such dumps represent industrial mineral deposits and need to be recycled: in addition to a relatively rich chemical composition that should be put to use, they cause appreciable harm to the environment and human health. Therefore, developing technological systems for waste disposal and creating software that simulates the processes in such systems with the aim of their comprehensive optimization is a relevant research problem. The software for modeling the thermophysical processes of the reactor zone of the ore reduction furnace will be further used as an integral part of a software complex whose task is to calculate the optimal (energy-efficient) modes of the technological system, including granulators, conveyor-type calciners, and ore-thermal furnaces, in order to ensure environmentally friendly processing of apatite-nepheline ore waste.
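The sketch below illustrates, in the most schematic way, what a lumped-parameter response to a changed control action looks like: a single energy balance integrated in time while the electrical power input is stepped. The balance equation, coefficients, and function names are placeholders and do not reproduce the authors' furnace models.

```python
# Schematic lumped-parameter sketch: reactor-zone temperature responding to a step
# change in power input. All coefficients are assumed values for illustration.

def simulate(power_profile, t_end=3600.0, dt=1.0,
             heat_capacity=5.0e6,      # J/K, effective lumped heat capacity (assumed)
             loss_coeff=2.0e3,         # W/K, heat-loss coefficient (assumed)
             t_ambient=300.0):
    """Explicit-Euler integration of C*dT/dt = P(t) - k*(T - T_ambient)."""
    temps, t, temp = [], 0.0, t_ambient
    while t < t_end:
        power = power_profile(t)
        temp += dt * (power - loss_coeff * (temp - t_ambient)) / heat_capacity
        temps.append((t, temp))
        t += dt
    return temps

# Example control action: power is stepped up halfway through the run.
trajectory = simulate(lambda t: 2.0e6 if t < 1800 else 3.0e6)
```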

14. Software for collecting, processing, and transmitting data on the technical condition of the electric motor collector surface [No. 3, 2020]
Authors: A.L. Zolkin, V.S. Tormozov, T.N. Bushtruk, M.V. Petrushova
Visitors: 3903
The paper describes a comprehensive methodology and presents the results of developing a software product for monitoring the wear of collector plates of electric locomotive motors under repair-shop conditions using modern information technologies. The authors offer a software product that allows quick and highly accurate calculation of the wear parameters of the collector plates of electric locomotives depending on their mileage. The low reliability of locomotive electric motors leads to failures en route. At the same time, damage to electric motors causes the greatest expenditure of time and money on unscheduled repairs, as well as the longest downtime of trains on the line. The collector is one of the units of a locomotive electric motor that is most difficult to manufacture and most critical for operation; its surface reflects the presence of hidden defects and violations of the operating mode of the electric motor. Thus, using information technologies in the repair and testing of locomotive electric motors, creating an electronic database, and forecasting the service life in order to optimize trouble-free operation terms are now becoming relevant. When repairing electric motors in locomotive depots, it is proposed to use software that will improve control over the quality of the repair of this unit. Introducing software for the collection, processing, and transmission of data on the technical condition of the electric motor collector surface in motive-power depots will bring a number of positive factors to the repair industry that are difficult to quantify economically, including increased discipline and work culture and stricter adherence to the measurement procedure.
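A small illustration of the kind of dependence such a product computes is given below: fitting collector-plate wear as a function of locomotive mileage. The data points and the linear model are placeholders; the abstract does not specify the actual wear-calculation method.

```python
# Hypothetical example: wear of a collector plate as a linear function of mileage.
import numpy as np

mileage_km = np.array([0, 50_000, 100_000, 150_000, 200_000], dtype=float)
wear_mm    = np.array([0.0, 0.4, 0.9, 1.3, 1.8])           # placeholder measurements

slope, intercept = np.polyfit(mileage_km, wear_mm, deg=1)   # simple linear trend
predicted = slope * 250_000 + intercept                     # forecast wear at 250 000 km
print(f"predicted wear at 250 000 km: {predicted:.2f} mm")
```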

15. Software for identification and correction of non-standard errors of measuring instruments in the process of induction soldering [No. 3, 2020]
Authors: A.V. Milov, V.S. Tynchenko, A.V. Murygin
Visitors: 4448
The paper presents the development of a software module for the identification and correction of non-standard errors in the process of induction soldering of spacecraft waveguide paths. The paper analyzes the features of the technological process of induction soldering of thin-walled aluminum waveguide paths of spacecraft. Based on this analysis, the authors concluded that it is appropriate to use intelligent methods to identify and correct non-standard errors of measuring instruments in the process of induction soldering, and that the chosen method of identifying non-standard errors should be implemented as a software module for the existing control system for induction soldering of spacecraft waveguide paths. The developed module allows configuring the initial parameters for creating and training an artificial neural network used to correct non-standard measurement errors in the process of induction soldering. The module allows configuring the artificial neural network in terms of activation functions, the number of hidden layers, the number of artificial neurons per layer, and the training duration. The resulting neural network model can be saved to disk for use in the induction soldering automated control system. The module also implements the functionality for generating a control action to correct non-standard errors. Using the module improves the quality of induction soldering process control by reducing the influence of errors on the course of this process. The software product is an application for the Windows operating system compatible with Windows XP/7/8/8.1/10. The application was developed using the Python programming language and the PyQt5 framework. Data from actual technological processes were used to verify the effectiveness of the proposed software module. The test results showed high efficiency of identification and correction of non-standard errors in the process of induction soldering.
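The sketch below shows how the configurable items listed above (activation function, number of hidden layers, neurons per layer, training duration) might map onto a small feed-forward regressor. It uses scikit-learn purely for brevity; the authors' actual network code, data, and interfaces are not given in the abstract, and the build_corrector helper and the sample data are hypothetical.

```python
# Illustrative sketch of a configurable feed-forward network for error correction.
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_corrector(hidden_layers=2, neurons=16, activation="relu", max_iter=500):
    # hidden_layer_sizes, activation and max_iter mirror the configurable parameters
    # described in the abstract (layers, neurons per layer, training duration).
    return MLPRegressor(hidden_layer_sizes=(neurons,) * hidden_layers,
                        activation=activation, max_iter=max_iter)

# Hypothetical data: raw sensor readings -> corrected values.
raw = np.random.rand(200, 3)
corrected = raw.sum(axis=1)            # placeholder target
model = build_corrector().fit(raw, corrected)
correction = model.predict(raw[:5])    # would feed the control-action generation step
```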

16. Projection of technical objects in images into a metric space using deep neural networks for the detection problem [No. 3, 2020]
Authors: O.V. Tolstel, A.E. Shirkin, A.L. Kalabin
Visitors: 3928
The paper presents an algorithm for vectorizing images containing technical objects; the technical objects are machine-building parts, fasteners, and hardware. Vectorization here refers to the transformation of an image into a vector for which the Euclidean distance has semantic meaning. The algorithm was created to improve a system for assessing object position, where the number of object types to be recognized is variable. The authors propose an approach to forming a metric space for images, where an image transformed into a vector can be compared with a reference image using the l2 metric, thereby solving the problem of a non-constant number of classes. To add a new class, it is enough to add a reference image represented as a vector to the system and find the distance to it; if this distance is smaller than the distances to the other references, then this reference represents the type of object submitted to the system's input. The approach is implemented with deep neural networks in which the last layer is removed and the penultimate layer, representing the top level of features extracted from the image, is kept. Such a neural network is trained with the triplet loss function, teaching it to vectorize images into the metric space. The program implementing the proposed algorithm is developed in Python 3.6 using the Jupyter Lab integrated environment for the Ubuntu 18.04 operating system. The paper presents the results of an experiment on using the proposed algorithm, which consisted in attributing the obtained images to a particular reference image. To assess the quality of the algorithm, ranking metrics for search problems were used, where only the very first object in the list of nearest objects is evaluated. The developed algorithm can be used in technical vision systems for robotic manipulators; in the future, it will be used as part of the control system for object grasping by a robotic manipulator.
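For illustration, the snippet below shows only the nearest-reference step described above: images are assumed to be already embedded into vectors by the (unspecified) truncated network, and a new object is assigned the class of the closest reference vector under the l2 metric. The reference vectors and class names are invented placeholders.

```python
# Nearest-reference classification in an embedding space (illustrative sketch).
import numpy as np

references = {                      # class name -> reference embedding (assumed given)
    "bolt":   np.array([0.9, 0.1, 0.0]),
    "nut":    np.array([0.1, 0.8, 0.2]),
    "washer": np.array([0.0, 0.2, 0.9]),
}

def classify(embedding, refs=references):
    """Return the class whose reference vector is nearest in Euclidean (l2) distance."""
    return min(refs, key=lambda name: np.linalg.norm(embedding - refs[name]))

print(classify(np.array([0.85, 0.15, 0.05])))   # -> "bolt"
# Adding a new object type only requires adding one more reference vector to the dict.
```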

17. Development of a support vector machine modification for solving the classification problem with domain restrictions [No. 3, 2020]
Authors: I.S. Mikhailov, Zayar Aung, Ye Thu Aung
Visitors: 8739
One of the data mining methods for solving the classification problem is the support vector machine (SVM). The method's main idea is to translate the source vectors into a higher-dimensional space using the kernel method to ensure the linear separability of classes and to find a separating hyperplane with the maximum margin between the hyperplane and the support vectors in this space. Despite the high accuracy of the method, it also has disadvantages, including the lack of a general approach to automatic kernel selection and the high computational complexity of the method. In this regard, the authors developed a modification of the support vector algorithm (the FS-SVM algorithm) to solve the classification problem with restrictions on the problem domain. The authors formulated the "functional separability" restriction on classes, imposed on the problem domain, under which the FS-SVM algorithm can be applied, and provide a theoretical study of the validity of these assumptions. The paper introduces formal definitions of "functional separability" based on the continuity and monotonicity of the separating function and on the derivative of the discriminant function, and compares the concept of "functional separability" with class convexity. The main blocks of the proposed FS-SVM algorithm are considered in the paper: the search for support elements, the determination of points of the separating hypersurface, and the construction of the separating hypersurface as a piecewise-linear function in projection onto the coordinate axes under consideration. In further development of the algorithm, this function is proposed to be approximated by a Chebyshev polynomial to obtain a smooth curve. As an example of a problem domain in which the FS-SVM algorithm can be applied, the paper presents the problem of classifying oil-water-gas flow regimes based on initial data obtained at the oil well mouth.
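For context, the sketch below shows only the baseline described in the first paragraph: a standard kernel SVM on synthetic, non-linearly-separable data, where the kernel must be chosen manually (one of the drawbacks noted above). The FS-SVM modification itself (support-element search, piecewise-linear separating hypersurface) is not reproduced here.

```python
# Baseline kernel SVM on synthetic data (illustration only, not FS-SVM).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)   # classes not linearly separable

clf = SVC(kernel="rbf", C=1.0)        # kernel is picked by hand, not automatically
clf.fit(X, y)
print(clf.score(X, y), len(clf.support_vectors_))
```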

18. Development of a problem-oriented system for managing the process parameters of an underground well uranium leaching mine [No. 3, 2020]
Author: D.R. Podrezov
Visitors: 3602
The management information system is designed to support timely, effective, and well-grounded management decisions in operational and tactical production planning. The problem-oriented management system is based on a production-economic model of the enterprise's operation. The model's main function is to calculate the production and economic indicators of the enterprise's functioning for a given amount of resource expenditure during the implementation of the production program under various scenarios of the development of environmental factors. The prime purpose of the system is to model the enterprise's activity and search for a set of economic, production, and technological indicators that achieve the goals of the production program. The purpose of the work is to create a problem-oriented system for managing indicators of technological processes of an underground well leaching mine in order to provide information and analytical support for procedures and processes that allow quick analysis, modeling, prediction, and visualization of various scenarios for implementing the business processes and the production program of the enterprise. The study is novel in that it implements a procedure for modeling performance indicators and evaluating reserves of technological blocks of an underground borehole uranium leaching mine in order to determine the resource potential of the deposit as a whole, based on the analysis of the dynamics of basic technical and economic indicators and scenario conditions for developing the business processes of the enterprise. Using the system will make it possible to set targets for the enterprise's operation over different planning horizons. For the purposes of operational and tactical production planning, a budget for material resources is calculated, and a portfolio of projects for the production program and consolidated reporting forms are formed according to external environmental development scenarios. The developed procedures make it possible to implement production planning functions directly for individual technological blocks of the mine and to form optimal production programs for the development of the technological polygon as a whole on the basis of the obtained model versions of the opened reserves and the formed system of conditions and restrictions.

19. Semantic models and the method of coordinated development of knowledge bases [No. 3, 2020]
Authors: N.A. Gulyakina, I.T. Davydenko
Visitors: 4598
The paper discusses an approach to creating easily modifiable hybrid knowledge bases based on semantic networks with a basic set-theoretic interpretation. The paper proposes a semantic model of knowledge bases that includes a set of top-level ontologies ensuring the compatibility of various types of knowledge. A distinctive feature of the proposed model is the use of signs of knowledge base fragments, called structures, rather than atomic elements of the semantic network, as the basic element of the model. The proposed model provides the consistency of different types of knowledge within the knowledge base, as well as the ability to structure knowledge bases according to an arbitrary set of features. In addition, the paper describes a method for knowledge base development based on this model, focused on the coordinated development of a knowledge base by a distributed team of developers. A distinctive feature of the method is its focus on the reuse of previously developed knowledge base components of various complexity, as well as the presence of a formal ontology that describes the activities of knowledge base developers in accordance with the method. The proposed method is implemented as a system for the collective development of knowledge bases, which is embedded as a typical subsystem in each developed system and thus makes it possible to develop a knowledge base directly during its operation. The use of the proposed models, methods, and tools ensures the semantic compatibility of various fragments included in the knowledge base and reduces the time spent on knowledge base development.

20. Data structures and the Quine–McCluskey method modification for minimizing normal forms [No. 3, 2020]
Authors: N.I. Gdansky, A.A. Denisov, N.L. Kulikova
Visitors: 4542
Logical methods for the analysis and synthesis of systems of various nature are usually based on describing their structures and the processes in them as Boolean functions, which are equivalently reduced to conjunctive normal forms to unify the representation. For the initial systems and processes, as a rule, the main optimality criterion is the minimum number of constituent components, which simplifies the structure, reduces cost, and increases reliability; therefore, for the model conjunctive normal forms, the problem of minimizing them is of great practical importance. The efficiency of an algorithm that processes complex objects (including conjunctive normal forms) significantly depends on the main and auxiliary data structures used to represent these objects. Therefore, based on an analysis of existing structures, a new complex three-level data structure for representing conjunctive normal forms has been developed. The lower level of the combined structure is the data structure for a single clause. Lists of clauses with the same number of letters form the middle level. An array of lists, ordered by the lengths of their clauses, defines the entire conjunctive normal form at the top level. The use of lists, pointers to them and to single elements, and the ordering of clauses by length make it possible to radically reduce the operations for rewriting and reordering information when converting conjunctive normal forms. Based on the developed complex data structure, the authors developed a modification of the well-known Quine–McCluskey method used to reduce perfect normal forms. The combined use of the proposed data structure and the modified method makes it possible to significantly reduce the total number of operations when minimizing conjunctive normal forms compared to the basic version of the Quine–McCluskey method. This is achieved by reducing data re-processing; in addition, the use of special logical conditions further reduces the total number of letter checks in clauses when comparing them.
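A compact sketch of the three-level structure described above is given below: clauses (bottom level) are grouped into lists by length (middle level), and an array indexed by clause length holds those lists (top level). The clause representation as frozensets of literals and the example CNF are assumptions for illustration, not the authors' implementation.

```python
# Three-level representation of a conjunctive normal form, ordered by clause length.
from collections import defaultdict

def build_cnf_structure(clauses):
    """clauses: iterable of frozensets of literals, e.g. frozenset({'x1', '~x2'})."""
    by_length = defaultdict(list)           # middle level: lists of equal-length clauses
    for clause in clauses:
        by_length[len(clause)].append(clause)
    max_len = max(by_length) if by_length else 0
    return [by_length.get(n, []) for n in range(max_len + 1)]   # top level: array of lists

cnf = [frozenset({"x1"}), frozenset({"~x1", "x2"}), frozenset({"x1", "~x2", "x3"})]
structure = build_cnf_structure(cnf)
# Ordering clauses by length lets pairwise comparisons (as in Quine–McCluskey-style
# merging) be restricted to neighboring buckets instead of rescanning the whole form.
```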
