Articles of journal issue № 3, 2018.

1. Methods of developing graphics subsystem drivers [№ 3, 2018]
Authors: I.A. Efremov, K.A. Mamrosenko, V.N. Reshetnikov
The paper describes the problems of developing software that supports interaction between systems-on-a-chip (SoC) and the Linux operating system (OS). The OS architecture provides various instruments for creating a driver, i.e. a component that allows data exchange with the device through a software interface. Developing drivers for an open source OS is difficult due to continuous changes in kernel functions and structure. The paper describes the graphics subsystem structure and components. The subsystem is a set of components located in different address spaces of OS virtual memory. The components interact through a system call interface. The graphics engine is programmed by filling a command buffer. Each application has a graphics engine context that contains its own command buffer and all the data used by the graphics engine for rendering and calculations: coordinates, normal vectors, colors, textures. There are several approaches to setting the graphics mode; the most reasonable solution is to use the KMS (Kernel Mode Setting) module. Key manufacturers of microprocessors and graphics cards commonly use such modules. It is necessary to ensure interaction between OS kernel modules and user space by creating specific system calls. These system calls regulate low-level operations with the device and allow taking full advantage of the graphics unit capabilities. Using FPGA-based prototyping platforms allows verifying software functionality, obtaining performance characteristics and finding errors in the SoC hardware design at early stages. Debugging kernel modules is time-consuming due to limitations imposed both by the prototyping platform and by the OS. In addition, errors in kernel code are difficult to reproduce, which further complicates debugging of kernel modules. The paper considers approaches to implementing the Linux KMS module and graphics subsystem components that provide correct interaction between the OS and the SoC display controller.
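
As a rough illustration of the kind of user-space-to-kernel interface the abstract mentions (driver-specific system calls that accept a filled command buffer), the C sketch below submits a buffer through a hypothetical ioctl. The device node name, ioctl code and structure layout are invented for the example and are not the interface described in the paper.

/* Illustrative sketch only: a hypothetical driver-specific ioctl used by a
 * user-space component to submit a filled command buffer to the graphics
 * engine. The device node, ioctl code and structure layout are assumptions. */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>
#include <unistd.h>

struct cmdbuf_submit {            /* hypothetical submission descriptor      */
    uint64_t buf_ptr;             /* user pointer to the command buffer      */
    uint32_t buf_size;            /* size of the buffer in bytes             */
    uint32_t context_id;          /* per-application graphics context id     */
};

#define GPU_IOCTL_SUBMIT _IOW('G', 0x01, struct cmdbuf_submit)  /* assumed code */

int main(void)
{
    uint32_t commands[256];
    memset(commands, 0, sizeof(commands));          /* fill with engine commands */

    int fd = open("/dev/gpu0", O_RDWR);             /* assumed device node */
    if (fd < 0) { perror("open"); return 1; }

    struct cmdbuf_submit req = {
        .buf_ptr = (uint64_t)(uintptr_t)commands,
        .buf_size = sizeof(commands),
        .context_id = 1,
    };
    if (ioctl(fd, GPU_IOCTL_SUBMIT, &req) < 0)      /* hand the buffer to the kernel module */
        perror("ioctl");

    close(fd);
    return 0;
}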

2. Research on compression of raster images using artificial neural networks [№ 3, 2018]
Authors: A.A. Genov, K.D. Rusakov, A.A. Moiseev, V.V. Osipov
Modern rates of growth of the information stored on hard disks and transferred over the Internet and local enterprise networks have made it necessary to solve the problem of compressing, transferring and storing data. Most of the transferred data is multimedia content. Nowadays, algorithms for compressing visual information based on neural networks are becoming more popular. Unlike classical algorithms, which are based on eliminating redundancy, these algorithms rely on artificial neural networks. The field is relevant due to the development of mathematical algorithms for network learning, which will improve existing compression methods in the future. An analysis of publications showed that there is currently little specific information about the influence of an artificial neural network's architecture on the learning process and on the quality of its work on real multimedia content. An important task is to select the network topology that is most suitable for compressing visual information. The purpose of the article is to describe the capabilities of one type of artificial neural network, the multilayer perceptron, for compressing and recovering images of an arbitrary type. The paper analyzes topologies of artificial neural networks, algorithms for their learning, and the efficiency of their work. It also describes the “bottleneck” architecture, which is most often used in solving the problem of image compression and recovery. The authors give one of the ways of encoding and decoding the data obtained during network operation. The paper describes a computational experiment and gives its results. The experiment showed that using a multilayer perceptron with an input vector of more than eight values turned out to be less effective. As a result, the authors propose the network architecture most suitable for use in practice.
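
For readers unfamiliar with the “bottleneck” architecture, the following C sketch shows a forward pass through a toy bottleneck perceptron (8 inputs, 3 hidden units, 8 outputs). The block size, layer width and random untrained weights are assumptions made for illustration only, not the configuration studied in the paper.

/* Illustrative sketch: encode/decode of one pixel block through a
 * "bottleneck" multilayer perceptron (N_IN -> N_HIDDEN -> N_IN, N_HIDDEN < N_IN).
 * A real compressor would train the weights and quantize the hidden code. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N_IN     8      /* pixels per input block                */
#define N_HIDDEN 3      /* bottleneck width: the compressed code */

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

/* y = sigmoid(W * x), a single fully connected layer */
static void layer(const double *w, const double *x, double *y,
                  int n_out, int n_in)
{
    for (int i = 0; i < n_out; i++) {
        double s = 0.0;
        for (int j = 0; j < n_in; j++)
            s += w[i * n_in + j] * x[j];
        y[i] = sigmoid(s);
    }
}

int main(void)
{
    double w_enc[N_HIDDEN * N_IN], w_dec[N_IN * N_HIDDEN];
    double block[N_IN], code[N_HIDDEN], restored[N_IN];

    /* untrained random weights, just to make the sketch runnable */
    for (int i = 0; i < N_HIDDEN * N_IN; i++) {
        w_enc[i] = (rand() / (double)RAND_MAX - 0.5) * 0.2;
        w_dec[i] = (rand() / (double)RAND_MAX - 0.5) * 0.2;
    }
    for (int i = 0; i < N_IN; i++)
        block[i] = (i % 2) ? 0.8 : 0.2;           /* a toy pixel block */

    layer(w_enc, block, code, N_HIDDEN, N_IN);    /* encode: 8 -> 3    */
    layer(w_dec, code, restored, N_IN, N_HIDDEN); /* decode: 3 -> 8    */

    for (int i = 0; i < N_IN; i++)
        printf("%.2f -> %.2f\n", block[i], restored[i]);
    return 0;
}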

3. Principles of software construction for electronic system thermal design [№ 3, 2018]
Authors: A.G. Madera, P.I. Kandalov
The paper considers conceptual issues in developing a multifunctional software package for the thermal design of complex electronic systems. The software package is intended to carry out mathematical and computer analysis of nonlinear, unsteady-state, stochastic and deterministic thermal processes and temperature distributions in electronic systems of any structural complexity, taking into account the impact of destabilizing factors. A multifunctional software package should provide a graphical representation of both the source data and the computer modeling results in the form of tables, graphs, diagrams, etc. The computational algorithms that implement the mathematical models should be written and optimized both for personal computers and for supercomputer systems by parallelizing them using the Message Passing Interface (MPI) or Open Multi-Processing (OpenMP). The basic programming language of the developed software package is C#. It provides cross-platform capability, development speed and convenience, and support for selective optimization in C++ and C. The integrated development environment is Microsoft Visual Studio, which runs only on the Microsoft Windows platform. It is possible to run the developed programs on Linux or Mac OS X using non-Microsoft .NET implementations such as Mono. The authors consider the architecture of the developed software package, which is divided into three levels: a presentation level, a business logic level, and a database level. This division allows effectively optimizing the software package, extending its functionality and supporting several platforms such as Mac OS X or Linux.
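
The abstract mentions parallelizing the computational algorithms with OpenMP or MPI. Purely as an assumed example, unrelated to the package's actual code, the sketch below shows an OpenMP-parallelized Jacobi sweep for a 2D steady-state temperature field; the grid size, boundary values and number of sweeps are arbitrary choices.

/* Minimal sketch of OpenMP loop parallelization applied to Jacobi relaxation
 * of a 2D temperature field. Compile with -fopenmp to enable the pragmas;
 * without it the code still runs serially. */
#include <stdio.h>

#define NX 256
#define NY 256

static double t_old[NX][NY], t_new[NX][NY];

int main(void)
{
    /* fixed-temperature boundary, zero initial interior */
    for (int i = 0; i < NX; i++)
        for (int j = 0; j < NY; j++)
            t_old[i][j] = (i == 0 || j == 0 || i == NX - 1 || j == NY - 1)
                              ? 100.0 : 0.0;

    for (int sweep = 0; sweep < 500; sweep++) {
        /* each row is updated independently, so the outer loop
           can be distributed across threads */
        #pragma omp parallel for
        for (int i = 1; i < NX - 1; i++)
            for (int j = 1; j < NY - 1; j++)
                t_new[i][j] = 0.25 * (t_old[i - 1][j] + t_old[i + 1][j] +
                                      t_old[i][j - 1] + t_old[i][j + 1]);

        #pragma omp parallel for
        for (int i = 1; i < NX - 1; i++)
            for (int j = 1; j < NY - 1; j++)
                t_old[i][j] = t_new[i][j];
    }

    printf("temperature at the grid centre: %.2f\n", t_old[NX / 2][NY / 2]);
    return 0;
}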

4. High-level architecture of training simulation systems of complex technical systems [№ 3, 2018]
Authors: A.V. Roditelev, A.M. Giatsintov
The paper provides a detailed description of the training simulation system (TSS) architecture using the example of an air simulator prototype. The TSS visualization subsystem provides visualization of the external environment and the control object using display devices. It should reproduce the created virtual scene with sufficiently detailed content to allow TSS operators to perform the assigned tasks successfully. The authors give the requirements for TSS subsystems, including the visualization subsystem. The developed architecture avoids high coupling of components and provides a unified approach to managing hardware, such as various input devices. Usually, a device has some peculiar properties: specific control software, closed information exchange protocols, different connector types. The developed plugin management system allows taking various hardware features into account without modifying the main module and other subsystems. The created control interface works with pluggable modules. Plugins are self-sufficient and can be added or removed without violating the integrity of the system. Depending on the workload, data processing can be organized on one machine, or each subsystem can operate on a separate machine. Each subsystem is a standalone software complex that may be developed by a third party. The main module and the subsystems can operate on hardware complexes with different processor architectures, endianness (little or big) and operating systems. The paper also describes an algorithm that transforms geographic coordinates received from the modeling subsystem into the coordinate system used by the visualization subsystem.
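
The coordinate transformation mentioned at the end of the abstract is not specified in detail. As an illustration only, the C sketch below converts geodetic coordinates (latitude, longitude, altitude) to Cartesian ECEF coordinates on the WGS-84 ellipsoid, which is one common form such a transformation can take; the paper's visualization subsystem may use a different target frame.

/* Illustrative sketch: geodetic (lat, lon, alt) to Earth-centred Earth-fixed
 * Cartesian coordinates using the WGS-84 ellipsoid. Link with -lm. */
#include <math.h>
#include <stdio.h>

#define WGS84_A  6378137.0              /* semi-major axis, m */
#define WGS84_F  (1.0 / 298.257223563)  /* flattening         */
#define DEG2RAD  (3.14159265358979323846 / 180.0)

static void geodetic_to_ecef(double lat_deg, double lon_deg, double alt_m,
                             double *x, double *y, double *z)
{
    const double e2  = WGS84_F * (2.0 - WGS84_F);   /* eccentricity squared   */
    const double lat = lat_deg * DEG2RAD;
    const double lon = lon_deg * DEG2RAD;
    const double n   = WGS84_A / sqrt(1.0 - e2 * sin(lat) * sin(lat));

    *x = (n + alt_m) * cos(lat) * cos(lon);
    *y = (n + alt_m) * cos(lat) * sin(lon);
    *z = (n * (1.0 - e2) + alt_m) * sin(lat);
}

int main(void)
{
    double x, y, z;
    geodetic_to_ecef(55.75, 37.62, 150.0, &x, &y, &z);  /* sample point */
    printf("ECEF: %.1f %.1f %.1f\n", x, y, z);
    return 0;
}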

5. Adequate interdisciplinary models in forecasting time series of statistical data [№ 3, 2018]
Author: B.M. Pranov
Statistical studies commonly use multivariate linear models to model and predict time series. Their application area is quite extensive. They are quite effective when the set of points depicting the objects under investigation in a multidimensional parameter space is located near a certain linear subspace (or its shift relative to the origin). Factor analysis easily reveals this effect. If there is no such subspace (linear set), nonlinear dependencies are used to construct more accurate models. In economics, the Cobb-Douglas function is used to describe the dependence of enterprise profit on the number of employees and the value of fixed assets. It turns out that if we consider fires and other phenomena of society as a kind of its “production”, then the Cobb-Douglas function allows approximating the corresponding time series with a high degree of accuracy. As a result, we get a number of interesting models in new subject areas. The results of calculations showed that the Cobb-Douglas function approximates well the time series of the total number of fires in the territory of the Russian Federation. The prognostic values calculated by such models are very close to the real ones. The time series of the total number of fires in a significant number of European countries, as well as in the United States, can be approximated with similar adequacy. Such modeling is also appropriate for the tourism industry. The paper considers models of total hotel income depending on the number of employees and the size of fixed assets.
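
For reference, the classical two-factor Cobb-Douglas form is (textbook notation, not necessarily the paper's):

Y = A K^{\alpha} L^{\beta},

where Y is the output (profit, or, in the reinterpretation above, the number of fires or hotel income), K is the value of fixed assets, L is the number of employees, and A, \alpha, \beta are parameters fitted to the data.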

6. Modeling brain activity recognizing anagrammatically distorted words [№ 3, 2018]
Author: Z.D. Usmanov
The object of the research is natural language texts in which words were corrupted by random letter transpositions. The author analyzes the ability of the human brain to accurately recognize the meaning of distorted texts and offers mathematical models of how the brain solves this problem. The paper describes a mathematical model that explains how the brain solves the problem in the cases when a) the first, b) the last, c) the first and last letters of words remain in their places and all others are rearranged arbitrarily and, finally, in the most general case, d) when no letter is fixed and all letters within a word can be placed in any order. The explanation is based on the concept of a word anagram (in the broad sense, the set of its letters arranged in any sequence) as well as on the concept of an anagram prototype. A simplified mathematical model assumes that the brain perceives each anagram separately and recognizes it correctly if it has a single prototype. When there are several such prototypes, the brain automatically selects the one that has the highest frequency of occurrence in texts. The acceptability of this model was tested on English, Lithuanian, Russian and Tajik, as well as on an artificial language, Esperanto. For all languages, the efficiency of correct recognition of distorted text was at the level of 97–98%. If higher indicators are required, one can refer to an extended model in which the brain takes into account pairs, and possibly triples, of neighboring letter sets.
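
A minimal C sketch of the simplified model described above (each anagram treated separately; among several prototypes the most frequent one wins). The tiny dictionary and its frequencies are made up for the example; the real experiments used full dictionaries of English, Lithuanian, Russian, Tajik and Esperanto.

/* Illustrative sketch, not the author's code: pick, among all dictionary
 * words sharing an anagram key (the word's letters in sorted order), the
 * prototype with the highest corpus frequency. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct entry { const char *word; double freq; };

/* hypothetical frequency dictionary */
static const struct entry dict[] = {
    { "listen", 3.1 }, { "silent", 5.2 }, { "enlist", 0.9 },
    { "brain",  7.4 }, { "bairn",  0.2 },
};

static int cmp_char(const void *a, const void *b)
{
    return *(const char *)a - *(const char *)b;
}

/* anagram key: the word's letters arranged in sorted order */
static void anagram_key(const char *word, char *key)
{
    strcpy(key, word);
    qsort(key, strlen(key), 1, cmp_char);
}

/* return the most frequent prototype of the distorted word, or NULL */
static const char *recognize(const char *distorted)
{
    char dkey[64], wkey[64];
    const char *best = NULL;
    double best_freq = -1.0;

    anagram_key(distorted, dkey);
    for (size_t i = 0; i < sizeof(dict) / sizeof(dict[0]); i++) {
        anagram_key(dict[i].word, wkey);
        if (strcmp(dkey, wkey) == 0 && dict[i].freq > best_freq) {
            best_freq = dict[i].freq;
            best = dict[i].word;
        }
    }
    return best;
}

int main(void)
{
    printf("%s -> %s\n", "nistel", recognize("nistel"));   /* expected: silent */
    printf("%s -> %s\n", "nbria",  recognize("nbria"));    /* expected: brain  */
    return 0;
}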

7. Models of enterprise information system support in lifecycle stages [№ 3, 2018]
Author: Yu.M. Lisetskiy
The article considers an enterprise as a complex organizational system that requires a modern management information system for effective functioning. Such a system enables information collection, storage and processing to increase the relevance and timeliness of decisions made. The problem might be solved based on complex automation of all industrial and technological processes and management of the required resources. The paper shows that the information system description is formed based on a lifecycle model, which defines the order of development stages and the criteria for transitions between stages. An information system lifecycle model is a structure that defines the sequence of completion and the interconnection of processes, actions and tasks throughout the lifecycle. The structure of the information system lifecycle is based on three groups of processes: primary (acquisition, supply, development, operation, maintenance), supplementary (documenting, configuration management, quality assurance, verification, attestation, assessment, audit, problem resolution) and organizational (project infrastructure building, project management, definition, lifecycle assessment and improvement, training). The paper describes the most widespread lifecycle models, such as the waterfall, iterative and incremental (stage-by-stage model with intermediate control) and spiral models. It demonstrates that the enterprise information system appears to be a passive category in the processes of study and design. The functioning of this category can be described using support lifecycle models, including composition, functioning and development models. The development of these three models appears to be an additional informational factor that enables structuring the process of enterprise information system creation and functioning.

8. The English auction method for job scheduling with absolute priorities in a distributed computing system [№ 3, 2018]
Authors: A.V. Baranov, V.V. Molokanov, P.N. Telegin, A.I. Tikhomirov
The article considers the problem of job scheduling with absolute priorities in a geographically distributed computing system (GDS), where auction methods can be efficiently applied. Most recent papers use a market model in which the subject of auction trades (the goods) is computational resources, and their owners act as sellers. The buyers are users who participate in the auction to purchase computing resources for the execution of their jobs. Such a model assumes that customers have certain budgets in nominal or real money. Job priority is determined by the price that the user can pay to finish the job by a certain time. The investigated GDS model differs from the known ones by the fact that job priorities are absolute and assigned according to uniform rules. The main goal is the earliest execution of high-priority jobs. In this case, the concept of the user's budget becomes meaningless, and the classic auction models do not work. The authors propose a new approach in which the subject of auction trades is jobs, and resource owners act as buyers paying for jobs with available free computing resources. Within this approach, the authors consider the English auction as the most preferred method for job scheduling with absolute priorities in a GDS. The main characteristic of the scheduling algorithm based on this method is the duration of an auction. The paper presents an experimental evaluation of the optimal duration of the English auction in relation to the average job processing time.
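
To make the mechanism concrete, the toy C simulation below runs one ascending-bid (English) auction in which resource owners bid for a job until a single bidder remains. The valuations, the bid increment and the abstract price standing in for free computing resources are invented for the illustration and do not reproduce the paper's scheduling algorithm.

/* Toy sketch of an English (ascending-bid) auction round with the job as
 * the lot and resource owners as bidders. All numbers are invented. */
#include <stdio.h>

#define NBIDDERS 4

int main(void)
{
    /* maximum amount of free resources each owner is willing to commit */
    double max_bid[NBIDDERS] = { 12.0, 17.5, 9.0, 15.0 };
    double price = 5.0;            /* starting price announced for the job */
    double step  = 1.0;            /* bid increment per auction round      */
    int rounds   = 0, winner = -1;

    for (;;) {
        int active = 0, last = -1;
        for (int i = 0; i < NBIDDERS; i++)
            if (max_bid[i] >= price) { active++; last = i; }

        if (active <= 1) {          /* auction ends: at most one bidder left */
            winner = last;
            break;
        }
        price += step;              /* remaining bidders outbid each other   */
        rounds++;
    }

    if (winner >= 0)
        printf("job assigned to owner %d after %d rounds at price %.1f\n",
               winner, rounds, price);
    else
        printf("no owner could take the job\n");
    return 0;
}

As in a real English auction, the auction duration (the number of rounds) grows with the starting price gap and the bid increment, which is the characteristic the paper evaluates experimentally.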

9. A package manager for multiversion applications [№ 3, 2018]
Authors: V.A. Galatenko, M.D. Dzabraev, K.A. Kostyukhin
All software developers eventually face the problem of creating and distributing their software products. At the same time, it is necessary to take into account the possibilities of supporting existing products, i.e. replacing old distributions with new ones. With a quality distribution tool, developers are able to distribute their products to a wider range of platforms, as well as provide the necessary and timely support for these products. The authors of the article consider only UNIX-like systems, most of which include package managers such as dpkg and yum. These package managers operate according to the standard concept of software installation in UNIX, which implies that programs are installed in standard directories such as /usr/bin, /usr/local/bin, and so on. When updating a program (package), it is common practice to replace old files with new ones. Such a substitution strategy can be destructive: after a software update, some programs or libraries may stop working. It is even possible that the package manager itself stops working after an update. A user often ends up in a situation where old versions of software are required for compatibility. In this case, it is necessary to build programs and libraries from source code and install them manually, for example with “make install”. This kind of installation is irreversible and very dangerous, since files under the control of a package manager may be deleted or replaced. The authors propose the NIX package manager [1] as a solution to the described problems. Its most important advantage is that it completely excludes destructive impact on its part. This is achieved by installing each package in an isolated location controlled by the package manager.

10. Temperature model of potential distribution for non-uniform doping nanotransistors with the silicon-on-insulator structure [№ 3, 2018]
Author: N.V. Masalsky
The paper discusses the development of a 2D analytical temperature model of the potential distribution in the work area of a double-gate thin-film field nanotransistor with the silicon-on-insulator structure whose work area is vertically non-uniformly doped according to a Gaussian function. Double-gate field transistors with the silicon-on-insulator structure are leading representatives of the element basis for a new scientific direction, high-temperature microelectronics, since they are ideal high-temperature devices. For the stationary temperature case, in the parabolic approximation and using a special function, the authors have obtained an analytical solution of the 2D Poisson equation. They also numerically investigated the temperature dependences of the surface potential distribution on the doping profile parameters in the temperature range from 200 K to 500 K. For the selected layout rules, variation of the doping profile parameters gives an additional opportunity to control the key nanotransistor characteristics, along with the thickness of the work area and of the front gate oxide, which is important when analyzing the applicability of nanotransistor structures. The authors show that structures with steep doping profiles are more heat-resistant than homogeneously doped ones. To raise the upper bound of the temperature range by 100 K, the work area doping level must be increased several times. Using this perspective transistor architecture for double-gate field nanotransistors with the silicon-on-insulator structure allows increasing the thermal stability of their key electrophysical characteristics in comparison with double-gate field transistors with a homogeneously doped work area and with their bulk analogs. The simulation results are in close agreement with data obtained using the commercially available ATLAS™ software package for 2D simulation of transistor structures.
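
For orientation, a commonly used formulation of the problem the abstract describes (standard notation assumed here, not necessarily the paper's) is the 2D Poisson equation for the potential in the work area with a vertically Gaussian doping profile:

\frac{\partial^{2}\varphi}{\partial x^{2}} + \frac{\partial^{2}\varphi}{\partial y^{2}} = \frac{q\,N_{A}(y)}{\varepsilon_{Si}}, \qquad N_{A}(y) = N_{peak}\,\exp\!\left(-\frac{(y - R_{p})^{2}}{2\sigma^{2}}\right),

where \varphi is the potential, q the elementary charge, \varepsilon_{Si} the permittivity of silicon, N_{peak} the peak doping concentration, R_{p} the depth of the peak, and \sigma the steepness of the profile.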
