ISSN 0236-235X (P)
ISSN 2311-2735 (E)

Journal articles №3 2012

21. Improving the performance of debugging and testing techniques in multiprocessor systems [№3 2012]
Author: Lavrinov G.A. (lavrinov@cs.niisi.ras.ru) - SRISA RAS;
Abstract: A key element of a multiprocessor computing system is the communication network through which a processor communicates with other processors and with memory. Alongside VME, PCI Express, HyperTransport and other interconnect buses, the RapidIO interface is being developed for inter-processor communication. Multiprocessor systems based on RapidIO are hard to bring up and pose a serious problem at the debugging and initial testing stage of prototype models. Full-scale testing is performed under an operating system (in this case, Linux and the Baget 2.0 and 3.0 real-time operating systems); however, a successful start of an operating system presupposes that the communication network and the processor units already operate properly. At the initial stage the test designer has only the hardware and a ROM program that gains control automatically after power-on or a RESET signal. This article presents two techniques for testing and debugging multiprocessor systems built on the RapidIO interface that work with a minimal amount of functioning hardware, and compares their effectiveness at the bring-up and testing stage. Using a UML sequence diagram, the article describes a protocol implementing an integrated RapidIO console and a RapidIO I/O communication protocol for gathering test results, and shows how specific RapidIO packet types are used. These testing techniques became the basis of the system test for systems built on 1890VM6YA chips.
Keywords: console, technique, multiprocessor systems, testing, RapidIO
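
The integrated RapidIO console lends itself to a compact illustration. Below is a minimal sketch, in Python for readability, of the general idea of a node under test reporting results through RapidIO message packets rather than a local UART; the packet fields, the CONSOLE_MBOX mailbox number and the in-memory link list are illustrative assumptions, not the protocol defined in the article.

```python
from dataclasses import dataclass

CONSOLE_MBOX = 1  # hypothetical mailbox reserved for console traffic

@dataclass
class RioMessage:
    src_id: int      # device ID of the node under test
    dst_id: int      # device ID of the host gathering results
    mailbox: int     # RapidIO mailbox number
    payload: bytes   # console text or a test verdict

def console_write(link, src_id, dst_id, text):
    """Node side: send one console line to the host as a message packet."""
    link.append(RioMessage(src_id, dst_id, CONSOLE_MBOX, text.encode()))

def host_poll(link):
    """Host side: drain console messages and print per-node output."""
    while link:
        msg = link.pop(0)
        if msg.mailbox == CONSOLE_MBOX:
            print(f"node {msg.src_id:#04x}: {msg.payload.decode()}")

link = []  # stands in for the RapidIO fabric between the nodes
console_write(link, 0x01, 0x00, "memory test: PASS")
console_write(link, 0x02, 0x00, "link test: PASS")
host_poll(link)
```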

22. Techniques for reducing vulnerabilities in special-purpose real-time software [№3 2012]
Author: (kostas@niisi.ras.ru) - ;
Abstract: This article addresses the automation of a programmer's work, specifically the reduction of vulnerabilities and bugs in program code. It describes program design within the technical facilities for automated code generation for special software (TFACG SS) and the use of a library within TFACG SS that reduces the potential vulnerabilities introduced into new programs. The author presents a taxonomy of typical vulnerabilities in real-time programs and, for each class, reviews how often and why it occurs and how it can be prevented using TFACG SS facilities. Some potential vulnerabilities take into account the configuration of the real-time operating system. The taxonomy was built using a static analyzer and a set of real-time source code developed at the Scientific Research Institute for System Analysis of the Russian Academy of Sciences; the set comprises 204 program modules (more than 111,700 lines). The article concludes with an example of reducing potential vulnerabilities in the real-time source code generation program (PVSC RT), which is itself a part of TFACG SS. Using the standard program patterns provided by TFACG SS, this method repaired all vulnerabilities that the static analyzer had found in PVSC RT. The main directions for future work are expanding the library of standard TFACG SS patterns and supplementing the static analyzer with rules, tests and conditions specific to the real-time operating system.
Keywords: source code, real-time systems, source code generation, software vulnerabilities, runtime errors, XML, UML, modeling, mathematical model, reliability, programming
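
Neither TFACG SS nor its static analyzer is publicly available, so the sketch below is a toy stand-in for the kind of rule such an analyzer applies to one catalogued vulnerability class (unbounded string copies in C code); the flagged calls and suggested replacements are common C practice, not the article's actual rule set.

```python
import re

# Unbounded C calls and the bounded patterns usually suggested instead.
UNSAFE_CALLS = {"strcpy": "strncpy", "sprintf": "snprintf", "gets": "fgets"}

def scan_c_source(text):
    """Yield (line number, unsafe call, suggested replacement) findings."""
    for lineno, line in enumerate(text.splitlines(), start=1):
        for bad, safer in UNSAFE_CALLS.items():
            if re.search(rf"\b{bad}\s*\(", line):
                yield lineno, bad, safer

sample = 'void f(char *d, const char *s) { strcpy(d, s); }'
for lineno, bad, safer in scan_c_source(sample):
    print(f"line {lineno}: {bad}() is unbounded, consider {safer}()")
```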

23. An algorithm of X-graph growth and the principles of physics [№3 2012]
Authors: (koganow@niisi.msk.ru) - , Ph.D; (koganow@niisi.msk.ru) - , Ph.D;
Abstract: The work belongs to a current trend at the intersection of automata theory, the theory of algorithms, graph theory and mathematical physics. In recent years a theory of growing X-graphs has been developed, in which each vertex of an X-graph (an X-element) models an elementary interaction of two initial particles producing two resulting particles. The growth of such a graph models an observer acquiring information about physical processes taking place in its space-time neighborhood. The paper studies an algorithm for the incremental construction of an X-graph that meets the set of requirements a discrete space-time model in quantum physics must satisfy. Special attention is given to implementing the causality principle, which makes the algorithm's interpretation as a model of a physical-process observer correct. The new algorithm possesses useful properties absent from previously proposed analogues, the main one being that the probability of completing a set of pairwise causally unrelated vertices does not depend on the order in which those vertices are introduced. The algorithm is based on a new way of selecting edges for attaching a new X-element: random paths are traced to the boundary from a vertex chosen at random among those already present in the graph. The algorithm is also of interest for the theory of self-organization of complex growing systems; its modifications and variations of initial states allow models of various systems of pairwise interactions to be built.
Keywords: randomized algorithm, causality principle, growing graph, space-time, oriented graph
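
A minimal sketch of the growth step described in the abstract, under simplifying assumptions of my own: each X-element consumes two free (boundary) edges and exposes two new ones, and the random walk to the boundary is abbreviated to a uniform draw over boundary edges.

```python
import random

def grow(free_edges, elements, rng):
    """Attach one X-element: consume two boundary edges, expose two new."""
    if len(free_edges) < 2:
        return
    a, b = rng.sample(free_edges, 2)   # stand-in for the random-walk choice
    free_edges.remove(a)
    free_edges.remove(b)
    new_id = len(elements)
    outputs = (f"e{new_id}a", f"e{new_id}b")
    elements.append({"inputs": (a, b), "outputs": outputs})
    free_edges.extend(outputs)

rng = random.Random(0)
free_edges, elements = ["seed0", "seed1"], []
for _ in range(5):
    grow(free_edges, elements, rng)
print(len(elements), "X-elements,", len(free_edges), "boundary edges")
```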

24. Microbenchmarks for performance assessment of microprocessor RTL models [№3 2012]
Author: (nikolina@cs.niisi.ras.ru) - ;
Abstract: An approach to evaluating and monitoring microprocessor performance at the design stage is considered. A technique is proposed for estimating the performance of individual blocks while ignoring any potential influence of the others. We propose a test suite for evaluating the performance of microprocessor RTL (register-transfer-level) models. The suite consists of short programs (microbenchmarks) aimed at evaluating the performance of individual blocks; the tests for each module are chosen with the features of its operation in mind. The article presents test situations for such modules as the instruction fetch and dispatch buffer (IFDB), the floating-point unit (FPU) and the memory management unit (MMU) of a MIPS-like architecture. Run time is analyzed using performance counters, which are part of the microprocessor's control coprocessor registers. Automation is proposed for creating test cases, for regression performance measurement and for visualizing the evaluation results. Within reasonable time the test system produces performance results and compares them with those of previous versions of the RTL model or with reference values. The impact of performance measurements on the architecture of the future chip is also considered, and the possibility of investigating how factors such as a change of memory frequency affect microprocessor performance is shown. Measurement results are given for the performance evaluation of a superscalar microprocessor developed at SRISA RAS; the results were confirmed on the final silicon.
Keywords: regression performance evaluation, performance evaluation, microbenchmarks
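
The regression side of the technique reduces to comparing per-benchmark cycle counts, read from the performance counters, against reference values from an earlier revision of the model. A hedged sketch follows; the benchmark names, cycle counts and the 2% tolerance are invented for illustration.

```python
TOLERANCE = 0.02  # relative drift allowed before a result counts as a regression

def compare(current, reference):
    """Report microbenchmarks whose cycle count drifted beyond tolerance."""
    for test, cycles in current.items():
        ref = reference.get(test)
        if ref is None:
            print(f"{test}: new test, no reference value")
        elif abs(cycles - ref) / ref > TOLERANCE:
            print(f"{test}: {cycles} cycles vs {ref} reference (REGRESSION)")
        else:
            print(f"{test}: ok")

reference = {"ifdb_fetch": 1200, "fpu_madd": 540, "mmu_tlb_refill": 300}
current = {"ifdb_fetch": 1205, "fpu_madd": 610, "mmu_tlb_refill": 300}
compare(current, reference)
```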

25. The role of stochastic testing in the functional verification of microprocessors [№3 2012]
Author: (osipa68@yahoo.com) - ;
Abstract: As the performance requirements for modern ICs, including microprocessors and systems-on-chip, grow, their development becomes considerably more complicated. It is now a multistage process with many sophisticated tasks at each stage. One of the most labor-consuming is functional verification of the design, whose goal is to establish conformity between the implementation of a design and the functional requirements of its specification. Given the complexity of modern IC designs, this problem has no general solution yet, and several complementary approaches have been developed to address it. One of them is stochastic testing, which was applied to the study of MIPS64-architecture microprocessors at the Scientific Research Institute for System Analysis of RAS. The method is based on simulating the execution of test programs generated automatically from a given template: instructions, arguments and test settings are chosen randomly, subject to given biases and constraints. This paper is a review aimed at specifying the role of stochastic testing together with its scope of application, advantages and disadvantages. The introduction considers functional verification in general, as a part of the IC design workflow. The best-known verification approaches are then reviewed and their underlying ideas briefly analyzed, with particular attention to simulation-based methods. Finally, the stochastic testing method is described against this background. Conclusions about its advantages and disadvantages are illustrated with results of its application at SRISA RAS.
Keywords: test coverage metrics, random test generation, stochastic testing, simulation, RTL model, functional verification, microprocessor
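
To make the template idea concrete, here is a small sketch of biased random test generation in the spirit of the abstract: mnemonics are drawn with weights and operands are randomized under simple constraints. The instruction set, weights and constraints are illustrative assumptions, not the actual SRISA RAS templates.

```python
import random

TEMPLATE = {
    "length": 8,                                    # instructions per test
    "weights": {"add": 4, "mul": 2, "lw": 3, "sw": 3, "beq": 1},
}

def generate(template, seed=None):
    """Produce one test program from the template's biases and constraints."""
    rng = random.Random(seed)
    ops = list(template["weights"])
    wts = list(template["weights"].values())
    prog = []
    for _ in range(template["length"]):
        op = rng.choices(ops, weights=wts)[0]
        r = [f"r{rng.randrange(1, 32)}" for _ in range(3)]
        if op in ("lw", "sw"):          # constraint: word-aligned offsets
            prog.append(f"{op} {r[0]}, {rng.randrange(0, 256, 4)}({r[1]})")
        elif op == "beq":               # constraint: short forward branches
            prog.append(f"beq {r[0]}, {r[1]}, +{rng.randrange(1, 4)}")
        else:
            prog.append(f"{op} {r[0]}, {r[1]}, {r[2]}")
    return prog

print("\n".join(generate(TEMPLATE, seed=1)))
```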

26. Verification of a microprocessor and its RTL model by means of Linux OS user applications [№3 2012]
Authors: Chibisov P.A. (chibisov@cs.niisi.ras.ru) - Federal State Institution "Scientific Research Institute for System Analysis of the Russian Academy of Sciences" (SRISA RAS) (Senior Researcher), Ph.D;
Abstract: This article covers methods of verification and testing of modern microprocessors. Special attention is given to testing RTL models, FPGA prototypes and test chips of microprocessors with real user applications for the Linux operating system. The interrelation of these objects and the degree to which the discussed technique applies to each of them within the general verification plan are also considered, and the merits and shortcomings of the method are listed. Since simulating programs on the RTL model of a microprocessor is extremely slow, it is proposed to use a cut-restore mechanism for the model state, splitting the entire instruction sequence of an operating system boot into a set of subsequences executed in parallel on different computers. The existence of a large number of freely distributed open-source programs with built-in automated self-test mechanisms makes it possible to treat the launch of Linux applications as a separate approach to testing general-purpose microprocessors. The described method does not exclude, but rather supplements, the modern set of methods and means for testing and verifying microprocessors and their models. Many authoritative microprocessor developers and manufacturers recognize the usefulness of booting an operating system early on the RTL model under development; success in this operation often gives developers more confidence that their work is correct than tens of thousands of executed tests. The article provides an example of a representative test set built from ready-made user software packages, along with examples of test program sources. It also presents a general algorithm for locating a bug in the microprocessor and gives examples of bugs revealed in a microprocessor with the MIPS64 architecture.
Keywords: Linux, post-silicon validation, first-pass silicon, FPGA-prototype, RTL model, microprocessor verification
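
The cut-restore mechanism is essentially checkpoint-and-parallel-replay. The skeleton below illustrates only the orchestration: run_window is a placeholder for restoring a saved model state into the simulator and replaying one window of the boot sequence, and the instruction counts and worker count are invented.

```python
from concurrent.futures import ProcessPoolExecutor

def run_window(window):
    """Placeholder: restore the checkpoint at `start`, simulate to `end`."""
    start, end = window
    return start, end, "PASS"

def split(total_instructions, n_workers):
    """Cut the boot's instruction range into equal windows."""
    step = total_instructions // n_workers
    return [(i * step, min((i + 1) * step, total_instructions))
            for i in range(n_workers)]

if __name__ == "__main__":
    windows = split(2_000_000_000, 8)   # invented boot length, 8 machines
    with ProcessPoolExecutor(max_workers=8) as pool:
        for start, end, verdict in pool.map(run_window, windows):
            print(f"[{start:>13,} .. {end:>13,}] {verdict}")
```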

27. Metadescriptions and cataloguing of scientific information resources of the RAS [№3 2012]
Authors: () - , Ph.D; A.B. Zhizhchenko () - Federal Research Center for Computer Science and Control of RAS (Head of Department), Ph.D; () - , Ph.D; () - , Ph.D; Serebryakov V.A. (serebr@ultimeta.ru) - A.A. Dorodnitsyn Computing Centre of RAS (Professor, Head of Department), Ph.D; A.N. Sotnikov (asotnikov@iscc.ru) - Joint Supercomputer Center of RAS (Professor), Ph.D; () - ;
Abstract: A large part of scientific knowledge is formalized in the form of electronic resources: databases and knowledge bases, electronic reference books, and so on. Work with electronic resources, including their adaptation to a subject area and the systematization and accumulation of data, has achieved a status equal to theory and experiment; disciplines such as bio- and geoinformatics have appeared whose subject of study is the representation of complex data. However, with the spread of databases and similar tools, deep problems have arisen from the lack of interoperability. The autonomy of resources, the diversity of data formats and structures, and the absence of data presentation standards are not all of the reasons complicating data exchange. In global and domestic practice in recent years, approaches to resolving these problems have emerged that use XML-based languages to standardize metadata systems and term dictionaries within a particular area of expertise, such as CML for representing chemical data, MatML for materials science, and ThermoML for thermodynamics. The pressing need to elaborate principles and technologies for integrating the many resources of the RAS has led to the formation of an extensive program to create a so-called Data Centre. This project is expected to help overcome the fragmentation and limited availability of digital resources, in the form of databases, electronic publications and data-processing tools, supported by various institutes of the Russian Academy of Sciences. As the first phase of the integration, this work proposes a system of resource certification that adequately reflects the subject area, resource types, access conditions, etc. A portal has been developed that holds an extensive set of metadata for each registered resource.
Keywords: XML, ontology, portal, metadescription, metadata, data integration, information resources
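
As a rough illustration of resource certification, the sketch below serializes one resource "certificate" to XML; the field names follow the kinds of properties the abstract lists (subject area, resource type, access conditions) but are assumptions, not the actual metadata schema of the portal.

```python
import xml.etree.ElementTree as ET

def make_certificate(fields):
    """Serialize one resource description to an XML 'certificate'."""
    root = ET.Element("resource")
    for key, value in fields.items():
        ET.SubElement(root, key).text = value
    return ET.tostring(root, encoding="unicode")

record = {                       # invented example entry
    "title": "Thermodynamic properties database",
    "subject_area": "thermodynamics",
    "resource_type": "database",
    "access": "public",
    "data_format": "ThermoML",
}
print(make_certificate(record))
```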

28. Data integration and a query language in large information infrastructures [№3 2012]
Authors: (kvn@keldysh.ru) - , Ph.D; (akul87@mail.ru) - ;
Abstract: The automation of various forms of professional activity by means of computer technologies generates arrays of information stored in databases. This information is used primarily inside organizations but can also help solve important tasks beyond their borders. Developing the corresponding applications is considerably complicated in the absence of specialized system tools supporting access to data in multiple databases. In this field, called data integration, general application-independent methods for consolidating heterogeneous databases are being developed. Tools created on this basis are used in practice; however, the problem of their scalability in the number of integrated databases remains open. The paper describes an approach to the problem of mass integration (tens and hundreds of databases). Two questions that seem most essential under these conditions are considered: the method of data integration and the type of informational queries. The integration method defines a representation (the global scheme) in which the data of the integrated databases form a common unified space. The method is aimed at creating information infrastructures with a dynamically changing set of databases: changing the set requires no modification of the global scheme or of existing applications. The query language is an extension of SQL-92, with the difference that operations are executed on subsets of databases. Moreover, databases are not addressed explicitly: descriptive information, the meta-attributes, is used to select them. Queries of this type allow the creation of applications capable of processing data from varied sets of sources.
Keywords: OGSA-DQP, OGSA-DAI, massive data integration, distributed query, informational grid
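
The query model can be pictured as two steps: select member databases by meta-attributes rather than by explicit address, then run one query over the whole subset and merge the rows. In the sketch below SQLite stands in for the member databases and the meta-attribute names are invented; the article's actual language is an SQL-92 extension, not Python.

```python
import sqlite3

# One catalogue entry per member database; "region"/"topic" are invented
# meta-attributes, and each "seed" script stands in for existing content.
CATALOG = [
    {"region": "north", "topic": "transport",
     "seed": "CREATE TABLE cargo(t INT); INSERT INTO cargo VALUES (10);"},
    {"region": "south", "topic": "transport",
     "seed": "CREATE TABLE cargo(t INT); INSERT INTO cargo VALUES (25);"},
    {"region": "north", "topic": "medicine",
     "seed": "CREATE TABLE cargo(t INT); INSERT INTO cargo VALUES (99);"},
]

def select_sources(catalog, **meta):
    """Pick member databases by meta-attributes, not by explicit address."""
    return [e for e in catalog if all(e.get(k) == v for k, v in meta.items())]

def fan_out(sources, sql):
    """Run the same query on every selected member and merge the rows."""
    merged = []
    for entry in sources:
        con = sqlite3.connect(":memory:")
        con.executescript(entry["seed"])
        merged += [(entry["region"], *row) for row in con.execute(sql)]
        con.close()
    return merged

print(fan_out(select_sources(CATALOG, topic="transport"),
              "SELECT t FROM cargo"))
```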

29. Software reliability analysis based on an inhomogeneous Poisson process model and bootstrap methods [№3 2012]
Authors: Guda A.N. (guda@rgups.ru) - Rostov State Transport University, PL Rostovskogo Strelkovogo Polka Narodnogo Opolcheniya (Professor, Vice-rector), Ph.D; Chubeyko S.V. (greyc@mail.ru) - Rostov State Transport University, PL Rostovskogo Strelkovogo Polka Narodnogo Opolcheniya, Ph.D;
Abstract: A new mathematical model of software reliability is described, based on the inhomogeneous Poisson process. The basic idea of the proposed forecasting method is the reproduction of data samples containing two original series: cumulative program execution time and the number of errors detected during that time. Randomized samples are reproduced using a bootstrap technique based on random quantities with a Poisson distribution. Algorithms for parameter estimation and for forecasting software reliability indicators are suggested. The first algorithm assesses the intensity of errors expected in subsequent versions of the software; it uses a random number generator to arrange randomized samples and random arrays under the Poisson law. The second algorithm evaluates the intensity of error detection; it takes the data samples produced by the first algorithm and operates by the maximum likelihood method. The article describes the general procedure for forecasting the expected number of errors that can occur during a subsequent program run on a given time interval following a cumulative observation period. The proposed forecasting method was implemented as a program written in Pascal in the free programming environment PascalABC.NET. Examples of using the forecasting software on test data are also given.
Keywords: non-homogeneous Poisson process, software reliability
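
A worked sketch of the forecasting procedure under a simplifying constant-intensity assumption (the article fits an inhomogeneous Poisson process, and its implementation is in Pascal): the observed history is resampled with Poisson draws, and the bootstrap replicas yield a percentile band for the number of errors expected over the next interval. The sample data are invented.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's algorithm: draw one Poisson(lam) variate."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def forecast(times, errors, horizon, n_boot=2000, rng=random.Random(0)):
    """Return a (5%, point, 95%) estimate of errors over the next horizon."""
    lam_hat = sum(errors) / times[-1]          # errors per unit time
    reps = []
    for _ in range(n_boot):
        # Poisson bootstrap: re-draw the error count of every interval.
        resampled = [poisson(lam_hat * dt, rng)
                     for dt in (t2 - t1 for t1, t2 in zip([0] + times, times))]
        reps.append(sum(resampled) / times[-1] * horizon)
    reps.sort()
    return reps[int(0.05 * n_boot)], lam_hat * horizon, reps[int(0.95 * n_boot)]

times = [10, 25, 45, 70, 100]    # cumulative execution time (invented)
errors = [4, 3, 3, 2, 2]         # errors found in each interval (invented)
print("errors expected in next 30 units (5%, point, 95%):",
      forecast(times, errors, horizon=30))
```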

30. Profile synchronization of MS SharePoint Portal Server 2003 users with an external data source [№3 2012]
Author: Ermakov D.G. (Ermak@imm.uran.ru) - Institute of Mathematics and Mechanics, Ural Branch of the Russian Academy of Sciences;
Abstract: In developing enterprise portals, one of the most important problems is synchronizing the data published in the portal with data used by other applications and stored in various formats in a DBMS. When data placed in the portal can be edited by a user or by another application, the copies must be synchronized. If one data source is assigned as the master and the other as a slave, arranging synchronization is straightforward; when a bidirectional scheme is needed, both «direct» and «reverse» synchronization must be provided. The article discusses a solution for synchronizing MS SharePoint Portal Server 2003 (SPS) user profiles with an external data source, a legacy personnel department subsystem, and for «reverse» synchronization of this external source with SPS user profile data. Two ways of collecting data from user profiles are presented: an SQL query and the SPS object model. Synchronization is performed through a temporary XML file. This solution allows the source of the synchronized data to be changed without significant changes to the existing software, and also gives a third-party application the opportunity to send data to it or receive data from it. The article includes scripts implementing «direct» and «reverse» synchronization, written in the PowerShell scripting language included in the standard MS Windows package. Besides synchronization, this approach can be used for data transfer when migrating from MS SharePoint to a different platform.
Keywords: SQL, MS SharePoint Portal Server 2003 (SPS), MS Windows PowerShell, user profile, reverse synchronization, synchronization
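
The decoupling through a temporary XML file is easy to picture. The sketch below shows a «direct» pass in that spirit: the master side exports records to XML and the slave side reads and applies them. The field names are invented, and the article's real scripts are PowerShell working against the SPS object model, not Python.

```python
import xml.etree.ElementTree as ET

def export_profiles(records, path):
    """Master side: dump the records into the temporary XML file."""
    root = ET.Element("profiles")
    for rec in records:
        ET.SubElement(root, "user", attrib=rec)
    ET.ElementTree(root).write(path)

def import_profiles(path):
    """Slave side: read the file back and apply the changes."""
    return [dict(user.attrib) for user in ET.parse(path).getroot()]

hr_records = [{"login": "ivanov", "dept": "IMM UB RAS", "phone": "1234"}]
export_profiles(hr_records, "sync.xml")      # «direct» pass writes the file
for user in import_profiles("sync.xml"):     # the other side picks it up
    print("update portal profile:", user)
```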
