
National Research University Higher School of Economics

Institute for Statistical Studies and Economics of Knowledge


Efficiency of national innovation systems through the prism of composite innovation indexes: do they tell the same story?

Student: Elena Kashinova

Group: 161

Supervisor: Vitaliy Roud

Submission date: 17.05.2018

Moscow, 2018

Table of Contents

List of Abbreviations

List of Tables

List of Figures




National innovation systems concept

Performance, Efficiency and Effectiveness

NIS performance measurement techniques

Composite innovation index approach




Discussion and Conclusion


List of Abbreviations

AHP - Analytical Hierarchy Process;

BCC - Banker-Charnes-Cooper model;

CCR - Charnes-Cooper-Rhodes model;

ccTLD - Country-code top-level domains;

CIS - Community Innovation Survey;

COLS - Corrected ordinary least squares;

COMTRADE - Commodity Trade Statistic Database;

CRS - constant returns-to-scale;

CWTS - Centre for Science and Technology Studies (Centrum voor Wetenschap en Technologische Studies);

DEA - Data Envelopment Analysis;

DMU - decision-making unit;

EIS - European Innovation Scoreboard;

EU - European Union;

EUIPO - European Union Intellectual Property Office;

FDI - foreign direct investment;

FTE - full-time equivalent;

GDP - gross domestic product;

GERD - Gross domestic expenditure on R&D;

GEM - Global Entrepreneurship Monitor;

GII - Global Innovation Index;

gTLDs - Generic top-level domains;

ICT - Information and communication technology;

IEA - International Energy Agency;

ILOSTAT - International Labour Organisation database of statistics;

IMD - Institute for Management Development;

IMF - International Monetary Fund;

INSEAD - European Institute of Business Administration (L'Institut européen d'administration des affaires);

ISO - International Organization for Standardization;

ITU - International Telecommunication Union;

IUS - Innovation Union Scoreboard;

JRC - Joint Research Centre;

kWh - kilowatt hour;

MCDA - multiple criteria decision analysis;

MERIT - Maastricht Economic and Social Research Institute on Innovation and Technology;

NIS - National Innovation System;

OECD - Organisation for Economic Co-operation and Development;

PISA - Programme for International Student Assessment;

PPP - purchasing power parity;

PPS - purchasing power standard;

QCA - Qualitative Comparative Analysis;

R&D - Research & Development;

SDEA - Stochastic Data Envelopment analysis;

SFA - Stochastic Frontier Analysis;

SII - Summary Innovation Index;

SME - small and medium enterprises;

STI - science, technology and innovation;

TOPSIS - Technique for Order Preference by Similarity to Ideal Solution;

UN - United Nations;

UNESCO - United Nations Educational, Scientific and Cultural Organisation;

VRS - variable returns-to-scale;

WB - World Bank;

WEF - World Economic Forum;

WTO - World Trade Organisation.

List of Tables

Table 1: Key activities in systems of innovation.

Table 2: A taxonomy of frontier methods.

Table 3: Terms and definitions of DEA.

Table 4: Composite score calculation for Russia.

Table 5: EIS measurement frameworks.

Table 6: Final dataset.

Table 7: List of variables.

Table 8: Efficiency scores.

Table 9: Pearson's correlation between GII and EIS efficiency scores.

Table 10: Spearman's rank correlation between GII and EIS efficiency scores.

Table 11: Input and output weights under variable returns to scale.

Table 12: Super-efficiency scores, ranks and reference sets.

Table 13: Spearman's rank correlation between GII and EIS super-efficiency scores.

Table 14: Radial input and output slacks.

List of Figures

Figure 1: Graphical depiction of DEA.

Figure 2: Concept of efficiency and returns to scale.


Benchmarking countries' innovation performance is one of the key aspects of the national innovation systems (NIS) concept. Over the evolution of NIS studies, a wide range of methodological approaches to measuring and comparing NIS performance has been developed. One of the most popular ways to benchmark NISs is through composite innovation indexes - indicator aggregates designed to address various aspects of NIS performance. The two most prominent innovation indexes today are the Global Innovation Index (GII) and the European Innovation Scoreboard (EIS). Although their measurement frameworks differ significantly, both are used by policymakers to compare their countries' innovation performance with the rest of the world.

However, the composite innovation index approach has also met criticism from some authors for oversimplification, partiality, and a lack of methodological transparency and clarity. A popular way to enhance this type of evaluation is to combine composite indexes with various methods of assessing efficiency. One of them is Data Envelopment Analysis (DEA), a non-parametric efficiency frontier estimation method that is frequently used in innovation studies. In this study, DEA is used to test the equivalence between GII and EIS in order to find out whether they can substitute for each other in policymaking activities.

An additional aim of this study is to examine the position of Russia and identify the weaknesses that lead to its low ranks in both indexes. It was found that Russia is inefficient in terms of both indexes, and the reason for this inefficiency is that Russian human resources and investments are poorly allocated, resulting in insufficient outputs.


Since the early stages of the development of the national innovation systems (NIS) concept, scholars have been looking for ways to assess and compare the performance of individual NISs. The complexity and multidimensionality of this concept turned the task into a methodological issue that is difficult to resolve. Most solutions rely on the analysis of multiple statistical indicators reflecting different aspects of NIS development. The simplest way is to look at separate indicators and draw comparisons from disaggregated data. A more common solution is to aggregate several indicators into a composite innovation index. Other approaches include various methods of statistical analysis, among which Data Envelopment Analysis (DEA) is increasingly popular. Many studies show how DEA can be used to estimate the relative efficiency of NISs from input and output innovation indicators. Recently, new studies have emerged in which approaches are combined into a methodological mix to mitigate their inherent drawbacks. For instance, there are studies using composite innovation index data to run DEA tests and calculate efficiency scores for NISs.

In this study, efficiency scores are calculated for the two most prominent composite innovation indices - GII and EIS. By comparing these scores, it is possible to estimate how well these indexes reflect the efficiency of the NISs they study in transforming innovation inputs into innovation outputs.

The primary aim of this thesis is to compare GII and EIS in terms of how accurately they reflect the efficiency of NISs. Thus, the hypothesis of the research can be formulated as follows:

H0: There is a statistically significant correlation between GII and EIS efficiency scores;

The confirmation of H0, therefore, means that GII and EIS embed similar efficiency concepts in their composition and they can be used equivalently in practical and policymaking activities.

The alternative hypothesis reads as follows:

H1: There is no correlation between GII and EIS efficiency scores.

The confirmation of H1 indicates that GII and EIS are based on different understandings of efficiency, and the choice between them for practical uses should be properly justified. These statements are considered in detail in the Discussion and Conclusion section.

The hypotheses can be tested by estimating efficiency frontiers from GII and EIS score values for the countries present in both rankings. DEA was chosen as the method for this task, with several DEA model specifications, including a super-efficiency DEA model used to produce a full ranking of countries. The obtained DEA efficiency and super-efficiency scores were compared using Pearson and Spearman correlation tests.
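The comparison step described above can be sketched in a few lines; the efficiency scores below are invented for illustration (the study's actual scores come from DEA runs on GII and EIS data), and SciPy is assumed to be available.

```python
# Sketch of the score-comparison step with hypothetical efficiency scores
# for eight countries; the real study uses scores computed from GII and EIS.
from scipy.stats import pearsonr, spearmanr

gii_eff = [0.92, 0.75, 1.00, 0.61, 0.83, 0.70, 0.55, 0.97]   # invented
eis_eff = [0.88, 0.79, 0.95, 0.58, 0.90, 0.66, 0.60, 1.00]   # invented

r, p_r = pearsonr(gii_eff, eis_eff)       # linear association of scores
rho, p_rho = spearmanr(gii_eff, eis_eff)  # rank (ordinal) association

# In the paper's framing, a statistically significant correlation would
# support H0, i.e. the two indexes embed similar efficiency concepts.
print(f"Pearson r = {r:.3f} (p = {p_r:.3f})")
print(f"Spearman rho = {rho:.3f} (p = {p_rho:.3f})")
```

Spearman's rank correlation is the more robust of the two here, since DEA scores are bounded and not normally distributed; the super-efficiency scores, which break ties among efficient units, make the rank comparison more informative.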

An additional aim of this paper is to discover the strengths and weaknesses of Russia's position in GII and EIS. This was realized by analyzing the slacks, weights and reference sets for Russia and other countries resulting from the DEA efficiency and super-efficiency tests. Therefore, at the preparatory stage of this research, Russia was included in the list of investigated countries by calculating EIS scores for Russia using appropriate data.

The paper is organized as follows: the first section provides background information on the NIS concept and the existing NIS performance measurement techniques and approaches. The second part presents the methodology chosen for this study and explains this choice. The third part contains the full list of steps taken to achieve the aims of the study. The fourth section presents the findings of the study, followed by the final, fifth section containing a discussion of the results, conclusions and limitations of the study, along with propositions for future research.


National innovation systems concept


The development of the NIS concept is generally associated with the works of Christopher Freeman and Bengt-Åke Lundvall conducted in the early 1980s (Cai, 2011; Mahroum and Al-Saleh, 2013). Freeman is considered the one who coined the term national innovation system in its modern sense: the network of institutions in the public and private sectors whose activities and interactions initiate, import, modify and diffuse new technologies (OECD, 1997). The complex nature of this concept gave rise to many other definitions of NIS, some of which overlap to a certain extent, but none is accepted as canonical (OECD, 1997). However, the systemic approach to innovation describes the multidimensional nature of knowledge flows better than linear and sequential process models and is therefore still considered the current paradigm in innovation studies (Mahroum and Al-Saleh, 2013; Rothwell, 1992). The concept emphasizes the interactive aspect of the innovation process and qualifies coordination problems between individual actors and institutions as a source of NIS inefficiency; therefore, it gained much popularity among scholars of evolutionary and institutional innovation theories (Soete et al., 2010).

Originally, the NIS concept focused on the components, or building blocks, of NISs: institutional elements and the interrelations (linkages) between them (Edquist, 2011; Niosi, 2002). This branch of science, technology and innovation (STI) research consisted mostly of descriptive studies in which NISs were investigated using case analysis and historical analysis (Cai, 2011; Lundvall, 2007). Various NIS structures were identified according to countries' size and income (Nelson, 1993). Characteristics of NISs, such as path dependence and lock-in, were intensively studied, partly explaining the discrepancies in NIS structures and development (Niosi, 2002).

Later, attention was drawn to the processes within NISs and their outcomes (Edquist, 2011). (Edquist, 2005) defined ten key activities in NISs that he equated with the determinants of the innovation process and later classified into four activity groups (Table 1).

Table 1: Key activities in systems of innovation

I. Provision of knowledge inputs to innovation process

1. Provision of R&D results and, thus, creation of new knowledge, primarily in engineering, medicine and natural sciences.

2. Competence building, for example, through individual learning (educating and training the labor force for innovation and R&D activities) and organizational learning. This includes formal learning as well as informal learning.

II. Demand-side activities

3. Formation of new product markets.

4. Articulation of new product quality requirements emanating from the demand side.

III. Provision of constituents for NISs

5. Creating and changing organizations needed for developing new fields of innovation. Examples include enhancing entrepreneurship to create new firms and intrapreneurship to diversify existing firms; and creating new research organizations, policy organizations, etc.

6. Networking through markets and other mechanisms, including interactive learning among different organizations (potentially) involved in the innovation processes. This implies integrating new knowledge elements developed in different spheres of the SI and coming from outside with elements already available in the innovating firms.

7. Creating and changing institutions--for example, patent laws, tax laws, environment and safety regulations, R&D investment routines, cultural norms, etc.--that influence innovating organizations and innovation processes by providing incentives for and removing obstacles to innovation.

IV. Support services for innovating firms

8. Incubation activities such as providing access to facilities and administrative support for innovating efforts.

9. Financing of innovation processes and other activities that may facilitate commercialization of knowledge and its adoption.

10. Provision of consultancy services relevant for innovation processes, for example, technology transfer, commercial information, and legal advice.

Source: (Edquist, 2011)

Devising and revising this list, Edquist aimed to provide a systemic view of processes and functions that NISs perform. Governments are able to influence the results of these activities to a certain extent, using public innovation policy, but the final outcomes also depend on external factors and internal contribution of NIS elements.

While agreeing that this list is not fully comprehensive, he also recognized the causality issues related to the mutual influence between the innovation process and its determinants (Edquist, 2011). This brings up the numerous difficulties that scholars face when trying to capture and measure NIS performance:

1. The aforementioned problem of causality impedes inference. Taking activity 8 as an example: it is difficult to determine whether NISs that invest a lot in incubation activities perform better than others, or whether the best innovative performers are simply those that invest a lot in incubation activities.

2. The concept itself is very broad; hence, as (Nelson and Rosenberg, 1993) stated, it provides no sharp guide to just what should be included in the innovation system, and what can be left out. From a measurement perspective, this means that creating a model and selecting indicators that describe NISs fully and adequately is exceedingly challenging.

3. Lack of theoretical grounding. Scholars differ in their understanding of what an NIS actually is: some call it a concept (Patel and Pavitt, 1994), some say it is an approach (OECD, 1997), and some argue that it is a framework (Godin, 2009). There is only one point of consensus: the NIS concept is still a concept, not a theory (Lundvall, 2007), and there is still a lack of common ground in departure points and definitions (Soete et al., 2010). This obscurity in the formulation of the NIS concept significantly impedes the establishment of a sound measurement framework.

Despite all these shortfalls, the NIS concept plays an important role in innovation studies and policymaking. Its main achievement at the time of its creation was the expansion of the idea of innovation to take into account the various factors and institutions influencing innovation performance (Godin, 2009; Soete et al., 2010). More attention was paid to knowledge flows and distribution, and to the capacity of nations to absorb and use knowledge - major aspects of the knowledge economy that previously had not been studied properly (Godin, 2009).

The NIS concept also promoted and set the tone for benchmarking exercises. Shifting the focus of NIS studies from components to activities prompted a fresh look at one of the major research questions in this field: why do some NISs perform better in innovation than others? (Edquist, 2011; Patel and Pavitt, 1994). This question gave rise to a variety of research works dedicated to the comparative assessment of NISs using various methodological approaches. It also gained recognition with policymakers who, aware of the crucial role of innovation in economic growth, invested considerable effort into establishing STI statistics at the government level (Freeman and Soete, 2009).

Since then, benchmarking has become an essential element of the NIS concept. It makes it possible not only to compare the relative performance of countries, but also to identify internal system mismatches and weaknesses that lead to lagging innovative development (Niosi, 2002; OECD, 1997). Evaluation and comparison of NIS performance became a means of ensuring the public accountability of governments and justifying the choice of policy instruments (Edquist and Zabala, 2009; Soete et al., 2010). However, selecting a benchmarking method and gathering data of appropriate quality turned out to be a highly sophisticated challenge, and this work is still in progress.

Performance, Efficiency and Effectiveness

In order to properly address the issues of this paper, it is necessary to define and draw a line between the following three terms: performance, efficiency and effectiveness. These terms are frequently used as synonyms in the benchmarking literature. Nevertheless, there are certain contextual differences between them that need to be highlighted to ensure a clear understanding of benchmarking methods and instruments.

Benchmarking, in short, can be defined as relative performance evaluation (Bogetoft and Otto, 2010). Consequently, performance is the essential term in benchmarking studies. However, a common definition of this term has not yet been established, and the existing formulations are rather elusive. (Edquist and Zabala, 2009) equate performance simply with the outputs of the system under evaluation. (Lebas, 1995) provides a more detailed definition: performance is about deploying and managing well the components of the causal model(s) that lead to the timely attainment of stated objectives within constraints specific to the firm and to the situation - a definition that, although formulated for firm-level performance, can easily be extended to a broader context.

Efficiency and effectiveness, in this sense, may be viewed as two dimensions of performance. (Yu and Lin, 2008) even treat them as basic indicators, or measures, of performance that must be distinguished and measured separately. The basic dictionary definitions of both terms signal the difference between them: efficient describes something working properly; effective, in turn, describes something working in a way that produces the needed results. Innovation efficiency, therefore, is about the innovation process, while innovation effectiveness is about the results and outcomes of this process. Both are equally important for better innovation performance.

(Mahroum and Al-Saleh, 2013) take this discussion further, stating that efficiency and effectiveness are inversely related: considerable effort is needed to achieve an increase in effectiveness, and this effort may impede efficiency. In this sense, good governance of a system implies carefully maintaining a balance between efficiency and effectiveness in a way that boosts overall performance.

This distinction can also help in navigating the diverse world of innovation performance evaluation methods. The composite innovation index approach claims to compare the innovation performance of different economies, but what it really does is benchmark their innovation effectiveness. It aggregates the results each economy achieves on the individual indicators of a composite and provides a final score that, in fact, tells nothing about the process which led to these results (Edquist and Zabala-Iturriagagoitia, 2015; Nasierowski and Arcelus, 2012).

Innovation efficiency, or, in simple terms, the process of transforming innovation inputs into innovation outputs (Bogetoft and Otto, 2010a; Nasierowski and Arcelus, 2012), is measured by a different set of methods that do not simply look at indicator values but investigate the links and interrelations between them. Econometric tests use theoretical propositions for choosing explanatory variables and formulating hypotheses (Cai, 2011; Guan and Chen, 2012). DEA is a popular non-parametric instrument that studies the capabilities of units (countries or regions) in transforming innovation inputs into innovation outputs (Färe and Grosskopf, 2000; Filippetti and Peyrache, 2011; Nasierowski and Arcelus, 2012, 2003). It allows estimating the innovation efficiency frontier in the absence of a clear theoretical model of the innovation process (Guan and Chen, 2012; Hollanders and Celikel-Esser, 2007).
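To make the DEA idea concrete, the classic input-oriented CCR envelopment model under constant returns to scale can be solved as a small linear program: for each decision-making unit, find the smallest factor theta by which its inputs could be scaled down while a weighted combination of the other units still produces at least its outputs. The data below are invented (two inputs, one output per unit), and this is only a sketch using SciPy, not the exact model specification used in this study.

```python
# Input-oriented CCR DEA (constant returns to scale) as a linear program.
# Decision variables: [theta, lambda_1, ..., lambda_n].
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 2.0], [4.0, 4.0], [6.0, 5.0]])  # inputs (rows = DMUs)
Y = np.array([[1.0], [1.0], [1.0], [1.0]])                      # outputs

def ccr_efficiency(X, Y, k):
    """Efficiency of DMU k: min theta s.t. a lambda-weighted reference set
    uses at most theta times k's inputs and produces at least k's outputs."""
    n, m = X.shape           # n DMUs, m inputs
    s = Y.shape[1]           # s outputs
    c = np.r_[1.0, np.zeros(n)]                  # minimise theta
    A_in = np.c_[-X[k].reshape(m, 1), X.T]       # sum_j lambda_j x_ij <= theta * x_ik
    A_out = np.c_[np.zeros((s, 1)), -Y.T]        # sum_j lambda_j y_rj >= y_rk
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

scores = [ccr_efficiency(X, Y, k) for k in range(len(X))]
```

Adding the convexity constraint sum of lambdas equal to one would turn this into the BCC model under variable returns to scale; the super-efficiency variant simply excludes DMU k itself from the reference set, which lets efficient units obtain scores above one and thus be ranked.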

Although these studies differ favorably from those using indicator aggregates (since the choice of variables and specifications is theoretically driven), their use is limited by high requirements for data quality and availability. It can also be suggested that composite indices and scoreboards have relatively higher potential for communicating the state of affairs in the innovation sphere to a non-expert audience, since the descriptions of the mathematical models used in such studies are complicated and not always clear (Cornell University et al., 2017; Kotsemir, 2013).

In any case, all these methodological approaches provide a one-sided view of performance - either from the efficiency or from the effectiveness perspective. To capture both, scholars develop combinatory approaches, for instance, using composite index scores to run DEA tests. These are described further in the following section.

NIS performance measurement techniques

The continuous search for new methods of benchmarking the innovation performance of NISs is partly stimulated by the fact that there is no agreed innovation measurement framework (Guan and Chen, 2012; Mahroum and Al-Saleh, 2013) - and most probably one will never be established, due to the multidimensional and varied nature of STI-related activities (Patel and Pavitt, 1994). The creativity aspect of innovation is highly elusive, and even irrational to some extent; hence, there is permanent dissatisfaction with the ability of well-established indicators and approaches to capture it (Lhuillery et al., 2016).

(Carayannis and Provance, 2008) defined six types of indicators by stage of innovation process:

1. Input indicators - resources put into innovation process (intellectual, human and technological capital);

2. Process indicators - organizational and process management systems;

3. Performance indicators - results of organizational management;

4. Output indicators - short-term results of innovative activity (new product sales, patents, patent quotes etc.);

5. Outcome indicators - medium- to long-term results of innovative activity (firm profit margins, market share, firm growth rate, changing technological standards etc.);

6. Impact indicators - long-term sustained advantage from innovation.

In measuring innovative performance, scholars generally focus on input and output indicators as the two basic efficiency components. Finding a proper proxy for innovation outputs is a substantial measurement problem (Cai, 2011; Edquist and Zabala, 2009; Grupp and Mogee, 2004). (Lundvall, 2010) stresses that the indicators chosen for measuring NIS performance should reflect the efficiency and effectiveness in producing, diffusing and exploiting economically useful knowledge. Such indicators are not easily developed, as they usually describe the process of innovation only partly, and the data collection procedures used to obtain them are generally complex.

As such, the innovation outputs of an NIS may exert significant influence on overall national capacity - its economic growth, military power or social well-being - but this does not mean that the NIS can be measured by them, since its contribution to them is indirect. Thus, (Edquist and Zabala, 2009) warn against confusing NISs with the countries in which they operate, but they also disapprove of the approach used by many scholars (e.g. (Cai, 2011)) that treats the NIS as a specific sector of the economy producing patents and publications.

Indeed, using such intermediary factors as patents, publications or citations as outputs of an NIS has raised considerable criticism and skepticism. Considering patents: on the one hand, it is argued that they may serve as indicators of inventions rather than of innovations that reach the market; in other words, they are not always used in the creation of innovative products and processes (Edquist and Zabala, 2009). On the other hand, although patenting activities may generate knowledge spillovers and create certain incentives to innovate, this relation is largely sector-dependent (Holgersson and Kekezi, 2017).

Thus, constructing indicators for measuring NIS performance became both a challenge and a necessary task for enabling valid benchmarking exercises. The OECD took it up as a priority and developed a set of indicators for core knowledge flows that, supplemented by the OECD manuals for collecting data on R&D (the Frascati Manual) and innovation (the Oslo Manual), provided the ground for statistical comparisons of NISs. The European Union (EU) started an initiative based on the Oslo Manual to develop a common methodological approach for the Community Innovation Surveys (CIS). These surveys are harmonized to provide unified data on the innovation activities of different sectors and regions in the EU, giving a comprehensive understanding of NIS development and performance in the participating countries.

Quantitative benchmarking studies started from simple comparisons of selected indicators (Patel and Pavitt, 1994) but then evolved into more complex measurement techniques. Popular methods today can be roughly classified into two groups: composite innovation index approaches and economic modelling approaches (Grupp and Mogee, 2004), among which non-parametric efficiency frontier methods (such as DEA) have lately gained much popularity and present a separate, specific category (Cai, 2011). There are also many examples of studies in which these approaches are mixed to mitigate their inherent drawbacks, being thus seen as complementary rather than substitutive (Cai, 2011; Filippetti and Peyrache, 2011). The composite innovation index approach is of particular interest here, since it has gained substantial popularity among policymakers. Below is a review of this approach, its benefits, drawbacks and practical uses.

More advanced ways to interpret innovation efficiency (such as that presented in (Edquist, 2011)) recognize the non-linear nature of the input-output relationship and explore the determinants of the innovation process. The complexity of such concepts implies using mixes of methodological approaches to capture all the differences in NIS performance. In (Edquist, 2011) itself, descriptive diagnostic analysis was combined with the use of statistical data. In several studies (Filippetti and Peyrache, 2011; Hollanders and Celikel-Esser, 2007; Matei and Aldea, 2012; Nasierowski and Arcelus, 2012), existing composite innovation indices were used as a basis for the calculation of DEA efficiency scores. GII reports also contain a statistical audit chapter that includes a similar exercise to estimate the distance of GII scores from the efficiency frontier (Saisana et al., 2017). Other works focus on enhancing DEA calculations by creating two-step models in which the second step applies other data analysis techniques (usually econometrics) to the DEA results (Cai, 2011; Guan and Chen, 2012; Nasierowski and Arcelus, 2003).

There are also studies that use more advanced analytical instruments than simple averaging to improve the weighting schemes of indices. (Carayannis et al., 2017) use multiple criteria decision analysis (MCDA) - a class of analytical methods designed for solving decision and planning problems constrained by multiple criteria, using hierarchies, outranking, etc. (Tudela et al., 2006) - to assign new weights to EIS scores that would better reflect the preferences of innovation stakeholders. Specifically, the authors combine two MCDA methods - the Analytical Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) - in a two-step methodology to create a new weighting scheme and ranking.
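For illustration, the TOPSIS half of such a two-step scheme can be sketched as follows. The criteria values and weights below are invented (the cited study derives the weights via AHP), and all criteria are assumed to be benefit-type, i.e. higher is better.

```python
# Minimal TOPSIS sketch: vector-normalise, weight, then rank alternatives
# by closeness to the ideal solution.
import numpy as np

def topsis(matrix, weights):
    """Return closeness-to-ideal coefficients (higher = better)."""
    M = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    R = M / np.linalg.norm(M, axis=0)            # normalise each criterion column
    V = R * w                                    # weighted normalised matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)   # ideal and anti-ideal solutions
    d_plus = np.linalg.norm(V - ideal, axis=1)   # distance to ideal
    d_minus = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal
    return d_minus / (d_plus + d_minus)

# Three hypothetical countries scored on three innovation criteria
scores = topsis([[0.6, 0.8, 0.7],
                 [0.9, 0.5, 0.6],
                 [0.4, 0.9, 0.8]],
                weights=[0.5, 0.3, 0.2])
ranking = np.argsort(-scores)   # indices of countries, best first
```

The closeness coefficient lies in [0, 1], so the output can be read directly as a ranking score, which is what makes TOPSIS a natural second step after AHP has fixed the weights.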

(Carayannis et al., 2017; Crespo and Crespo, 2016) apply fuzzy-set qualitative comparative analysis to GII data to test the causal relations between innovation enablers and innovation performance. Qualitative comparative analysis (QCA) is a data analysis method that identifies logical conclusions supported by a data set (Rihoux, 2006). The use of fuzzy sets is one of the modifications extending the logic of the original QCA, which allowed the GII data to be split into groups (high-income/low-income countries) (Crespo and Crespo, 2016).

This work builds on the methodological frameworks of the aforementioned research and aims to contribute to them by comparing the most prominent composite innovation indices - GII and EIS - in terms of how well they reflect the efficiency of the NISs they study in transforming innovation inputs into innovation outputs. Particular attention is paid to the case of Russia - an example of inefficient resource spending - which is interpreted using the results of the analysis.

Composite innovation index approach

It is generally accepted that a single indicator is insufficient to explain the innovation process; therefore, to draw more comprehensive conclusions, NISs are benchmarked through various indicator aggregates (Carayannis and Provance, 2008; Lundvall, 2010). Composite innovation indices, defined as a concise quantitative indicator of the innovative capability of institutions, researchers, businesses and territories in the selected areas of research (Wonglimpiyarat, 2010), are well-known examples of such studies (Nasierowski and Arcelus, 2012).

There are obvious advantages to this kind of approach: firstly, the results of such studies are usually presented in the user-friendly form of rankings and scoreboards that are clear and convenient and can thus be used by a wide audience, including non-experts (Crespo and Crespo, 2016; Saltelli et al., 2005). Secondly, indicator aggregates help systematize the segmental information provided by single indicators (Davis et al., 2012; Hollanders and Janz, 2013; Vértesy, 2016). Finally, the advancing quality of national innovation statistics has made it possible to index large numbers of diverse countries (for instance, the latest edition of the Global Innovation Index (GII) covers 127 economies (Cornell University et al., 2017)).

However, the drawbacks of the composite innovation index approach stem directly from its benefits. On the one hand, it is constantly criticized for oversimplification and an inability to capture the specific features of the NISs studied, problems rooted in several common assumptions embedded in the structure of such indexes. For instance, (Archibugi et al., 2009; Holgersson and Kekezi, 2017) argue that using countries as units of analysis ignores the heterogeneity of particular regions, whose individual performance and rates of development are usually far from equal.

Scholars also find grounds for criticism in the obscure indicator selection procedures and weighting schemes used in the construction of indexes, blaming them for susceptibility to manipulation (Archibugi et al., 2009; Grupp and Mogee, 2004; Grupp and Schubert, 2010; Kozłowski, 2015). In the case of an equal weighting scheme (used, for instance, in the European Innovation Scoreboard (EIS)), transparency is said to be achieved at the expense of biases and possible economic implausibility (Carayannis et al., 2017; Grupp and Mogee, 2004; Grupp and Schubert, 2010).
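The equal weighting scheme at issue here is computationally trivial, which is precisely the source of both its transparency and the criticism. A toy sketch in the spirit of the EIS Summary Innovation Index, with invented indicator values: min-max normalize each indicator across countries, then take the unweighted mean.

```python
# Toy composite index: min-max normalisation followed by equal-weight
# averaging. The raw indicator values are invented for illustration.
import numpy as np

raw = np.array([[1.2, 34.0, 0.8],    # country A: three indicators measured
                [0.7, 51.0, 1.5],    # country B: on very different scales
                [2.1, 20.0, 0.4]])   # country C

lo, hi = raw.min(axis=0), raw.max(axis=0)
normalised = (raw - lo) / (hi - lo)      # rescale each indicator to [0, 1]
composite = normalised.mean(axis=1)      # equal weights across indicators
```

Note how the result depends entirely on which indicators enter the aggregate and on the min-max range observed in the sample: adding or removing one country can shift every other country's score, which is one concrete form of the manipulation concern raised above.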

Davis et al. (2012) expand the manipulation argument, claiming that composite indexes should be viewed as a technology of global governance, since their composition is more likely to be based on certain ideological propositions than on theoretical premises. This statement refers to the inherent issue that concepts of technological development measurement are acceptable for the rich countries where they were developed but inappropriate for poorer ones (James, 2006).

In this sense, constructing composite innovation indicators should be considered an art rather than an exact science (Grupp and Schubert, 2010), and many scholars (as well as international organizations) have made efforts to create an index that mitigates the inherent drawbacks better than others. Blackman et al. (1973) made one of the first attempts to build an innovation index at the sectoral level, using factor analysis as the weighting method. Holgersson and Kekezi (2017) is a curious example: their index includes so-called variables for non-innovation - unemployment and crime rates - that negatively affect the final innovation performance.

Wonglimpiyarat (2010) developed an index focusing specifically on the innovative capacity (or capability) of nations, which she defined as the ability to make major improvements and modifications to existing technologies and to create new technologies. The structure of the national innovation capacity index includes five levels of capability (organization, process, service, product and marketing innovation capability). It is based on the innovation competitiveness factors of the International Institute for Management Development (IMD) and the World Economic Forum (WEF).

Carayannis and Provance (2008) created a firm-level innovation index framework consisting of three factors of organizational innovation (the 3Ps), each of which, in turn, includes three key dimensions:

1. Posture - the position of an organization within the innovation system (regional, industrial, national);

2. Propensity - a reflection of processes, routines and capabilities, including the organizational culture;

3. Performance - the lasting results of innovation: outputs (immediate results of innovation), outcomes (mid-range results such as revenues contributed by new products) and impacts (lasting, long-range benefits).

Mahroum and Al-Saleh (2013) developed the Innovation Efficacy Index - an index aimed at capturing both the efficiency and the effectiveness of NISs.

The index frameworks reviewed above, however, are of interest only as examples of disparate research efforts in the field that influence the advancement of the composite index approach but have not entered international practice. Therefore, the greatest attention is paid to composite indexes developed by global organizations. The common advantages of such indexes include verified data, wide country coverage and sound approaches to data processing and indicator selection that involve multiple testing. Such careful treatment of index development procedures can be explained by the significant public attention that composite innovation indexes attract and the need of such organizations to maintain their reputation. In this paper, two of the most prominent innovation indexes - GII and EIS - are investigated; their background and structure are therefore briefly reviewed below.


First published in 2007, GII is the result of the collaborative work of three major organizations - INSEAD, Cornell University and the World Intellectual Property Organization (WIPO). More than 30 data sources are used for index calculation, including large, highly reputable organizations such as the International Telecommunication Union (ITU), the World Bank (WB) and the WEF.

Several specific characteristics of GII composition can be identified:

GII has a complex three-level structure: the index consists of seven sub-indexes (or pillars, as they are called in GII) that are separated into sub-pillars (three sub-pillars in each pillar), which, in turn, are composed of individual indicators. Pillar and sub-pillar scores are calculated as weighted averages of sub-pillar scores and individual indicators, respectively.

The composition of the index embeds the simplest understanding of efficiency - the ratio of outputs to inputs. The seven pillars are divided between the Input and Output dimensions of GII (the first five are input pillars, the last two output pillars), which constitute the Innovation Efficiency Ratio - an additional GII measure calculated as the Output Sub-Index divided by the Input Sub-Index.
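This aggregation logic can be sketched in a few lines. The pillar scores below are hypothetical, the sub-indexes are taken as simple averages of their pillars, and the actual GII normalization and weighting are more elaborate, so this is an illustration of the structure rather than the official computation:

```python
# Illustrative pillar scores on a 0-100 scale (hypothetical values,
# not taken from any actual GII edition).
input_pillars = {                     # pillars 1-5 form the Input Sub-Index
    "Institutions": 70.0,
    "Human capital and research": 55.0,
    "Infrastructure": 60.0,
    "Market sophistication": 50.0,
    "Business sophistication": 45.0,
}
output_pillars = {                    # pillars 6-7 form the Output Sub-Index
    "Knowledge and technology outputs": 40.0,
    "Creative outputs": 48.0,
}

def sub_index(pillars):
    """Sub-index taken here as the simple average of its pillar scores."""
    return sum(pillars.values()) / len(pillars)

input_score = sub_index(input_pillars)          # 56.0
output_score = sub_index(output_pillars)        # 44.0
overall = (input_score + output_score) / 2      # overall GII-style score: 50.0
efficiency_ratio = output_score / input_score   # Efficiency Ratio: ~0.79

print(input_score, output_score, overall, round(efficiency_ratio, 2))
```

A country with modest inputs but comparatively strong outputs can thus post a high Efficiency Ratio while ranking mid-table on the overall score, which is why the two measures tell different stories.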

One of the focal points for GII developers is to create an overarching index that covers as many countries and aspects of innovation performance as possible. Accordingly, the total number of indicators in the latest GII edition is 81, and their selection is highly diverse, ranging from the commonly used GERD to rather unusual ones, such as Wikipedia edits and YouTube uploads. This is achieved, to a certain extent, at the expense of overall index quality: for example, although GII data coverage improves from year to year (Cornell University et al., 2017), only 4 of the 127 countries included in the GII 2017 ranking have zero missing values, while 9 have more than 20 missing values.

Some indicators included in the GII composition are themselves composite indexes calculated by various organizations: for instance, indicators 1.1.1 and 1.1.2 - Political stability and absence of violence/terrorism and Government effectiveness - are World Bank Worldwide Governance indexes. Compared to simple indicators based directly on raw data, such structural elements may introduce additional bias into an already inevitably imperfect composite.

The current (2017) edition of GII has the following structure:

1. Institutions;

1.1. Political environment;

1.1.1. Political stability and absence of violence/terrorism index (WB);

1.1.2. Government effectiveness index (WB);

1.2. Regulatory environment;

1.2.1. Regulatory quality index (WB);

1.2.2. Rule of law index (WB);

1.2.3. Cost of redundancy dismissal (sum of notice period and severance pay for redundancy dismissal (in salary weeks, averages for workers with 1, 5, and 10 years of tenure, with a minimum threshold of 8 weeks) (WB);

1.3. Business environment;

1.3.1. Ease of starting a business (distance to frontier) (WB);

1.3.2. Ease of resolving insolvency (distance to frontier) (WB);

1.3.3. Ease of paying taxes (distance to frontier) (WB);

2. Human capital and research;

2.1. Education;

2.1.1. Government expenditure on education (% of GDP) (UNESCO);

2.1.2. Government expenditure on education per pupil, secondary (% of GDP per capita) (UNESCO);

2.1.3. School life expectancy, primary to tertiary education (years) (UNESCO);

2.1.4. PISA average scales in reading, mathematics, and science (OECD PISA);

2.1.5. Pupil-teacher ratio, secondary (UNESCO)

2.2. Tertiary education;

2.2.1. School enrolment, tertiary (% gross) (UNESCO);

2.2.2. Tertiary graduates in science, engineering, manufacturing, and construction (% of total tertiary graduates) (UNESCO);

2.2.3. Tertiary-level inbound mobility rate (%) (UNESCO);

2.3. Research and development (R&D);

2.3.1. Researchers, full-time equivalence (FTE) (per million population) (UNESCO);

2.3.2. Gross expenditure on R&D (GERD) (% of GDP);

2.3.3. Average expenditure of the top 3 global companies by R&D, million $US (EU JRC);

2.3.4. Average score of the top 3 universities at the QS world university ranking (QS Quacquarelli Symonds Ltd);

3. Infrastructure;

3.1. Information and communication technologies (ICTs);

3.1.1. ICT access index (ITU);

3.1.2. ICT use index (ITU);

3.1.3. Government's online service index (United Nations (UN));

3.2. General infrastructure;

3.2.1. Electricity output (kWh per capita) (International Energy Agency (IEA));

3.2.2. Logistics Performance Index (World Bank and Turku School of Economics);

3.2.3. Gross capital formation (% of GDP) (International Monetary Fund (IMF));

3.3. Ecological sustainability;

3.3.1. GDP per unit of energy use (2010 PPP$ per kg of oil equivalent) (IEA);

3.3.2. Environmental Performance Index (Yale University and Columbia University);

3.3.3. ISO 14001 Environmental management systems -- Requirements with guidance for use: Number of certificates issued (per billion PPP$ GDP) (International Organization for Standardization (ISO));

4. Market sophistication;

4.1. Credit;

4.1.1. Ease of getting credit (distance to frontier) (WB);

4.1.2. Domestic credit to private sector (% of GDP) (IMF);

4.1.3. Microfinance institutions: Gross loan portfolio (% of GDP) (IMF);

4.2. Investment;

4.2.1. Ease of protecting minority investors (distance to frontier) (WB);

4.2.2. Market capitalization of listed domestic companies (% of GDP) (WB);

4.2.3. Venture capital per investment location: Number of deals (per billion PPP$ GDP) (Thomson Reuters);

4.3. Trade, competition and market scale;

4.3.1. Tariff rate, applied, weighted mean, all products (%) (WB);

4.3.2. Intensity of local competition (average answer to the survey question: In your country, how intense is competition in the local markets? [1 = not intense at all; 7 = extremely intense]) (WEF);

4.3.3. Domestic market size as measured by GDP, billion PPP$ (WB);

5. Business sophistication;

5.1. Knowledge workers;

5.1.1. Employment in knowledge-intensive services (% of workforce) (International Labour Organization Statistic Database (ILOSTAT);

5.1.2. Firms offering formal training (% of firms) (WB);

5.1.3. GERD: Performed by business enterprise (% of GDP) (UNESCO);

5.1.4. GERD: Financed by business enterprise (% of total GERD);

5.1.5. Females employed with advanced degrees, % total employed (25+ years old) (ILOSTAT);

5.2. Innovation linkages;

5.2.1. State of cluster development (average answer to the survey question: In your country, how widespread are well-developed and deep clusters (geographic concentrations of firms, suppliers, producers of related products and services, and specialized institutions in a particular field)? [1 = nonexistent; 7 = widespread in many fields]) (WEF);

5.2.2. University/industry research collaboration (average answer to the survey question: In your country, to what extent do businesses and universities collaborate on research and development (R&D)? [1 = do not collaborate at all; 7 = collaborate extensively]) (WEF);

5.2.3. GERD: Financed by abroad (% of total GERD) (UNESCO);

5.2.4. Joint ventures/strategic alliances: Number of deals, fractional counting (per billion PPP$ GDP) (Thomson Reuters);

5.2.5. Number of patent families filed by residents in at least two offices (per billion PPP$ GDP) (WIPO);

5.3. Knowledge absorption;

5.3.1. Charges for use of intellectual property n.i.e., payments (%, total trade) (World Trade Organization (WTO));

5.3.2. High-tech net imports (% of total trade) (UN COMTRADE Database);

5.3.3. Telecommunications, computers, and information services imports (% of total trade) (WTO);

5.3.4. Foreign direct investment (FDI), net inflows (% of GDP, three-year average) (IMF);

5.3.5. Researchers in business enterprise (%) (UNESCO);

6. Knowledge and technology outputs;

6.1. Knowledge creation;

6.1.1. Number of resident patent applications filed at a given national or regional patent office (per billion PPP$ GDP) (WIPO);

6.1.2. Number of international patent applications filed by residents at the Patent Cooperation Treaty (per billion PPP$ GDP) (WIPO);

6.1.3. Number of utility model applications filed by residents at the national patent office (per billion PPP$ GDP) (WIPO);

6.1.4. Number of scientific and technical journal articles (per billion PPP$ GDP) (Clarivate Analytics);

6.1.5. Citable documents H index (number of published articles (H) that have received at least H citations) (SCImago Journal & Country Rank);

6.2. Knowledge impact;

6.2.1. Growth rate of GDP per person engaged (constant 1990 PPP$) (Conference Board Total Economy Database);

6.2.2. New business density (new registrations per thousand population 15-64 years old) (WB);

6.2.3. Total computer software spending (% of GDP) (ICT);

6.2.4. ISO 9001 Quality management systems-- Requirements: Number of certificates issued (per billion PPP$ GDP) (ISO);

6.2.5. High-tech and medium-high-tech output (% of total manufactures output) (United Nations Industrial Development Organization (UNIDO));

6.3. Knowledge diffusion;

6.3.1. Charges for use of intellectual property n.i.e., receipts (%, total trade) (WTO);

6.3.2. High-tech net exports (% of total trade) (UN COMTRADE Database);

6.3.3. Telecommunications, computers, and information services exports (% of total trade) (WTO);

6.3.4. Foreign direct investment (FDI), net outflows (% of GDP, three-year average) (IMF);

7. Creative outputs;

7.1. Intangible assets;

7.1.1. Number of trademark applications issued to residents at a given national or regional office (per billion PPP$ GDP) (WIPO);

7.1.2. Number of designs contained in industrial design applications filed at a given national or regional office (per billion PPP$ GDP) (WIPO);

7.1.3. ICTs and business model creation (average answer to the question: In your country, to what extent do ICTs enable new business models? [1 = not at all; 7 = to a great extent]) (WEF);

7.1.4. ICTs and organizational model creation (average answer to the question: In your country, to what extent do ICTs enable new organizational models (e.g., virtual teams, remote working, telecommuting) within companies? [1 = not at all; 7 = to a great extent]) (WEF);

7.2. Creative goods and services;

7.2.1. Cultural and creative services exports (% of total trade) (WTO);

7.2.2. Number of national feature films produced (per million population 15-69 years old) (UNESCO);

7.2.3. Global entertainment and media market (per thousand population 15-69 years old) (PricewaterhouseCoopers, PwC);

7.2.4. Printing and publishing manufactures output (% of manufactures total output) (UNIDO);

7.2.5. Creative goods exports (% of total trade) (UN COMTRADE Database);

7.3. Online creativity;

7.3.1. Generic top-level domains (gTLDs) (per thousand population 15-69 years old) (ZookNIC Inc, UN);

7.3.2. Country-code top-level domains (ccTLDs) (per thousand population 15-69 years old) (ZookNIC Inc, UN);

7.3.3. Wikipedia yearly edits by country (per million population 15-69 years old) (Wikimedia Foundation, UN);

7.3.4. Number of video uploads on YouTube (scaled by population 15-69 years old) (Google, UN).


EIS is a composite index for comparative analysis of innovation performance that was introduced as part of the EU Lisbon Strategy (Hollanders, 2009). The Lisbon Strategy, adopted in 2000 in response to increased competition with the leading economies - the USA and Japan - and as a means of overcoming the lack of technological capacity and innovation, was an action and development plan for EU member countries aimed at the renewal and sustainability of the economy, society and environment (Rodriguez et al., 2010). EIS was therefore primarily designed as a policymaking tool for efficient monitoring of progress towards the Strategy's targets (Rodriguez et al., 2010). A pilot study was launched in 2000, followed by full versions published every year since (Hollanders, 2009). The Maastricht Economic and Social Research Institute on Innovation and Technology (MERIT), under the guidance of the European Commission, is responsible for preparing the annual EIS reports, which are supplemented by an interactive visualization tool available on the EIS webpage. In 2010, the report was renamed the Innovation Union Scoreboard (IUS), but in 2016 the original title was restored.

Several features of EIS can be identified:

Interestingly, ranking countries in terms of their performance was not initially the purpose of EIS; it focused more on visualizing the progress made by member states and identifying problems and shortfalls (Hollanders, 2009). This standpoint is reflected in the name of the composite - a scoreboard, not an index.

The focus on tracking the progress made by member states in terms of innovation performance requires ensuring the continuity of EIS methodology. Nevertheless, several significant changes have been made since the creation of EIS: the number of indicators increased from 16 in the 2000 pilot study to 27 in the latest edition, and country coverage also improved (from 17 in 2000 to 36 in 2017). Three major revisions of the EIS methodology took place in 2008, 2010 and 2016. The first was conducted in response to emerging criticism, chiefly that EIS was overly focused on high-tech sectors and did not take new forms of innovation into account (Hollanders, 2009); the second broadened the scope of EIS and added research to the measurement framework in accordance with the new Innovation Union initiative; the last incorporated the recommendations and critical observations expressed at the 2016 OECD Blue Sky Forum (Hollanders and Es-Sadki, 2017). The latest edition of the EIS report provides time series dating back to 2010.

While GII calculation is based on data from various sources, EIS methodology relies predominantly on data from Eurostat, the official statistical office of the EU (after the 2017 revision of the EIS methodology, the share of Eurostat-based indicators even increased, from 60% (15 out of 25) in 2016 to approximately 70% (19 out of 27) in 2017).

Prior to 2016, EIS developers avoided using composites as indicators for EIS calculation. The 2017 revision, however, introduced such an indicator - namely, the Motivational Index (indicator 1.3.2) developed by the Global Entrepreneurship Monitor (GEM) on the basis of opinion survey data. Although EIS developers recognize the lower quality of this kind of measure, they consider the chosen index to best capture the link between entrepreneurship and innovation (Hollanders and Es-Sadki, 2017).

The composition of EIS 2017 measurement framework looks as follows:

1. Framework conditions;

1.1. Human resources;

1.1.1. New doctorate graduates per 1000 population aged 25-34 (Eurostat);

1.1.2. Percentage population aged 25-34 having completed tertiary education (Eurostat);

1.1.3. Percentage population aged 25-64 participating in lifelong learning (Eurostat);
