Efficiency of national innovation systems through the prism of composite innovation indexes: do they tell the same story?

Innovation systems concept. Performance, Efficiency and Effectiveness. NIS performance measurement techniques. Composite innovation index approach. Statistically significant correlation between GII and EIS efficiency scores. Works of Christopher Freeman.


1.2. Attractive research systems;

1.2.1. International scientific co-publications per million population (Eurostat);

1.2.2. Scientific publications among the top-10% most cited publications worldwide as percentage of total scientific publications of the country (CWTS Leiden University);

1.2.3. Foreign doctorate students as a percentage of all doctorate students (Eurostat);

1.3. Innovation-friendly environment;

1.3.1. Broadband penetration (Eurostat);

1.3.2. Opportunity-driven entrepreneurship (Motivational index) (GEM);

2. Investments;

2.1. Finance and support;

2.1.1. R&D expenditure in the public sector (% of GDP) (Eurostat);

2.1.2. Venture capital (% of GDP) (Invest Europe);

2.2. Firm investments;

2.2.1. R&D expenditure in the business sector (% of GDP) (Eurostat);

2.2.2. Non-R&D innovation expenditures (% of turnover) (Eurostat);

2.2.3. Enterprises providing training to develop or upgrade ICT skills of their personnel (Eurostat);

3. Innovation activities;

3.1. Innovators;

3.1.1. SMEs introducing product or process innovations (% of SMEs) (Eurostat);

3.1.2. SMEs introducing marketing or organizational innovations (% of SMEs) (Eurostat);

3.1.3. SMEs innovating in-house (percentage of SMEs) (Eurostat);

3.2. Linkages;

3.2.1. Innovative SMEs collaborating with others (% of SMEs) (Eurostat);

3.2.2. Public-private co-publications per million population (Eurostat);

3.2.3. Private co-funding of public R&D expenditures (% of GDP) (Eurostat);

3.3. Intellectual assets;

3.3.1. PCT patent applications per billion GDP (in PPS) (OECD);

3.3.2. Trademark applications per billion GDP (in PPS) (EUIPO and WIPO);

3.3.3. Design applications per billion GDP (in PPS) (EUIPO);

4. Impacts;

4.1. Employment impacts;

4.1.1. Employment in knowledge-intensive activities (% of total employment) (Eurostat);

4.1.2. Employment in fast-growing enterprises (% of total employment) (Joint Research Centre (JRC));

4.2. Sales impacts;

4.2.1. Exports of medium and high technology products as a share of total product exports (Eurostat and UN COMTRADE);

4.2.2. Knowledge-intensive services exports as % of total services exports (JRC);

4.2.3. Sales of new-to-market and new-to-firm innovations as % of turnover (Eurostat).

Methodology

· What is DEA

Efficiency frontier estimation is now widely used in relative performance evaluations (Bogetoft and Otto, 2010b). Frontier production function estimation is a measurement technique first introduced by Farrell (1957) that provided the basis for the subsequent rapid development of a variety of competing approaches (Hjalmarsson et al., 1996). Table 2 summarizes the taxonomy of these methods.

Table 2: A taxonomy of frontier methods

                   Deterministic                                Stochastic
Parametric         Corrected Ordinary Least Squares (COLS)      Stochastic Frontier Analysis (SFA)
Non-parametric     Data Envelopment Analysis (DEA)              Stochastic Data Envelopment Analysis (SDEA)

Source: (Bogetoft and Otto, 2010b)

DEA is often preferred to other methods due to the flexibility of its functional form and the absence of specific functional restrictions, which allow efficiency scores to be calculated directly from the actual data (Bogetoft and Otto, 2010b; Emrouznejad and Yang, 2018; Hjalmarsson et al., 1996; Hollanders and Celikel-Esser, 2007). It is often chosen for benchmarking exercises in which the composite innovation index approach is combined with frontier estimation, since there is no clear underlying theoretical model of the innovation process (it is a so-called "black box") (Filippetti and Peyrache, 2011; Hollanders and Celikel-Esser, 2007; Matei and Aldea, 2012; Nasierowski and Arcelus, 2012; Rousseau and Rousseau, 1997). For this reason, this method was chosen for the present study.

DEA was first proposed and developed by Charnes, Cooper and Rhodes as a means to solve the problem of unsatisfactory results derived from econometric benchmarking exercises with multiple and heterogeneous input and output variables (Kotsemir, 2013). The key terms of this method are presented in Table 3.

Table 3: Terms and definitions of DEA

Term

Definition

Decision Making Unit (DMU)

The object of analysis, capable of allocating resources and prioritizing results. DMUs are assumed to be homogeneous (using the same kinds of resources to obtain similar kinds of results). A DMU is considered efficient if its efficiency score equals 1 and all of its slacks are 0.

Inputs and Outputs

Data on resources (inputs) and results (outputs) for each DMU serving as parameters for the analysis. The general rule of thumb for the number of DMUs (n) relative to the number of inputs (m) and outputs (s) is:

\[ n \ge \max\{\, m \times s,\ 3(m + s) \,\} \]

(Cooper et al., 2007).

Pareto-Koopmans efficiency

A DMU is fully efficient if and only if it is not possible to improve any input or output without worsening some other input or output.

Can be presented as a simple ratio of weighted outputs to weighted inputs:

\[ \text{efficiency} = \frac{\sum_{r} u_r y_r}{\sum_{i} v_i x_i} \]

Efficiency score

The maximum ratio of weighted outputs to weighted inputs ranging from 0 to 1 that reflects the relative efficiency of each DMU.

Slacks

Input excesses and output shortfalls that lead to inefficiency

Weights

The coefficients assigned to the summarized inputs and outputs that maximize the efficiency score

Peers

The DMUs with efficiency scores = 1 that serve as references for less efficient ones

Reference set

Also called a "peer group" - the set of DMUs that serve as benchmarks for an inefficient unit

Source: (Bogetoft and Otto, 2010b; Charnes et al., 1979; Cooper et al., 2007; Wen, 2015).

The graphical representation of the method is shown in Figure 1. As can be seen from the graph, the basic idea behind DEA is to estimate an efficiency frontier (the convex line) that "envelops" all DMUs, so that efficient units (A, B, C, D, E, F) are placed on the frontier and the distance between the frontier and a DMU (DD1) indicates inefficiency. The envelopment can be either output-oriented (graph (a), maximization of outputs) or input-oriented (graph (b), minimization of inputs).

Figure 1: Graphical depiction of DEA

Source: (Rousseau and Rousseau, 1997).

DEA model specifications vary not only in their orientation but also in the assumptions they embed. The first proposed DEA model - the CCR model (named after its authors Charnes, Cooper and Rhodes) - was built on the assumption of constant returns to scale (CRS) of DMU activities (Charnes et al., 1979). It evaluates the proportional (or radial) efficiency of each DMU. Soon afterwards, the BCC (Banker-Charnes-Cooper) model was introduced, implying variable returns to scale (VRS) (Figure 2). The BCC model relaxes the assumption of the original CCR model and thus allows for the possibility that the average productivity attained at the most productive scale size may not be achievable at other scale sizes (Banker et al., 1984).

Figure 2: Concept of efficiency and returns to scale

Source: (Banker et al., 1984).
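For reference, the input-oriented CCR envelopment problem described above can be written in generic notation (this formulation follows the standard literature, e.g. Charnes et al., 1979, and Cooper et al., 2007, rather than any formula reproduced from this paper):

```latex
\min_{\theta,\,\lambda}\; \theta
\quad \text{s.t.} \quad
\sum_{j=1}^{n} \lambda_j x_{ij} \le \theta\, x_{io}, \quad i = 1,\dots,m; \qquad
\sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{ro}, \quad r = 1,\dots,s; \qquad
\lambda_j \ge 0,
```

where DMU o is the unit under evaluation. The BCC model adds the convexity constraint \(\sum_j \lambda_j = 1\), which replaces the constant-returns-to-scale frontier with a variable-returns-to-scale one.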

Byrnes et al. (1984) introduced a measure of scale efficiency - a simple ratio that can be calculated once the CRS and VRS efficiencies are known (Guan and Chen, 2012):

\[ \text{Scale efficiency} = \frac{\theta_{CRS}}{\theta_{VRS}} \]

The information given by the CCR and BCC efficiency scores is, however, incomplete. With scores ranging between 0 and 1, it provides an efficiency rating only for inefficient units, leaving all efficient units with a unity score. When more than one unit lies on the efficiency frontier (see Figure 2), it is impossible to distinguish between efficient DMUs that are more or less influential as peers for other DMUs. For a substantive analysis, it is beneficial to rank the efficient units themselves. To this end, Andersen and Petersen (1993) introduced the radial super-efficiency model, in which each efficient DMU is in turn removed from the solution set to see whether it can be dominated by some reference units (Bogetoft and Otto, 2010b; Cooper et al., 2007).

The interpretation of the results of this test is similar to that of the original DEA models - DMUs with efficiency scores equal to or greater than 1 are interpreted as efficient, while the rest are inefficient - with one exception: the super-efficiency calculation allows infinite scores. Such units are generally described as "hyper-efficient", meaning that there are no units against which to compare them, or that they have no efficient "peers" (Bogetoft and Otto, 2010b; Kutin et al., 2017). This calculation therefore complements the previous one, enabling a complete ranking of countries.
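As a minimal illustration of the difference between ordinary and super-efficiency scores, the following R sketch uses the Benchmarking package that accompanies Bogetoft and Otto (2010b); the data are invented for illustration only and the package is not the one used later in this study.

```r
# Toy example: 5 DMUs, one input, one output (invented numbers)
library(Benchmarking)

x <- matrix(c(2, 3, 5, 6, 8), ncol = 1)  # inputs
y <- matrix(c(1, 4, 5, 5, 6), ncol = 1)  # outputs

e  <- dea(x, y, RTS = "vrs", ORIENTATION = "in")   # ordinary VRS scores (at most 1)
se <- sdea(x, y, RTS = "vrs", ORIENTATION = "in")  # super-efficiency (may exceed 1 or be Inf)

eff(e)   # all efficient DMUs receive exactly 1
eff(se)  # efficient DMUs receive distinct scores; "hyper-efficient" units may receive Inf
```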

Approach

An additional aim of this paper is to identify the specific advantages and disadvantages of Russia's position in the indexes in question. Since Russia is not included in the EIS ranking, its scores had to be calculated from Russian data, which constituted the initial step of the study. The following steps represent the full algorithm of the study:

1) Calculating EIS scores for Russia in 2016 using EIS methodology (Hollanders & Es-Sadki, 2017):

a. Gathering data;

Note: 16 out of 27 indicator values for Russia are presented in "Annex H: International Data" of the original 2017 EIS report (Hollanders & Es-Sadki, 2017). Data for the remaining 11 indicators were collected from relevant sources (see Table 4).

b. Identifying outliers;

Note: “Positive outliers are identified as those country scores which are higher than the mean across all countries plus twice the standard deviation. Negative outliers are identified as those country scores which are lower than the mean across all countries minus twice the standard deviation” (Hollanders & Es-Sadki, 2017).

c. Determining maximum and minimum values;

d. Performing square root transformation for highly skewed data (indicators 1.3.2, 3.2.2, 3.3.1 and 3.3.2, see Table 4);

e. Re-scaling (normalizing) scores;

Re-scaling formula:

\[ \hat{x} = \frac{x - \min(x)}{\max(x) - \min(x)} \]

where:

\(\hat{x}\) - re-scaled score;

\(x\) - raw value;

\(\max(x)\) - the highest score found for the whole time period within all countries (excluding positive outliers);

\(\min(x)\) - the lowest score found for the whole time period within all countries (excluding negative outliers);

f. Calculating composite scores for dimensions and Summary Innovation Index.

Note: dimension composite scores and summary index are arithmetic means of re-scaled scores of every indicator.
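Steps (b)-(f) can be sketched in R roughly as follows. This is a simplified sketch, not the official EIS code: `raw` is assumed to be a countries-by-indicators matrix of raw values for a single year, whereas the actual EIS procedure pools several years of data when determining minima and maxima.

```r
# Simplified sketch of the EIS normalisation procedure (Hollanders & Es-Sadki, 2017)
skewed <- c("1.3.2", "3.2.2", "3.3.1", "3.3.2")  # indicators flagged as highly skewed

rescale_indicator <- function(v, transform = FALSE) {
  if (transform) v <- sqrt(v)                       # d. square-root transformation
  m  <- mean(v, na.rm = TRUE)
  s  <- sd(v, na.rm = TRUE)
  ok <- !is.na(v) & v > m - 2 * s & v < m + 2 * s   # b. outliers lie beyond mean +/- 2 SD
  lo <- min(v[ok])                                  # c. minimum, excluding negative outliers
  hi <- max(v[ok])                                  # c. maximum, excluding positive outliers
  pmin(pmax((v - lo) / (hi - lo), 0), 1)            # e. re-scaled score, clipped to [0, 1]
}

scores <- sapply(colnames(raw), function(ind)
  rescale_indicator(raw[, ind], transform = ind %in% skewed))

summary_innovation_index <- rowMeans(scores, na.rm = TRUE)  # f. mean of re-scaled scores
```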

The values, normalized scores and data sources are presented in Table 4.

Table 4: Composite score calculation for Russia.

Innovation dimension / indicator

Values

Scores

Source

SUMMARY INNOVATION INDEX

0,284

1. FRAMEWORK CONDITIONS

0,394

1.1 Human resources

0,738

1.1.1 New doctorate graduates per 1000 population aged 25-34

1,430

0,404

EIS2017

1.1.2 Population completed tertiary education

53,5

0,962

EIS2017

1.1.3 Lifelong learning

24

0,848

Индикаторы образования 2017 [Education Indicators 2017], p. 40

1.2 Attractive research systems

0,084

1.2.1 International scientific co-publications

89,9

0,024

EIS2017

1.2.2 Scientific publications among top 10% most cited

3,51

0,133

EIS2017

1.2.3 Foreign doctorate students

6,774

0,093

Rosstat

1.3 Innovation-friendly environment

0,345

1.3.1 Broadband penetration

10,6

0,424

Индикаторы цифровой экономики: 2017 [Digital Economy Indicators 2017], p. 117

1.3.2 Opportunity-driven entrepreneurship

1,492

0,265

GEM

2. INVESTMENTS

0,276

2.1 Finance and support

0,171

2.1.1 R&D expenditure in the public sector

0,46

0,339

EIS2017

2.1.2 Venture capital investments

0,001

0,003

RVCA; World Bank

2.2 Firm investments

0,380

2.2.1 R&D expenditure in the business sector

0,71

0,270

EIS2017

2.2.2 Non-R&D innovation expenditure

0,929

0,490

Индикаторы инновационной деятельности 2017 [Indicators of Innovation Activity 2017], p. 185; Rosstat

2.2.3 Enterprises providing ICT training

n/a

n/a

3. INNOVATION ACTIVITIES

0,144

3.1 Innovators

0,000

3.1.1 SMEs with product or process innovations

4,7

0,000

EIS2017

3.1.2 SMEs with marketing or organisational innovations

2,4

0,000

EIS2017

3.1.3 SMEs innovating in-house

7,66

0,000

Индикаторы инновационной деятельности 2017 [Indicators of Innovation Activity 2017], p. 59

3.2 Linkages

0,386

3.2.1 Innovative SMEs collaborating with others

1

0,000

EIS2017

3.2.2 Public-private co-publications

0,9

0,023

EIS2017

3.2.3 Private co-funding of public R&D expenditures

0,07

0,749

EIS2017

3.3 Intellectual assets

0,127

3.3.1 PCT patent applications

0,5385

0,243

EIS2017

3.3.2 Trademark applications

2,4739

0,121

EIS2017

3.3.3 Design applications

0,17

0,017

EIS2017

4. IMPACTS

0,421

4.1 Employment impacts

1,000

4.1.1 Employment in knowledge-intensive activities

33,59

1,000

Rosstat

4.1.2 Employment fast-growing firms innovative sectors

n/a

n/a

4.2 Economic effects

0,228

4.2.1 Medium & high tech product exports

13

0,000

EIS2017

4.2.2 Knowledge-intensive services exports

65,4

0,649

EIS2017

4.2.3 Sales of new-to-market and new-to-firm innovations

6,3

0,036

Индикаторы инновационной деятельности 2017 [Indicators of Innovation Activity 2017], p. 125

Notes: values whose source is given as "EIS2017" are taken from the EIS 2017 report; the remaining values are the author's calculations based on data from the other sources listed. Indicators marked "n/a" could not be calculated due to missing Russian data.

2) Aggregating GII and EIS scores into a database. Determining input and output values.

a. For GII, input values include pillars 1-5; output values - pillars 6-7.

b. For EIS, determining input and output values requires recombining the indicators within the sub-indices. One problem with the new 2017 EIS measurement framework, compared to the old one, is that it is not aligned with the classic understanding of efficiency (transforming inputs into outputs). The 2016 EIS measurement framework had three sub-indices - "Enablers", "Firm activities" and "Outputs" - so the first two could easily be interpreted as input values in DEA terms and the third as an output value. The current measurement framework has no dedicated output sub-index (see Table 5); thus, there are only two ways to determine inputs and outputs in such an index composition:

1. To use individual EIS indicators as inputs and outputs. The major drawback of this solution is that it inevitably leads to substantially distorted, inflated DEA scores (Kotsemir, 2013).

2. To recombine the indicators, creating a separate sub-index for outputs. The revised measurement framework, as well as the 2016 and 2017 editions, is presented in Table 5.

Table 5: EIS measurement frameworks.

Measurement frameworks

EIS 2016

EIS 2017

Revised

1. Enablers

1. Framework conditions

1. Framework conditions

1.1. Human resources

1.1. Human resources

1.1. Human resources

1.1.1. New doctorate graduates

1.1.1. New doctorate graduates

1.1.1. New doctorate graduates

1.1.2. Population completed tertiary education

1.1.2. Population completed tertiary education

1.1.2. Population completed tertiary education

1.1.3. Youth with at least upper secondary education

1.1.3. Lifelong learning

1.1.3. Lifelong learning

1.2. Attractive research systems

1.2. Attractive research systems

1.2. Open, excellent research systems

1.2.1. International scientific co-publications

1.2.1. International scientific co-publications

1.2.1. International scientific co-publications

1.2.2. Top 10% most cited publications

1.2.2. Top 10% most cited publications

1.2.3. Foreign doctorate students

1.2.3. Foreign doctorate students

1.2.2. Top 10% most cited publications

1.3. Innovation-friendly environment

1.3. Innovation-friendly environment

1.3.1. Broadband penetration

1.3.1. Broadband penetration

1.2.3. Non-EU doctorate students

1.3.2. Opportunity-driven entrepreneurship

1.3.2. Opportunity-driven entrepreneurship

1.3. Finance and support

1.3.1. Public R&D expenditure

2. Investments

2. Investments

1.3.2. Venture capital expenditures

2.1. Finance and support

2.1. Finance and support

2.1.1. Public R&D expenditure

2.1.1. Public R&D expenditure

2. Firm activities

2.1.2. Venture capital expenditures

2.1.2. Venture capital expenditures

2.1. Firm investments

2.2. Firm investments

2.2. Firm investments

2.1.1. Business R&D expenditure

2.2.1. Business R&D expenditure

2.2.1. Business R&D expenditure

2.1.2. Non-R&D innovation expenditures

2.2.2. Non-R&D innovation expenditures

2.2.2. Non-R&D innovation expenditures

2.2. Linkages & entrepreneurship

2.2.3. Enterprises providing ICT training

2.2.3. Enterprises providing ICT training

2.2.1. SMEs innovating in-house

3. Innovation activities

3. Innovation activities

2.2.2. Innovative SMEs collaborating with others

3.1. Innovators

3.2. Linkages

3.1.1. SMEs with product/ process innovations

3.2.1. Innovative SMEs collaborating with others

2.2.3. Public-private co-publications

2.3. Intellectual assets

3.1.2. SMEs with marketing/ organisational innovations

3.2.2. Public-private co-publications

2.3.1 PCT patent applications

3.2.3. Private co-funding of public R&D expenditures

2.3.2. PCT patent applications in societal challenges

3.1.3. SMEs innovating in-house

3.2. Linkages

3.1.3. SMEs innovating in-house

2.3.3. Trademarks applications

3.2.1. Innovative SMEs collaborating with others

3.3. Intellectual assets

2.3.4. Design applications

3.3.1. PCT patent applications

3. Outputs

3.2.2. Public-private co-publications

3.3.2. Trademarks applications

3.1. Innovators

3.2.3. Private co-funding of public R&D expenditures

3.3.3. Design applications

3.1.1. SMEs with product/ process innovations

4. Impacts

3.3. Intellectual assets

3.1. Innovators

3.1.2. SMEs with marketing/ organisational innovations

3.3.1. PCT patent applications

3.1.1. SMEs with product/ process innovations

3.3.2. Trademarks applications

3.1.3. Employment fast-growing enterprises innovative sectors

3.3.3. Design applications

3.1.2. SMEs with marketing/ organisational innovations

4. Impacts

3.2. Economic effects

4.1. Employment impacts

4.1. Employment impacts

3.2.1. Employment in knowledge-intensive activities

4.1.1. Employment in knowledge-intensive activities

4.1.1. Employment in knowledge-intensive activities

3.2.2. Medium & high tech product exports

4.1.2. Employment fast-growing enterprises innovative sectors

4.1.2. Employment fast-growing enterprises innovative sectors

3.2.3. Knowledge-intensive services exports

4.2. Sales impact

4.2. Sales impact

4.2.1. Medium & high tech product exports

4.2.1. Medium & high tech product exports

3.2.4. Sales of new-to-market and new-to-firm innovations

4.2.2. Knowledge-intensive services exports

4.2.2. Knowledge-intensive services exports

3.2.5. License and patent revenues from abroad

4.2.3. Sales of new-to-market and new-to-firm innovations

4.2.3. Sales of new-to-market and new-to-firm innovations

The final dataset used for DEA tests is presented in Table 6.

Table 6: Final dataset.

Country names

GII

EIS

Inputs

Outputs

Inputs

Outputs

gii1

gii2

gii3

gii4

gii5

gii6

gii7

eis1

eis2

eis3

eis4

Austria

0,87

0,61

0,63

0,53

0,50

0,38

0,48

0,57

0,63

0,59

0,54

Belgium

0,81

0,60

0,57

0,52

0,49

0,33

0,47

0,62

0,56

0,56

0,55

Bulgaria

0,67

0,34

0,52

0,44

0,41

0,32

0,44

0,22

0,18

0,23

0,26

Croatia

0,69

0,37

0,56

0,42

0,35

0,25

0,38

0,23

0,38

0,21

0,27

Cyprus

0,81

0,40

0,48

0,58

0,43

0,41

0,38

0,40

0,23

0,36

0,39

Czech Republic

0,78

0,48

0,57

0,50

0,46

0,46

0,47

0,37

0,46

0,29

0,51

Denmark

0,91

0,66

0,63

0,70

0,53

0,44

0,54

0,90

0,57

0,55

0,54

Estonia

0,81

0,42

0,64

0,55

0,43

0,36

0,54

0,45

0,47

0,31

0,31

Finland

0,92

0,66

0,64

0,62

0,60

0,49

0,47

0,75

0,65

0,57

0,52

France

0,81

0,58

0,63

0,64

0,51

0,39

0,51

0,58

0,47

0,41

0,61

Germany

0,84

0,60

0,62

0,60

0,51

0,51

0,56

0,46

0,66

0,58

0,69

Greece

0,65

0,56

0,48

0,50

0,29

0,20

0,36

0,31

0,27

0,31

0,41

Hungary

0,71

0,40

0,52

0,42

0,38

0,32

0,38

0,28

0,32

0,21

0,49

Iceland

0,87

0,49

0,60

0,55

0,50

0,40

0,63

0,72

0,62

0,47

0,51

Ireland

0,88

0,55

0,62

0,55

0,55

0,56

0,51

0,58

0,41

0,35

0,85

Israel

0,68

0,57

0,58

0,62

0,62

0,50

0,44

0,48

0,51

0,46

0,63

Italy

0,72

0,46

0,62

0,53

0,40

0,36

0,43

0,34

0,27

0,35

0,46

Latvia

0,78

0,35

0,53

0,52

0,38

0,27

0,49

0,37

0,28

0,18

0,28

Lithuania

0,74

0,38

0,57

0,53

0,38

0,21

0,40

0,39

0,47

0,37

0,30

Luxembourg

0,83

0,43

0,60

0,43

0,58

0,45

0,66

0,73

0,36

0,47

0,68

Macedonia

0,69

0,30

0,42

0,47

0,35

0,19

0,34

0,18

0,18

0,10

0,41

Malta

0,78

0,42

0,61

0,45

0,49

0,37

0,56

0,33

0,22

0,38

0,48

Netherlands

0,88

0,55

0,63

0,59

0,64

0,63

0,59

0,74

0,47

0,59

0,62

Norway

0,92

0,53

0,69

0,57

0,48

0,38

0,47

0,73

0,58

0,40

0,49

Poland

0,76

0,37

0,53

0,48

0,37

0,28

0,40

0,26

0,33

0,19

0,29

Portugal

0,81

0,48

0,54

0,51

0,35

0,30

0,47

0,50

0,41

0,27

0,41

Romania

0,69

0,31

0,55

0,44

0,33

0,31

0,33

0,22

0,07

0,10

0,23

Russia

0,56

0,50

0,48

0,47

0,40

0,28

0,31

0,39

0,28

0,14

0,28

Serbia

0,68

0,34

0,50

0,39

0,29

0,25

0,29

0,19

0,43

0,19

0,44

Slovakia

0,75

0,34

0,55

0,46

0,38

0,34

0,41

0,30

0,34

0,21

0,51

Slovenia

0,81

0,49

0,55

0,43

0,43

0,28

0,46

0,54

0,44

0,43

0,44

Spain

0,76

0,49

0,64

0,59

0,38

0,36

0,44

0,47

0,33

0,27

0,41

Sweden

0,88

0,64

0,69

0,65

0,63

0,63

0,53

0,86

0,69

0,55

0,62

Switzerland

0,90

0,63

0,65

0,68

0,63

0,69

0,63

0,93

0,75

0,68

0,76

Turkey

0,51

0,38

0,46

0,48

0,29

0,28

0,43

0,23

0,53

0,20

0,34

Ukraine

0,48

0,40

0,39

0,43

0,35

0,33

0,36

0,13

0,15

0,09

0,18

United Kingdom

0,88

0,63

0,67

0,70

0,52

0,47

0,61

0,68

0,49

0,43

0,77

Note: for comparability, GII scores were divided by 100.

3) Running DEA tests for GII and EIS under variable and constant returns to scale (VRS and CRS respectively).

Note: package “TFDEA” for R software environment was used to conduct the tests. The variable names used for the tests are presented in Table 7.

Table 7: List of variables.

country

Column of country names included in the final sample

gii1

Scores for GII pillar “Institutions”. Included in input values.

gii2

Scores for GII pillar “Human capital and research”. Included in input values.

gii3

Scores for GII pillar “Infrastructure”. Included in input values.

gii4

Scores for GII pillar “Market sophistication”. Included in input values.

gii5

Scores for GII pillar “Business sophistication”. Included in input values.

gii6

Scores for GII pillar “Knowledge and technology outputs”. Included in output values.

gii7

Scores for GII pillar “Creative outputs”. Included in output values.

eis1

Scores for EIS sub-index “Framework conditions”. Included in input values.

eis2

Scores for EIS sub-index “Investments”. Included in input values.

eis3

Scores for EIS sub-index “Innovation activities”. Included in input values.

eis4

Scores for EIS sub-index “Impacts”. Included in output values.

The function "DEA" returns efficiency scores for each country, the lambda matrix (the matrix of constraint coefficients in the corresponding linear optimization problem), and radial input and output slacks. Input orientation was chosen for the tests in accordance with the assumption that countries are better able to control inputs than outputs (Nasierowski and Arcelus, 2012). The results are discussed in the Findings section.
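The calculations in this step can be reproduced approximately with the R sketch below. It assumes a data frame `dat` laid out like Table 6 (columns `country`, `gii1`-`gii7`, `eis1`-`eis4`) and, purely for illustration, uses the dea() function of the Benchmarking package (Bogetoft and Otto, 2010b) instead of the TFDEA package actually employed, so the exact call and the names of the returned fields may differ from those used in the study.

```r
library(Benchmarking)

# Input and output matrices, as defined in Table 7
x_gii <- as.matrix(dat[, c("gii1", "gii2", "gii3", "gii4", "gii5")])
y_gii <- as.matrix(dat[, c("gii6", "gii7")])
x_eis <- as.matrix(dat[, c("eis1", "eis2", "eis3")])
y_eis <- as.matrix(dat[, "eis4", drop = FALSE])

# Input-oriented DEA under variable and constant returns to scale
gii_vrs <- dea(x_gii, y_gii, RTS = "vrs", ORIENTATION = "in", SLACK = TRUE)
gii_crs <- dea(x_gii, y_gii, RTS = "crs", ORIENTATION = "in", SLACK = TRUE)
eis_vrs <- dea(x_eis, y_eis, RTS = "vrs", ORIENTATION = "in", SLACK = TRUE)
eis_crs <- dea(x_eis, y_eis, RTS = "crs", ORIENTATION = "in", SLACK = TRUE)

eff(gii_vrs)             # efficiency scores (cf. Table 8)
lambda(gii_vrs)          # lambda matrix of reference weights
peers(gii_vrs)           # reference sets (cf. Table 12)
gii_vrs$sx; gii_vrs$sy   # input and output slacks (cf. Table 14)
```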

4) Calculating scale efficiency scores;

Note: Scale efficiency = CRS efficiency/VRS efficiency (Guan and Chen, 2012).
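Reusing the objects from the sketch for step 3, the scale efficiency column of Table 8 is an element-wise ratio:

```r
scale_gii <- eff(gii_crs) / eff(gii_vrs)  # scale efficiency, GII
scale_eis <- eff(eis_crs) / eff(eis_vrs)  # scale efficiency, EIS
```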

5) Running correlation tests on VRS, CRS and scale efficiency scores for GII and EIS.

Note: “cor.test” function from R software environment was used to conduct the tests.
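A sketch of this step, again reusing the vectors defined above (ties in the scores make R warn that exact Spearman p-values cannot be computed):

```r
# Pearson (Table 9) and Spearman (Table 10) correlations for the VRS scores
cor.test(eff(gii_vrs), eff(eis_vrs), method = "pearson")
cor.test(eff(gii_vrs), eff(eis_vrs), method = "spearman")

# The same pair of tests is run for the CRS and scale efficiency scores
cor.test(scale_gii, scale_eis, method = "pearson")
cor.test(scale_gii, scale_eis, method = "spearman")
```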

6) Calculating super-efficiency scores and ranking the DMUs;

Note: “SDEA” function from R software environment was used to calculate super-efficiency scores.
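A sketch of the super-efficiency calculation and of the ranking reported in Table 12, with the Benchmarking package's sdea() standing in for the SDEA function mentioned above:

```r
gii_super <- eff(sdea(x_gii, y_gii, RTS = "vrs", ORIENTATION = "in"))
eis_super <- eff(sdea(x_eis, y_eis, RTS = "vrs", ORIENTATION = "in"))

# Higher super-efficiency means a better rank; Inf ("hyper-efficient") units rank first
gii_rank <- rank(-gii_super, ties.method = "min")
eis_rank <- rank(-eis_super, ties.method = "min")
```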

7) Running correlation tests on super-efficiency scores for GII and EIS.

Note: due to the presence of infinite values, it is impossible to run a Pearson correlation test on the super-efficiency scores. Therefore, only the Spearman rank correlation test was used.
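The reason is easy to see in R: Pearson's statistic is computed from the scores themselves, so an infinite value propagates, whereas Spearman's statistic is computed from ranks, where Inf simply becomes the largest rank (invented numbers, for illustration only):

```r
x <- c(0.9, 1.2, Inf, 0.7)  # super-efficiency scores with one hyper-efficient unit (invented)
y <- c(0.8, 1.1, 1.5, 0.6)

cor(x, y, method = "pearson")   # NaN: the mean and variance of x are not finite
cor(x, y, method = "spearman")  # defined: the ranks of x are 2, 3, 4, 1
```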

Findings

Table 8 presents VRS, CRS and scale efficiency scores of GII and EIS.

Table 8: Efficiency scores.

Country

GII

EIS

VRS

CRS

Scale

VRS

CRS

Scale

Austria

0,857369

0,822339

0,959142

0,513366

0,41534

0,809052

Belgium

0,848501

0,797888

0,940351

0,492886

0,421851

0,855879

Bulgaria

1

0,930837

0,930837

0,764698

0,606322

0,792891

Croatia

0,973639

0,829261

0,851712

0,650284

0,510974

0,785771

Cyprus

1

0,926678

0,926678

0,697368

0,645854

0,92613

Czech Republic

1

0,954291

0,954291

0,713109

0,603203

0,845878

Denmark

0,860559

0,858992

0,99818

0,423621

0,371503

0,876969

Estonia

0,980074

0,979885

0,999807

0,353418

0,302262

0,855252

Finland

0,832809

0,772285

0,927325

0,372856

0,340405

0,912966

France

0,819386

0,817168

0,997294

0,622787

0,543528

0,872736

Germany

0,967319

0,966237

0,998881

0,930011

0,654776

0,704052

Greece

1

0,832173

0,832173

0,659586

0,646877

0,980732

Hungary

0,998323

0,878076

0,879552

0,886326

0,766845

0,865196

Iceland

1

1

1

0,376165

0,349492

0,929092

Ireland

1

0,975971

0,975971

1

0,838387

0,838387

Israel

0,988997

0,946143

0,95667

0,776179

0,575744

0,741767

Italy

0,930498

0,887688

0,953992

0,754478

0,709958

0,940991

Latvia

1

1

1

0,531326

0,42042

0,791266

Lithuania

0,891491

0,789699

0,885818

0,400223

0,336798

0,841527

Luxembourg

1

1

1

0,879928

0,684079

0,777426

Macedonia

1

0,794767

0,794767

1

1

1

Malta

0,968622

0,947014

0,977692

0,958678

0,868225

0,905649

Netherlands

1

1

1

0,603981

0,517721

0,857182

Norway

0,833106

0,809029

0,971099

0,380623

0,352096

0,925052

Poland

0,931852

0,85175

0,91404

0,591973

0,487945

0,824269

Portugal

0,937893

0,936889

0,99893

0,432645

0,419986

0,970741

Romania

1

0,947327

0,947327

1

1

1

Russia

0,903699

0,7239

0,801042

0,673913

0,487805

0,723839

Serbia

1

0,828777

0,828777

1

1

1

Slovakia

0,98817

0,952783

0,964189

0,886077

0,745029

0,840817

Slovenia

0,964776

0,867294

0,898959

0,439404

0,419343

0,954347

Spain

0,944817

0,926586

0,980705

0,529186

0,501518

0,947715

Sweden

0,950192

0,932805

0,981702

0,430657

0,375387

0,871661

Switzerland

1

1

1

0,532776

0,423871

0,795589

Turkey

1

1

1

0,716446

0,64143

0,895294

Ukraine

1

0,986696

0,986696

1

0,606647

0,606647

United Kingdom

0,975461

0,923669

0,946905

0,750863

0,638749

0,850686

Tables 9 and 10 present the results of the correlation tests. According to both the Spearman and Pearson tests, only the VRS efficiency scores of GII and EIS have a statistically significant positive correlation coefficient.

Table 9: Pearson's correlation between GII and EIS efficiency scores

                      VRS             CRS             Scale
t:                    3,74            0,63            -1,31
df:                   35              35              35
P-value:              0,0007          0,53            0,2
r:                    0,54            0,11            -0,22
95% conf. interval:   [0,25; 0,73]    [-0,23; 0,42]   [-0,51; 0,12]

Table 10: Spearman's rank correlation between GII and EIS efficiency scores

                      VRS             CRS             Scale
S:                    3948,8          7002,7          10097
P-value:              0,0007          0,31            0,24
rho:                  0,53            0,17            -0,19

The weights assigned to DMUs' inputs and outputs indicate the contribution of each input and output to the final DEA score. They are presented in Table 11.
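These weights are the decision variables of the multiplier (dual) form of the DEA problem; in generic notation (not reproduced from this paper), for a DMU o under constant returns to scale they solve:

```latex
\max_{u,\,v}\; \sum_{r=1}^{s} u_r y_{ro}
\quad \text{s.t.} \quad
\sum_{i=1}^{m} v_i x_{io} = 1; \qquad
\sum_{r=1}^{s} u_r y_{rj} - \sum_{i=1}^{m} v_i x_{ij} \le 0 \ \ \text{for all } j; \qquad
u_r,\ v_i \ge 0,
```

with an additional free intercept term in the variable-returns-to-scale (BCC) version. A zero weight means that the corresponding input or output does not enter the DMU's own best-case evaluation.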

Table 11: Input and output weights under variable returns to scale

gii1

gii2

gii3

gii4

gii5

gii6

gii7

eis1

eis2

eis3

eis4

Austria

0

0

0

0,88

1,05

1,01

0,39

1,64

0,12

0

1,56

Belgium

0

0

0,37

1,09

0,45

0

0,60

1,52

0,11

0

1,45

Bulgaria

0

1,13

0

0,75

0,70

0

0,83

2,57

2,32

0

0,82

Croatia

0,28

0

0

1,54

0,45

0

0,66

4,38

0

0

0,97

Cyprus

0

0,85

0,39

0,29

0,71

1,56

0

0,19

4,04

0

2,35

Czech Republic

0

0

0

0,61

1,52

1,44

0,15

2,47

0,17

0

2,35

Denmark

0

0

1,22

0

0,44

0,47

0,97

0,08

1,62

0

0,94

Estonia

0

1,12

0

0

1,23

1,30

0,80

2,24

0

0

0,49

Finland

0

0

0,27

1,04

0,32

1,06

0

1,26

0,09

0

1,20

France

0

0

1,22

0

0,44

0,48

0,98

1,62

0,11

0

1,55

Germany

0

0

0

0,37

1,51

0,84

1,20

2,18

0

0

2,08

Greece

0,10

0

0

0,05

3,16

0

0

0,17

3,51

0

2,04

Hungary

0,01

0

0

1,66

0,80

1,53

0

3,28

0,23

0

3,12

Iceland

0

0

0

0,54

1,41

0

1,69

1,32

0,09

0

1,26

Ireland

0

0

0

0,61

1,22

1,33

0

1,64

0,12

0

1,57

Israel

1,47

0

0

0

0

1,69

0

1,94

0,14

0

1,85

Italy

0

0

0

0,64

1,68

1,59

0,14

0,17

3,52

0

2,04

Latvia

0,18

1,15

0

0

1,20

0

2,02

0

0,37

5,10

0,15

Lithuania

0,11

1,47

0

0,32

0,54

0

0,80

2,57

0

0

0,57

Luxembourg

0

0

0

0,59

1,29

0

1,52

0

2,77

0

1,53

Macedonia

0,72

1,56

0

0,09

0

0

0

1,10

4,55

0

2,45

Malta

0

0,80

0

0,82

0,60

0

0,77

0,20

4,21

0

2,44

Netherlands

0

0

0

1,22

0,44

1,59

0

0

2,15

0

1,19

Norway

0

0

0

0,87

1,04

0,99

0,38

0

1,61

0,15

0,95

Poland

0

1,12

0,11

0,64

0,60

0

0,72

3,90

0

0

0,86

Portugal

0

0

0

0,70

1,82

0

2,18

0

2,32

0,21

1,37

Romania

0

1,18

0,58

0,73

0

0,50

0

3,38

3,75

0

0

Russia

0,38

0

0

1,67

0

0

0

0

0

6,94

0,15

Serbia

0,53

0

0

0,95

0,96

0

0

5,16

0

0

2,29

Slovakia

0

1,31

0,56

0

0,62

1,10

0,58

3,09

0,22

0

2,95

Slovenia

0

0

0,43

1,25

0,52

0

0,69

0,10

2,13

0

1,23

Spain

0

0

0

0

2,60

2,09

0

0,14

2,85

0

1,65

Sweden

0,17

0

0

0,85

0,47

1,12

0

1,10

0,08

0

1,05

Switzerland

0

0

0

0

1,60

1,45

0

1,01

0,07

0

0,97

Turkey

0,53

0

0

0,95

0,96

0

0

4,35

0

0

0,96

Ukraine

0,42

0

0

1,85

0

0

0

3,38

3,75

0

0

United Kingdom

0

0

0

0

1,92

0,86

1,47

1,40

0,10

0

1,34

These weights, unlike the original ones used in GII and EIS, bring each DMU (country) to its maximal efficiency with the given inputs and outputs. The optimal sets of weights for Russia in both GII and EIS include zero weights. This means that before these inputs and outputs can contribute to improving Russia's efficiency, their slacks need to be reduced (Cooper et al., 2007). Nevertheless, even with these optimal weights Russia does not reach the efficiency frontier.

Super-efficiency scores allow ranking both efficient and inefficient units. Variable returns to scale were chosen for this calculation. Table 12 presents the results of the super-efficiency estimation. Infinite scores for hyper-efficient units are marked as ∞ in the table. Luxembourg and Switzerland are hyper-efficient in terms of GII scores, while Ireland is hyper-efficient in terms of EIS scores. Notably, both Luxembourg and Switzerland are inefficient in the EIS calculation, while Ireland is still efficient in GII. Russia holds very different positions in the super-efficiency rankings - 30th for GII and 18th for EIS.

The analysis of the lambda matrices for the GII and EIS efficiency calculations showed that for Russia, Serbia and Ukraine served as peers in the GII-based calculation, and Macedonia and Ukraine in the EIS-based one. It is noteworthy that in both cases Russia's peers are East European countries with some of the lowest scores in both indexes.

Table 12: Super-efficiency scores, ranks and reference sets

Score

Rank

Times as peers

Reference set

Country

GII

EIS

GII

EIS

GII

EIS

GII

EIS

Austria

AT

0,86

0,51

33

27

0

0

CZ, LU, RS, TR, UA

IE, MK, RS

Belgium

BE

0,85

0,49

34

28

0

0

LU, RS, TR, UA

IE, MK, RS

Bulgaria

BG

1,04

0,76

12

12

5

0

LV, LU, MK, RO, TR

MK, RO, UA

Croatia

HR

0,97

0,65

21

20

1

0

LU, RS, TR, UA

MK, RS, UA

Cyprus

CY

1,05

0,70

11

17

2

0

NL, RO, UA

MK

Czech Republic

CZ

1,01

0,71

14

16

3

0

IE, LU, RO, TR, UA

IE, MK, RO

Denmark

DK

0,86

0,42

32

32

0

0

LU, CH, TR, UA

IE, MK, RO

Estonia

EE

0,98

0,35

19

37

0

0

LU, RO, CH, TR

MK, RO, UA

Finland

FI

0,83

0,37

36

36

0

0

IE, LU, NL, UA

IE, MK, RS

France

FR

0,82

0,62

37

21

0

0

LU, CH, TR, UA

IE, MK, RS

Germany

DE

0,97

0,93

23

7

0

0

IS, LU, CH, TR

IE, RS

Greece

EL

1,01

0,66

15

19

0

0

RS, TR

IE, MK, RO

Hungary

HU

1,00

0,89

16

8

0

0

IE, LU, RS, UA

IE, MK, RS

Iceland

IS

1,09

0,38

8

35

4

0

LU, TR

IE, MK, RS

Ireland

IE

1,02

∞

13

1

4

24

CZ, NL, CH

-

Israel

IL

0,99

0,78

17

11

0

0

CH, UA

IE, MK, RS

Italy

IT

0,93

0,75

29

13

0

0

RO, CH, TR, UA

IE, MK, RO

Latvia

LV

1,07

0,53

9

25

3

0

BG, LU, TR

MK, RO, UA

Lithuania

LT

0,89

0,40

31

33

0

0

BG, LV, MK, RO, TR

MK, UA

Luxembourg

LU

∞

0,88

1

10

18

0

-

IE, RO

Macedonia

MK

1,15

1,71

5

3

4

32

RO, UA

RO, SK

Malta

MT

0,97

0,96

22

6

0

0

BG, LU, RS, TR

IE, MK, RO

Netherlands

NL

1,06

0,60

10

22

4

0

LU, RO, CH

IE, RO

Norway

NO

0,83

0,38

35

34

0

0

CZ, LU, RS, TR

IE, MK, RO

Poland

PL

0,93

0,59

28

23

0

0

BG, MK, RO, RS, TR

MK, UA

Portugal

PT

0,94

0,43

27

30

0

0

IS, LU, TR

IE, MK, RO

Romania

RO

1,11

2,24

7

2

10

16

MK, RS, SK

MK, UA

Russia

RU

0,90

0,67

30

18

0

0

RS, UA

MK, UA

Serbia

RS

1,14

1,09

6

5

11

12

HR, RS

IE, MK

Slovakia

SK

0,99

0,89

18

9

1

1

BG, LU, RO, TR

IE, MK, RS

Slovenia

SI

0,96

0,44

24

29

0

0

CY, LU, RS, TR, UA

IE, MK, RO

Spain

ES

0,94

0,53

26

26

0

0

CH, TR

IE, MK, RO

Sweden

SE

0,95

0,43

25

31

0

0

IE, NL, CH, UA

IE, MK, RS

Switzerland

CH

∞

0,53

1

24

11

0

-

IE, MK, RS

Turkey

TR

1,27

0,72

3

15

24

0

IS, LV, UA

MK, UA

Ukraine

UA

1,21

1,38

4

4

16

9

CY, CH, TR

MK

United Kingdom

UK

0,98

0,75

20

14

0

0

IS, CH, TR

IE, MK, RO

Table 13 presents the results of the Spearman correlation test on the GII and EIS super-efficiency scores. It indicates a statistically significant positive rank correlation of 0.53.

Table 13: Spearman's rank correlation between GII and EIS super-efficiency scores

Spearman's rank correlation

S:

3963,7

P-value:

0,0007

rho:

0,53

Finally, the analysis of slacks was performed. Table 14 presents the slacks for each output and input of GII and EIS. For Russia, non-zero slacks were obtained for three input values (“Human capital and research”, “Infrastructure”, “Business sophistication”) and both output values (“Knowledge and technology outputs”, “Creative outputs”) in GII and two input values (“Framework conditions”, “Investments”) in EIS. It is noteworthy that Russia is the only country in the set that got non-zero slacks in both output values of GII.

The developed countries - such as Germany, France, the United Kingdom and Belgium - that received efficiency scores lower than 1 did so mostly due to the presence of slacks in input values, especially the first two GII inputs ("Institutions" and "Human capital and research") and the third EIS input ("Innovation activities").

Table 14: Radial input and output slacks

gii1

gii2

gii3

gii4

gii5

gii6

gii7

eis1

eis2

eis3

eis4

Austria

0,03

0,10

0

0

0

0

0

0

0

0,11

0

Belgium

0,07

0,11

0

0

0

0,02

0

0

0

0,09

0

Bulgaria

0

0

0

0

0

0

0

0

0

0,08

0

Croatia

0

0,00

0,04

0

0

0,03

0

0

0,09

0,04

0

Cyprus

0

0

0

0

0

0

0

0

0

0,10

0

Czech Republic

0

0

0

0

0

0

0

0

0

0,02

0

Denmark

0,10

0,10

0

0,07

0

0

0

0

0

0,02

0

Estonia

0,14

0

0,10

0,08

0

0

0

0

0

0,01

0

Finland

0,04

0,07

0

0

0

0

0,01

0

0

0,05

0

France

0,02

0,04

0

0,03

0

0

0

0

0

0,04

0

Germany

0,06

0,06

0,02

0

0

0

0

0

0,20

0,25

0

Greece

0

0

0

0

0

0

0

0

0

0,09

0

Hungary

0

0,01

0,00

0

0

0

0,01

0

0

0,02

0

Iceland

0

0

0

0

0

0

0

0

0

0,02

0

Ireland

0

0

0

0

0

0

0

0

0

0

0

Israel

0

0,05

0,06

0,06

0,13

0

0,04

0

0

0,10

0

Italy

0,10

0,01

0,11

0

0

0

0

0

0

0,12

0

Latvia

0

0

0

0

0

0

0

0,03

0

0

0

Lithuania

0

0

0,01

0

0

0,06

0

0

0,02

0,05

0

Luxembourg

0

0

0

0

0

0

0

0,16

0

0,13

0

Macedonia

0

0

0

0

0

0

0

0

0

0

0

Malta

0,02

0

0,03

0

0

0,02

0

0

0

0,19

0

Netherlands

0

0

0

0

0

0

0

0,001

0

0,10

0

Norway

0,09

0,02

0,05

0

0

0

0

0,01

0

0

0

Poland

0,07

0

0

0

0

0,01

0

0

0,03

0,02

0

Portugal

0,20

0,05

0,03

0

0

0,00

0

0,003

0

0

0

Romania

0

0

0

0

0

0

0

0

0

0

0

Russia

0

0,06

0,02

0

0,02

0,04

0,04

0,11

0,03

0

0

Serbia

0

0

0

0

0

0

0

0

0

0

0

Slovakia

0,04

0

0

0,005

0

0

0

0

0

0,01

0

Slovenia

0,06

0,09

0

0

0

0,06

0

0

0

0,06

0

Spain

0,13

0,03

0,11

0,04

0

0

0,03

0

0

0,01

0

Sweden

0

0,03

0,04

0

0

0

0,05

0

0

0,02

0

Switzerland

0

0

0

0

0

0

0

0

0

0,05

0

Turkey

0

0

0

0

0

0

0

0

0,21

0,05

0

Ukraine

0

0

0

0

0

0

0

0

0

0

0

UK

0,03

0,10

0,06

0,11

0

0

0

0

0

0,02

0

Discussion and Conclusion

· Confirmation of hypothesis

Considering the key hypothesis of this study, the presence of a positive, statistically significant correlation between the GII and EIS efficiency and super-efficiency scores under VRS allows the H0 hypothesis to be accepted. This can be viewed as an argument in favor of the proposition stated in the introductory part: GII and EIS are based on similar efficiency concepts, and thus these composite innovation indexes can be used interchangeably in practical and policymaking activities.

The history and composition of these indexes can also be interpreted as supporting this general statement. The frameworks and purposes of the two indexes are seemingly contrasting: GII is more focused on widening the country selection and covering as many aspects of innovation as possible, including even the "elusive" creativity. Aspiring to improve innovation measurement, it employs a variety of data sources and types (both "soft" survey data and "hard" statistical data) and even sheds light on the efficiency of innovation processes within the studied economies. EIS, from the very beginning, was aimed at providing a consistent and comparable time series for tracking the progress of EU member states towards their strategic innovative development goals. It relies mostly on hard Eurostat data and preserves a methodology consistent enough to track the scores back over at least an eight-year period.

However, these indexes were not developed in total isolation from each other. On the contrary, several scholars and institutions were involved in the elaboration of both as advisory board members or statistical auditors (Hugo Hollanders, Daniel Vértesy, JRC etc.). While doing so, they might have transferred several underlying concepts or assumptions from one framework to another.

Both indexes are meant to be used in policymaking practice. As stated in the GII 2017 report, GII is designed for "identifying targeted policies and good practices that foster innovation" (Cornell University et al., 2017). EIS, on the other hand, was conceived as an innovation performance monitoring instrument in the toolkit of EU members, designed to identify their progress or lagging behind (Rodriguez et al., 2010).

The choice of indicators and their grouping in GII and EIS also show certain similarities: both indexes incorporate output as well as input dimensions of innovation activity. The output measures in both cases partly rely on data on intellectual property and high-tech exports. The inputs include education indicators (as part of the human resources dimension), R&D expenditures, and ICT development measures as "environmental" indicators. Interestingly, GII and EIS treat citation- and publication-based indicators differently: GII places them in an output sub-index, while EIS views them as input measures. Moreover, GII and EIS use the same data sources for several indicators (OECD, WIPO). The affinity of these indexes is therefore an expected result.

How should one choose between EIS and GII, then? The answer may lie in the aforementioned differences between the frameworks. GII provides a broader view of innovation and evaluates a greater number of countries. EIS keeps a consistent and "compact" methodology based on solid, verified indicators. GII is useful for creating a "big picture" and for comparing different NISs around the world. If longevity and continuity are a priority, EIS is the better choice. EIS can also be of interest to countries outside the EU that are willing to join it or that view it as a main partner or competitor.

· Study results and the existing literature

Looking at the results of the DEA tests in detail, some of them may seem counterintuitive. For instance, the maximum efficiency scores and high super-efficiency ranks of countries such as Ukraine, Serbia and Macedonia look inflated, especially while the United Kingdom, France, Denmark and other developed economies received low ranks and efficiency scores below 1. Hollanders and Celikel-Esser (2007) provided an explanation for this effect, interpreting it as a "statistical artefact" resulting from extremely low absolute input and output values. This could be overcome by identifying outlier countries and excluding them from the country set, but for the purposes of this study the largest possible selection of countries was retained.

The analysis of slacks for the inefficient developed countries showed that the main sources of their inefficiency were the institutions and human capital inputs in GII and the innovation activities input in EIS. From the GII point of view, this may indicate that these countries produce smaller innovation outputs than might be expected given their exceptionally good institutional conditions and well-developed human resources. Similarly, from the EIS point of view, this may mean that, given how intense innovation activity is in these countries, their innovation outputs could be greater if that activity were carried out more efficiently.

Interestingly, some of the results are supported by theoretical studies by reputed scholars in the field, such as Edquist and Hommen (2009). In their work they study in depth the case of Ireland - the so-called "Celtic tiger" (by analogy with the "Asian tigers") - as an example of an efficient and fast-growing innovator, and in this paper Ireland turned out to be efficient in terms of both indexes and hyper-efficient in terms of EIS. Sweden, Norway, the Netherlands, Finland and Denmark, in turn, were studied as cases of slow growth, with particular attention to the "Swedish paradox" - a situation in which high levels of R&D investment and innovation activity result in comparatively small innovation output (in other words, innovation inefficiency). All these countries, except for the Netherlands, were marked as inefficient by both the GII and EIS DEA scores. The Netherlands received an efficient score for GII, but not for EIS.

This study also confirms the importance of taking into account both sides of innovation performance - effectiveness and efficiency. Composite innovation indexes reflect only the effectiveness aspect, which is why they have been described in studies such as (Edquist and Zabala-Iturriagagoitia, 2015) as "flawed" and "misleading". Accompanied by efficiency measurements (such as DEA scores), they provide a more comprehensive view of the subject, not limited to the effectiveness aspect alone.

Therefore, although this study has its drawbacks and limitations (discussed in more detail below), it is in line with the existing literature on the topic and thus may contribute to it.

· Policy recommendations for Russia

Russia represents a specific case in this study. Its territory is nearly 22 times larger than that of the second largest country in the set (Turkey); it has the richest natural resources and, at the same time, is one of the most sparsely populated countries. It is not surprising, therefore, that innovation processes in Russia unfold in a way that is untypical for the small countries in the set. However, since the EU and neighboring countries are crucial partners for Russia, benchmarking its innovation performance against them is a useful exercise.

In this sense, including Russia in the EIS ranking is a result that may be valuable for innovation performance evaluation practice. The data sources identified to substitute for Eurostat consistently provide data of adequate quality and can be used on a regular basis for further EIS recalculation exercises.

Considering the results of the DEA tests, Russia turned out to be inefficient in terms of both indexes due to weak performance in both input and output values. The presence of slacks indicates that Russia's inputs are greater than the minimum required for the results achieved, and its outputs are lower than the maximum possible with such inputs. Input slacks should not be taken literally: they do not mean that Russia should get rid of its human capital, research infrastructure, framework conditions or R&D investments. They only imply that, in the course of the Russian NIS's activity, the existing resources and capacities are either not used or used improperly.

The positions of Russia in the super-efficiency rankings of GII and EIS are considerably different: 30th in GII and 18th in EIS. This discrepancy may be caused by the differences in the index frameworks - GII, with its abundance of infrastructural indicators not directly related to innovation, may capture more shortfalls in Russia's overall socio-economic performance, whereas EIS provides a more focused measure of Russian innovation performance.

In sum, Russia's position in terms of both the efficiency and the effectiveness of its innovation process can be described as unfavorable. The presence of East European countries with very low input values in its reference sets indicates a need to reconsider the way it uses its resources. Inefficiency in terms of both GII and EIS signifies that the inputs into the Russian innovation system are excessive for such unsatisfactory outputs: something happening inside the "black box" of NIS operation leads to unimpressive pay-offs.

· Limitations and considerations for the future

The first and most insurmountable limitation of this research is the use of substitute data sources for recalculating the EIS scores for Russia. The goodness of fit of this substitution improved significantly in 2017, when 16 indicator values for Russia were included in the EIS 2017 report. However, there is still a possibility that the data collection procedures behind some of the substituted Russian values do not fully match the original Eurostat procedures. As a result, the final EIS scores for Russia may not be fully valid.

Furthermore, as mentioned above, the presence of "statistical artefacts" in the results of the DEA tests may be explained by flaws in the country set, since DEA is highly sensitive to outlier DMUs and statistical noise. In further research this may be overcome by special data treatment procedures prior to conducting the DEA tests.

There is also a problem that frequently emerges in classical DEA tests: the presence of zero weights, which complicates interpretation. In future research it can be addressed by imposing constraints on the ratios of input and output weights, which eliminates zero weights but requires a careful choice of upper and lower bounds (Cooper et al., 2007).
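In its simplest (assurance region) form, such a restriction bounds the ratio of a pair of weights; in generic notation, for two inputs i and i' with analyst-chosen bounds L and U (both the notation and the bounds are illustrative, not taken from this study):

```latex
L_{i,i'} \;\le\; \frac{v_i}{v_{i'}} \;\le\; U_{i,i'}
```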

Another limitation is the use of correlation tests as the means of verifying the research hypothesis. The presence of a positive correlation between the DEA scores of GII and EIS in 2017 is not necessarily an indication of the equivalence of these indexes; it may be coincidental. Providing a more solid confirmation of the hypothesis requires studying time series, which calls for more complex methods. Additionally, this would allow tracing the development of both indexes and determining whether their frameworks have been converging or diverging over time.

References

1. Andersen, P., & Petersen, N. C. (1993). A Procedure for Ranking Efficient Units in Data Envelopment Analysis. Management Science, 39(10), 1261-1264.

2. Archibugi, D., Denni, M., & Filippetti, A. (2009). The technological capabilities of nations: The state of the art of synthetic indicators. Technological Forecasting and Social Change, 76(7), 917-931. https://doi.org/10.1016/j.techfore.2009.01.002

3. Banker, R. D., Charnes, A., & Cooper, W. W. (1984). Some Models for Estimating Technical and Scale Inefficiencies in Data Envelopment Analysis. Management Science, 30(9), 1078-1092.

4. Blackman, A. W., Seligman, E. J., & Sogliero, G. C. (1973). An innovation index based on factor analysis. Technological Forecasting and Social Change, 4(3), 301-316. https://doi.org/10.1016/0040-1625(73)90060-7

5. Bogetoft, P., & Otto, L. (2010). Benchmarking with DEA, SFA, and R. Springer Science & Business Media.

6. Byrnes, P., Färe, R., & Grosskopf, S. (1984). Measuring Productive Efficiency: An Application to Illinois Strip Mines. Management Science, 30(6), 671-681.

7. Cai, Y. (2011). Factors affecting the efficiency of the BRICSs' national innovation systems: A comparative study based on DEA and Panel Data Analysis. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1974368

8. Carayannis, E. G., Goletsis, Y., & Grigoroudis, E. (2017). Composite innovation metrics: MCDA and the Quadruple Innovation Helix framework. Technological Forecasting and Social Change. https://doi.org/10.1016/j.techfore.2017.03.008

9. Carayannis, E. G., & Provance, M. (2008). Measuring firm innovativeness: towards a composite innovation index built on firm innovative posture, propensity and performance attributes. International Journal of Innovation and Regional Development, 1(1), 90. https://doi.org/10.1504/IJIRD.2008.016861

10. Charnes, A., Cooper, W. W., & Rhodes, E. (1979). Measuring the efficiency of decision-making units. European Journal of Operational Research, 3(4), 339. https://doi.org/10.1016/0377-2217(79)90229-7

11. Cooper, W. W., Seiford, L. M., & Tone, K. (2007). Data envelopment analysis: a comprehensive text with models, applications, references and DEA-solver software (2. ed). New York: Springer.

12. Cornell University, INSEAD, & WIPO. (2017). The Global Innovation Index 2017: Innovation Feeding the World. Ithaca, Fontainebleau, and Geneva. Retrieved from https://www.globalinnovationindex.org/gii-2017-report

13. Crespo, N. F., & Crespo, C. F. (2016). Global innovation index: Moving beyond the absolute value of ranking with a fuzzy-set analysis. Journal of Business Research, 69(11), 5265-5271. https://doi.org/10.1016/j.jbusres.2016.04.123

14. Davis, K. E., Kingsbury, B., & Merry, S. E. (2012). Indicators as a technology of global governance. Law & Society Review, 46(1), 71-104.

15. Edquist, C. (2005). Systems of Innovation: Perspectives and Challenges. In The Oxford Handbook of Innovation (pp. 181-208). Oxford University Press.

16. Edquist, C. (2011). Design of innovation policy through diagnostic analysis: identification of systemic problems (or failures). Industrial and Corporate Change, 20(6), 1725-1753. https://doi.org/10.1093/icc/dtr060

17. Edquist, C., & Hommen, L. (2009). Small country innovation systems: globalization, change and policy in Asia and Europe. Edward Elgar Publishing. Retrieved from http://books.google.com/books?hl=en&lr=&id=s9giP7KhtJ0C&oi=fnd&pg=PR1&dq=%22Chu,+Industrial+Engineering+and+Engineering%22+%22Hommen,+CIRCLE+(Centre+for+Innovation,+Research+and%22+%22Kotilainen,+The+Research+Institute+of+the+Finnish%22+%22Novikova,+European+Commission,+DG+Regional%22+%22National+Tsing+Hua+University,+Hsinchu,%22+&ots=tt8AqFciyl&sig=NkE1A5i33tyVPFb4ZIG4WoRTZcE

18. Edquist, C., & Zabala-Iturriagagoitia, J. M. (2009). Outputs of innovation systems: a European perspective. CIRCLE WP, 14. Retrieved from https://core.ac.uk/download/pdf/6625497.pdf

19. Edquist, C., & Zabala-Iturriagagoitia, J. M. (2015). The Innovation Union Scoreboard is Flawed: The case of Sweden - not being the innovation leader of the EU. Lund University, CIRCLE - Center for Innovation, Research and Competences in the Learning Economy, (2015/16). Retrieved from https://charlesedquist.files.wordpress.com/2015/05/201516_edquist_zabalaiturriagagoitia.pdf

20. Emrouznejad, A., & Yang, G. (2018). A survey and analysis of the first 40 years of scholarly literature in DEA: 1978-2016. Socio-Economic Planning Sciences, 61, 4-8. https://doi.org/10.1016/j.seps.2017.01.008

21. Färe, R., & Grosskopf, S. (2000). Reference Guide to OnFront (The Professional Tool for Efficiency and Productivity Measurement).

22. Farrell, M. J. (1957). The Measurement of Productive Efficiency. Journal of the Royal Statistical Society. Series A (General), 120(3), 253-290. https://doi.org/10.2307/2343100

23. Filippetti, A., & Peyrache, A. (2011). The Patterns of Technological Capabilities of Countries: A Dual Approach using Composite Indicators and Data Envelopment Analysis. World Development, 39(7), 1108-1121. https://doi.org/10.1016/j.worlddev.2010.12.009

24. Godin, B. (2009). National Innovation System: The System Approach in Historical Perspective. Science, Technology, & Human Values, 34(4), 476-501. https://doi.org/10.1177/0162243908329187

25. Grupp, H., & Mogee, M. E. (2004). Indicators for national science and technology policy: how robust are composite indicators? Research Policy, 33(9), 1373-1384. https://doi.org/10.1016/j.respol.2004.09.007

26. Grupp, H., & Schubert, T. (2010). Review and new evidence on composite innovation indicators for evaluating national performance. Research Policy, 39(1), 67-78. https://doi.org/10.1016/j.respol.2009.10.002

27. Guan, J., & Chen, K. (2012). Modeling the relative efficiency of national innovation systems. Research Policy, 41(1), 102-115. https://doi.org/10.1016/j.respol.2011.07.001

28. Hjalmarsson, L., Kumbhakar, S. C., & Heshmati, A. (1996). DEA, DFA and SFA: A comparison. Journal of Productivity Analysis, 7(2-3), 303-327. https://doi.org/10.1007/BF00157046

29. Holgersson, T., & Kekezi, O. (2017). Towards a multivariate innovation index. Economics of Innovation and New Technology, 1-19. https://doi.org/10.1080/10438599.2017.1331788

30. Hollanders, H., & Es-Sadki, N. (2017). European Innovation Scoreboard 2017 - Methodology report. Maastricht Economic and Social Research Institute on Innovation and Technology - MERIT.

31. Hollanders, H. (2009). European Innovation Scoreboard (EIS): Evolution and Lessons Learnt. In Innovation Indicators for Latin America Workshop. Retrieved from http://www.liaison.uoc.gr/documents/articles/EIS_2010.pdf

32. Hollanders, H., & Celikel-Esser, F. (2007). Measuring innovation efficiency. Retrieved from https://cris.maastrichtuniversity.nl/portal/files/1522179/guid-46a68016-6c74-4576-b337-e00c41715756-ASSET1.0

33. Hollanders, H., & Janz, N. (2013). Scoreboards and indicator reports. In Handbook of Innovation Indicators and Measurement (pp. 279-296).

34. James, J. (2006). An institutional critique of recent attempts to measure technological capabilities across countries. Journal of Economic Issues, 40(3), 743-766.

35. Kotsemir, M. N. (2013). Measuring national innovation systems efficiency-a review of DEA approach. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2304735

36. Kozłowski, J. (2015). Innovation indices: the need for positioning them where they properly belong. Scientometrics, 104(3), 609-628. https://doi.org/10.1007/s11192-015-1632-4

